For example, group I elements: Li, Na, K, Cu, Ag, and Au have a single valence electron. (Figure below) These elements all have similar chemical properties. These atoms readily give away one electron to react with other elements. The ability to easily give away an electron makes these elements excellent conductors.
Periodic table group IA elements: Li, Na, and K, and group IB elements: Cu, Ag, and Au have one electron in the outer, or valence, shell, which is readily donated. Inner shell electrons: for n = 1, 2, 3, 4; 2n² = 2, 8, 18, 32.
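As a quick check of the 2n² rule quoted in the caption, the short Python snippet below (an illustrative sketch, not part of the original text) tabulates the maximum electron count for the first four shells.

for n in range(1, 5):
    # Maximum electrons per shell follows the 2n^2 rule quoted above.
    print(f"shell n={n}: capacity = {2 * n ** 2}")
# Prints 2, 8, 18, 32, matching the inner-shell counts in the caption.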
Group VIIA elements: F, Cl, Br, and I all have 7 electrons in the outer shell. These elements readily accept an electron to fill up the outer shell with a full 8 electrons. (Figure below) If these elements do accept an electron, a negative ion is formed from the neutral atom. These elements, which do not give up electrons, are insulators.
Periodic table group VIIA elements: F, Cl, Br, and I with 7 valence electrons readily accept an electron in reactions with other elements.
For example, a Cl atom accepts an electron from an Na atom to become a Cl- ion as shown in Figure below. An ion is a charged particle formed from an atom by either donating or accepting an electron. As the Na atom donates an electron, it becomes a Na+ ion. This is how Na and Cl atoms combine to form NaCl, table salt, which is actually Na+Cl-, a pair of ions. The Na+ and Cl- ions, carrying opposite charges, attract one another.
Neutral Sodium atom donates an electron to neutral Chlorine atom forming Na+ and Cl- ions.
Sodium chloride crystallizes in the cubic structure shown in Figure below. This model is not drawn to scale; it is intended to show the three-dimensional structure. The Na+Cl- ions are actually packed similarly to layers of stacked marbles. The easily drawn cubic crystal structure illustrates that a solid crystal may contain charged particles.
Group VIIIA elements: He, Ne, Ar, Kr, Xe all have 8 electrons in the valence shell. (Figure below) That is, the valence shell is complete, meaning these elements neither donate nor accept electrons. Nor do they readily participate in chemical reactions, since group VIIIA elements do not easily combine with other elements. In recent years chemists have forced Xe and Kr to form a few compounds; however, for the purposes of our discussion this is not applicable. These elements are good electrical insulators and are gases at room temperature.
Group VIIIA elements: He, Ne, Ar, Kr, Xe are largely unreactive since the valence shell is complete.
Group IVA elements: C, Si, Ge, having 4 electrons in the valence shell as shown in Figure below form compounds by sharing electrons with other elements without forming ions. This shared electron bonding is known as covalent bonding. Note that the center atom (and the others by extension) has completed its valence shell by sharing electrons. Note that the figure is a 2-d representation of bonding, which is actually 3-d. It is this group, IVA, that we are interested in for its semiconducting properties.
(a) Group IVA elements: C, Si, Ge having 4 electrons in the valence shell, (b) complete the valence shell by sharing electrons with other elements.
Crystal structure: Most inorganic substances form their atoms (or ions) into an ordered array known as a crystal. The outer electron clouds of atoms interact in an orderly manner. Even metals are composed of crystals at the microscopic level. If a metal sample is given an optical polish, then acid etched, the microscopic microcrystalline structure shows as in Figure below. It is also possible to purchase, at considerable expense, metallic single crystal specimens from specialized suppliers. Polishing and etching such a specimen discloses no microcrystalline structure. Practically all industrial metals are polycrystalline. Most modern semiconductors, on the other hand, are single crystal devices. We are primarily interested in monocrystalline structures.
(a) Metal sample, (b) polished, (c) acid etched to show microcrystalline structure.
Many metals are soft and easily deformed by the various metal working techniques. The microcrystals are deformed in metal working. Also, the valence electrons are free to move about the crystal lattice, and from crystal to crystal. The valence electrons do not belong to any particular atom, but to all atoms.
The rigid crystal structure in Figure below is composed of a regular repeating pattern of positive Na ions and negative Cl ions. The Na and Cl atoms form Na+ and Cl- ions by transferring an electron from Na to Cl, with no free electrons. Electrons are not free to move about the crystal lattice, a difference compared with a metal. Nor are the ions free. Ions are fixed in place within the crystal structure. Though, the ions are free to move about if the NaCl crystal is dissolved in water. However, the crystal no longer exists. The regular, repeating structure is gone. Evaporation of the water deposits the Na+ and Cl- ions in the form of new crystals as the oppositely charged ions attract each other. Ionic materials form crystal structures due to the strong electrostatic attraction of the oppositely charged ions.
NaCl crystal having a cubic structure.
Semiconductors in Group 14 (formerly part of Group IV) form a tetrahedral bonding pattern utilizing the s and p orbital electrons about the atom, sharing electron-pair bonds to four adjacent atoms. (Figure below(a)) Group 14 elements have four outer electrons: two in a spherical s-orbital and two in p-orbitals. One of the p-orbitals is unoccupied. The three p-orbitals hybridize with the s-orbital to form four sp³ molecular orbitals. These four electron clouds repel one another to equidistant tetrahedral spacing about the Si atom, attracted by the positive nucleus as shown in Figure below.
One s-orbital and three p-orbital electrons hybridize, forming four sp³ molecular orbitals.
Every semiconductor atom, Si, Ge, or C (diamond), is chemically bonded to four other atoms by covalent bonds, shared electron bonds. Two electrons may share an orbital if they have opposite spin quantum numbers. Thus, an unpaired electron may share an orbital with an electron from another atom. This corresponds to the overlapping of the electron clouds in Figure below(a), or bonding. Figure below(b) is one fourth of the volume of the diamond crystal structure unit cell shown in Figure below at the origin. The bonds are particularly strong in diamond, decreasing in strength going down group IV to silicon and germanium. Silicon and germanium both form crystals with a diamond structure.
(a) Tetrahedral bonding of Si atom. (b) leads to 1/4 of the cubic unit cell
The diamond unit cell is the basic crystal building block. Figure below shows four atoms (dark) bonded to four others within the volume of the cell. This is equivalent to placing one of Figure above(b) at the origin in Figure below, then placing three more on adjacent faces to fill the full cube. Six atoms fall on the middle of each of the six cube faces, showing two bonds. The other two bonds to adjacent cubes were omitted for clarity. Out of eight cube corners, four atoms bond to an atom within the cube. Where are the other four atoms bonded? The other four bond to adjacent cubes of the crystal. Keep in mind that even though four corner atoms show no bonds in the cube, all atoms within the crystal are bonded in one giant molecule. A semiconductor crystal is built up from copies of this unit cell.
Si, Ge, and C (diamond) form interleaved face centered cube.
The crystal is effectively one molecule. An atom covalent bonds to four others, which in turn bond to four others, and so on. The crystal lattice is relatively stiff resisting deformation. Few electrons free themselves for conduction about the crystal. A property of semiconductors is that once an electron is freed, a positively charged empty space develops which also contributes to conduction.
Review
• Atoms try to form a complete outer, valence, shell of 8-electrons (2-electrons for the innermost shell). Atoms may donate a few electrons to expose an underlying shell of 8, accept a few electrons to complete a shell, or share electrons to complete a shell.
• Atoms often form ordered arrays of ions or atoms in a rigid structure known as a crystal.
• A neutral atom may form a positive ion by donating an electron.
• A neutral atom may form a negative ion by accepting an electron.
• The group IVA semiconductors: C, Si, Ge crystallize into a diamond structure. Each atom in the crystal is part of a giant molecule, bonding to four other atoms.
• Most semiconductor devices are manufactured from single crystals.
Like spectators in an amphitheater moving between seats and rows, electrons may change their statuses, given the presence of available spaces for them to fit, and available energy. Since shell level is closely related to the amount of energy that an electron possesses, “leaps” between shell (and even subshell) levels require transfers of energy. If an electron is to move into a higher-order shell, it requires that additional energy be given to the electron from an external source. Using the amphitheater analogy, it takes an increase in energy for a person to move into a higher row of seats, because that person must climb to a greater height against the force of gravity. Conversely, an electron “leaping” into a lower shell gives up some of its energy, like a person jumping down into a lower row of seats, the expended energy manifesting as heat and sound.
Not all “leaps” are equal. Leaps between different shells require a substantial exchange of energy, but leaps between subshells or between orbitals require lesser exchanges.
When atoms combine to form substances, the outermost shells, subshells, and orbitals merge, providing a greater number of available energy levels for electrons to assume. When large numbers of atoms are close to each other, these available energy levels form a nearly continuous band wherein electrons may move as illustrated in Figure below.
Electron band overlap in metallic elements.
It is the width of these bands and their proximity to existing electrons that determines how mobile those electrons will be when exposed to an electric field. In metallic substances, empty bands overlap with bands containing electrons, meaning that electrons of a single atom may move to what would normally be a higher-level state with little or no additional energy imparted. Thus, the outer electrons are said to be “free,” and ready to move at the beckoning of an electric field.
Band overlap will not occur in all substances, no matter how many atoms are close to each other. In some substances, a substantial gap remains between the highest band containing electrons (the so-called valence band) and the next band, which is empty (the so-called conduction band). See Figure below. As a result, valence electrons are “bound” to their constituent atoms and cannot become mobile within the substance without a significant amount of imparted energy. These substances are electrical insulators.
Electron band separation in insulating substances.
Materials that fall within the category of semiconductors have a narrow gap between the valence and conduction bands. Thus, the amount of energy required to motivate a valence electron into the conduction band where it becomes mobile is quite modest. (Figure below)
Electron band separation in semiconducting substances, (a) multitudes of close semiconducting atoms still result in a significant band gap, (b) multitudes of close metal atoms for reference.
At low temperatures, little thermal energy is available to push valence electrons across this gap, and the semiconducting material acts more as an insulator. At higher temperatures, though, the ambient thermal energy becomes enough to force electrons across the gap, and the material will increase conduction of electricity.
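To put rough numbers on this temperature dependence, the Python sketch below evaluates the Boltzmann factor exp(-Eg/2kT), which in a simplified model scales the density of thermally generated carriers. The band gap values used for silicon (about 1.12 eV) and germanium (about 0.67 eV) are typical handbook figures assumed for illustration; they are not given in this text.

import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_factor(band_gap_ev, temp_k):
    # Simplified model: thermally generated carrier density scales
    # roughly as exp(-Eg / (2*k*T)).
    return math.exp(-band_gap_ev / (2 * K_B_EV * temp_k))

for temp in (250, 300, 350):
    si = boltzmann_factor(1.12, temp)  # silicon band gap, assumed typical value
    ge = boltzmann_factor(0.67, temp)  # germanium band gap, assumed typical value
    print(f"T = {temp} K: Si factor = {si:.2e}, Ge factor = {ge:.2e}")
# The factor grows rapidly with temperature, and the narrower germanium
# gap yields far more thermal carriers at any given temperature.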
It is difficult to predict the conductive properties of a substance by examining the electron configurations of its constituent atoms. Although the best metallic conductors of electricity (silver, copper, and gold) all have outer s subshells with a single electron, the relationship between conductivity and valence electron count is not necessarily consistent.
The electron band configurations produced by compounds of different elements defy easy association with the electron configurations of their constituent elements.
Review
• Energy is required to remove an electron from the valence band to a higher unoccupied band, a conduction band. More energy is required to move between shells, less between subshells.
• Since the valence and conduction bands overlap in metals, little energy removes an electron. Metals are excellent conductors.
• The large gap between the valence and conduction bands of an insulator requires high energy to remove an electron. Thus, insulators do not conduct.
• Semiconductors have a small non-overlapping gap between the valence and conduction bands. Pure semiconductors are neither good insulators nor conductors. Semiconductors are semi-conductive.
Figure below (a) shows four electrons in the valence shell of a semiconductor forming covalent bonds to four other atoms. This is a flattened, easier to draw, version of Figure above. All electrons of an atom are tied up in four covalent bonds, pairs of shared electrons. Electrons are not free to move about the crystal lattice. Thus, intrinsic, pure, semiconductors are relatively good insulators as compared to metals.
(a) Intrinsic semiconductor is an insulator having a complete electron shell. (b) However, thermal energy can create a few electron-hole pairs, resulting in weak conduction.
Thermal energy may occasionally free an electron from the crystal lattice as in Figure above (b). This electron is free for conduction about the crystal lattice. When the electron was freed, it left an empty spot with a positive charge in the crystal lattice known as a hole. This hole is not fixed to the lattice, but is free to move about. The free electron and hole both contribute to conduction about the crystal lattice. That is, the electron is free until it falls into a hole. This is called recombination. If an external electric field is applied to the semiconductor, the electrons and holes will conduct in opposite directions. Increasing temperature will increase the number of electrons and holes, decreasing the resistance. This is the opposite of metals, where resistance increases with temperature by increasing the collisions of electrons with the crystal lattice. The number of electrons and holes in an intrinsic semiconductor is equal. However, both carriers do not necessarily move with the same velocity with the application of an external field. Another way of stating this is that the mobility is not the same for electrons and holes.
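The point that electrons and holes have different mobilities can be made concrete with the standard conductivity expression sigma = q(n·mu_n + p·mu_p). The sketch below uses typical room-temperature values for intrinsic silicon (carrier density about 1e10 per cubic centimeter, mobilities about 1350 and 480 cm²/V·s); these numbers are assumptions for illustration, not figures from this text.

Q = 1.602e-19  # electron charge in coulombs

def conductivity(n_cm3, p_cm3, mu_n, mu_p):
    # sigma = q * (n*mu_n + p*mu_p), in S/cm for densities in cm^-3
    # and mobilities in cm^2/(V*s).
    return Q * (n_cm3 * mu_n + p_cm3 * mu_p)

# Assumed, typical room-temperature values for intrinsic silicon:
n_i = 1.0e10            # intrinsic electron = hole density, cm^-3
mu_n, mu_p = 1350, 480  # electron and hole mobilities, cm^2/(V*s)

sigma = conductivity(n_i, n_i, mu_n, mu_p)
print(f"intrinsic Si: sigma ~ {sigma:.2e} S/cm, resistivity ~ {1 / sigma:.2e} ohm-cm")
# Because mu_n > mu_p, the electrons carry more of the current than the
# holes, even though the two populations are equal in number.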
Pure semiconductors, by themselves, are not particularly useful. Though, semiconductors must be refined to a high level of purity as a starting point prior to the addition of specific impurities.
Semiconductor material, pure to 1 part in 10 billion, may have specific impurities added at approximately 1 part per 10 million to increase the number of carriers. The addition of a desired impurity to a semiconductor is known as doping. Doping increases the conductivity of a semiconductor so that it is more comparable to a metal than an insulator.
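A back-of-the-envelope calculation shows why such a tiny impurity fraction matters. Silicon contains roughly 5 × 10²² atoms per cubic centimeter (a standard figure assumed here, not stated in this text), so doping at 1 part per 10 million supplies far more carriers than thermal generation does in the intrinsic material.

si_atom_density = 5e22     # silicon atoms per cm^3 (assumed, approximate)
doping_fraction = 1e-7     # 1 part per 10 million, as stated above
intrinsic_carriers = 1e10  # intrinsic carriers per cm^3 at room temp (assumed, approximate)

dopant_density = si_atom_density * doping_fraction
print(f"dopant atoms per cm^3: {dopant_density:.1e}")             # ~5e15
print(f"increase over intrinsic: {dopant_density / intrinsic_carriers:.0e}x")
# Each dopant atom contributes roughly one carrier, so even this tiny
# impurity level raises the carrier count by several orders of magnitude.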
It is possible to increase the number of negative charge carriers within the semiconductor crystal lattice by doping with an electron donor like Phosphorus. Electron donors, also known as N-type dopants include elements from group VA of the periodic table: nitrogen, phosphorus, arsenic, and antimony. Nitrogen and phosphorus are N-type dopants for diamond. Phosphorus, arsenic, and antimony are used with silicon.
The crystal lattice in Figure below (b) contains atoms having four electrons in the outer shell, forming four covalent bonds to adjacent atoms. This is the anticipated crystal lattice. The addition of a phosphorus atom with five electrons in the outer shell introduces an extra electron into the lattice as compared with the silicon atom. The pentavalent impurity forms four covalent bonds to four silicon atoms with four of the five electrons, fitting into the lattice with one electron left over. Note that this spare electron is not strongly bonded to the lattice as the electrons of normal Si atoms are. It is free to move about the crystal lattice, not being bound to the Phosphorus lattice site. Since we have doped at one part phosphorus in 10 million silicon atoms, few free electrons were created compared with the numerous silicon atoms. However, many electrons were created compared with the fewer electron-hole pairs in intrinsic silicon. Application of an external electric field produces strong conduction in the doped semiconductor in the conduction band (above the valence band). A heavier doping level produces stronger conduction. Thus, a poorly conducting intrinsic semiconductor has been converted into a good electrical conductor.
(a) Outer shell electron configuration of donor N-type Phosphorus, Silicon (for reference), and acceptor P-type Boron. (b) N-type donor impurity creates free electron (c) P-type acceptor impurity creates hole, a positive charge carrier.
It is also possible to introduce an impurity lacking an electron as compared with silicon, having three electrons in the valence shell as compared with four for silicon. In Figure above (c), this leaves an empty spot known as a hole, a positive charge carrier. The boron atom tries to bond to four silicon atoms, but only has three electrons in the valence band. In attempting to form four covalent bonds the three electrons move around trying to form four bonds. This makes the hole appear to move. Furthermore, the trivalent atom may borrow an electron from an adjacent (or more distant) silicon atom to form four covalent bonds. However, this leaves the silicon atom deficient by one electron. In other words, the hole has moved to an adjacent (or more distant) silicon atom. Holes reside in the valence band, a level below the conduction band. Doping with an electron acceptor, an atom which may accept an electron, creates a deficiency of electrons, the same as an excess of holes. Since holes are positive charge carriers, an electron acceptor dopant is also known as a P-type dopant. The P-type dopant leaves the semiconductor with an excess of holes, positive charge carriers. The P-type elements from group IIIA of the periodic table include: boron, aluminum, gallium, and indium. Boron is used as a P-type dopant for silicon and diamond semiconductors, while indium is used with germanium.
The “marble in a tube” analogy to electron conduction in Figure below relates the movement of holes with the movement of electrons. The marbles represent electrons in a conductor, the tube. The movement of electrons from left to right as in a wire or N-type semiconductor is explained by an electron entering the tube at the left forcing the exit of an electron at the right. Conduction of N-type electrons occurs in the conduction band. Compare that with the movement of a hole in the valence band.
Marble in a tube analogy: (a) Electrons move right in the conduction band as electrons enter tube. (b) Hole moves right in the valence band as electrons move left.
For a hole to enter at the left of Figure above (b), an electron must be removed. When moving a hole left to right, the electron must be moved right to left. The first electron is ejected from the left end of the tube so that the hole may move to the right into the tube. The electron is moving in the opposite direction of the positive hole. As the hole moves farther to the right, electrons must move left to accommodate the hole. The hole is the absence of an electron in the valence band due to P-type doping. It has a localized positive charge. To move the hole in a given direction, the valence electrons move in the opposite direction.
Electron flow in an N-type semiconductor is similar to electrons moving in a metallic wire. The N-type dopant atoms will yield electrons available for conduction. These electrons, due to the dopant are known as majority carriers, for they are in the majority as compared to the very few thermal holes. If an electric field is applied across the N-type semiconductor bar in Figure below (a), electrons enter the negative (left) end of the bar, traverse the crystal lattice, and exit at right to the (+) battery terminal.
(a) N-type semiconductor with electrons moving left to right through the crystal lattice. (b) P-type semiconductor with holes moving left to right, which corresponds to electrons moving in the opposite direction.
Current flow in a P-type semiconductor is a little more difficult to explain. The P-type dopant, an electron acceptor, yields localized regions of positive charge known as holes. The majority carrier in a P-type semiconductor is the hole. While holes form at the trivalent dopant atom sites, they may move about the semiconductor bar. Note that the battery in Figure above (b) is reversed from (a). The positive battery terminal is connected to the left end of the P-type bar. Electron flow is out of the negative battery terminal, through the P-type bar, returning to the positive battery terminal. An electron leaving the positive (left) end of the semiconductor bar for the positive battery terminal leaves a hole in the semiconductor, that may move to the right. Holes traverse the crystal lattice from left to right. At the negative end of the bar an electron from the battery combines with a hole, neutralizing it. This makes room for another hole to move in at the positive end of the bar toward the right. Keep in mind that as holes move left to right, it is actually electrons moving in the opposite direction that are responsible for the apparent hole movement.
The elements used to produce semiconductors are summarized in Figure below. The oldest group IVA bulk semiconductor material, germanium, is only used to a limited extent today. Silicon based semiconductors account for about 90% of commercial production of all semiconductors. Diamond based semiconductors are a research and development activity with considerable potential at this time. Compound semiconductors not listed include silicon germanium (thin layers on Si wafers), silicon carbide and III-V compounds such as gallium arsenide. III-V compound semiconductors include: AlN, GaN, InN, AlP, AlAs, AlSb, GaP, GaAs, GaSb, InP, InAs, InSb, AlxGa1-xAs and InxGa1-xAs. Columns II and VI of the periodic table, not shown in the figure, also form compound semiconductors.
Group IIIA P-type dopants, group IV basic semiconductor materials, and group VA N-type dopants.
The main reason for the inclusion of the IIIA and VA groups in Figure above is to show the dopants used with the group IVA semiconductors. Group IIIA elements are acceptors, P-type dopants, which accept electrons leaving a hole in the crystal lattice, a positive carrier. Boron is the P-type dopant for diamond, and the most common dopant for silicon semiconductors. Indium is the P-type dopant for germanium.
Group VA elements are donors, N-type dopants, yielding a free electron. Nitrogen and Phosphorus are suitable N-type dopants for diamond. Phosphorus and arsenic are the most commonly used N-type dopants for silicon; though, antimony can be used.
Review
• Intrinsic semiconductor materials, pure to 1 part in 10 billion, are poor conductors.
• N-type semiconductor is doped with a pentavalent impurity to create free electrons. Such a material is conductive. The electron is the majority carrier.
• P-type semiconductor, doped with a trivalent impurity, has an abundance of free holes. These are positive charge carriers. The P-type material is conductive. The hole is the majority carrier.
• Most semiconductors are based on elements from group IVA of the periodic table, silicon being the most prevalent. Germanium is all but obsolete. Carbon (diamond) is being developed.
• Compound semiconductors such as silicon carbide (group IVA) and gallium arsenide (group III-V) are widely used.
However, a single semiconductor crystal manufactured with P-type material at one end and N-type material at the other in Figure below (b) has some unique properties. The P-type material has positive majority charge carriers, holes, which are free to move about the crystal lattice. The N-type material has mobile negative majority carriers, electrons. Near the junction, the N-type material electrons diffuse across the junction, combining with holes in P-type material. The region of the P-type material near the junction takes on a net negative charge because of the electrons attracted. Since electrons departed the N-type region, it takes on a localized positive charge. The thin layer of the crystal lattice between these charges has been depleted of majority carriers, thus, is known as the depletion region. It becomes nonconductive intrinsic semiconductor material. In effect, we have nearly an insulator separating the conductive P and N doped regions.
(a) Blocks of P and N semiconductor in contact have no exploitable properties. (b) Single crystal doped with P and N type impurities develops a potential barrier.
This separation of charges at the PN junction constitutes a potential barrier. This potential barrier must be overcome by an external voltage source to make the junction conduct. The formation of the junction and potential barrier happens during the manufacturing process. The magnitude of the potential barrier is a function of the materials used in manufacturing. Silicon PN junctions have a higher potential barrier than germanium junctions.
In Figure below(a) the battery is arranged so that the negative terminal supplies electrons to the N-type material. These electrons diffuse toward the junction. The positive terminal removes electrons from the P-type semiconductor, creating holes that diffuse toward the junction. If the battery voltage is great enough to overcome the junction potential (0.6V in Si), the N-type electrons and P-holes combine annihilating each other. This frees up space within the lattice for more carriers to flow toward the junction. Thus, currents of N-type and P-type majority carriers flow toward the junction. The recombination at the junction allows a battery current to flow through the PN junction diode. Such a junction is said to be forward biased.
(a) Forward battery bias repels carriers toward junction, where recombination results in battery current. (b) Reverse battery bias attracts carriers toward battery terminals, away from junction. Depletion region thickness increases. No sustained battery current flows.
If the battery polarity is reversed as in Figure above(b) majority carriers are attracted away from the junction toward the battery terminals. The positive battery terminal attracts N-type majority carriers, electrons, away from the junction. The negative terminal attracts P-type majority carriers, holes, away from the junction. This increases the thickness of the nonconducting depletion region. There is no recombination of majority carriers; thus, no conduction. This arrangement of battery polarity is called reverse bias.
The diode schematic symbol is illustrated in Figure below(b) corresponding to the doped semiconductor bar at (a). The diode is a unidirectional device. Electron current only flows in one direction, against the arrow, corresponding to forward bias. The cathode, bar, of the diode symbol corresponds to N-type semiconductor. The anode, arrow, corresponds to the P-type semiconductor. To remember this relationship, Not-pointing (bar) on the symbol corresponds to N-type semiconductor. Pointing (arrow) corresponds to P-type.
(a) Forward biased PN junction, (b) Corresponding diode schematic symbol (c) Silicon Diode I vs V characteristic curve.
If a diode is forward biased as in Figure above(a), current will increase slightly as voltage is increased from 0 V. In the case of a silicon diode a measurable current flows when the voltage approaches 0.6 V in Figure above(c). As the voltage increases past 0.6 V, current increases considerably after the knee. Increasing the voltage well beyond 0.7 V may result in high enough current to destroy the diode. The forward voltage, VF, is a characteristic of the semiconductor: 0.6 to 0.7 V for silicon, 0.2 V for germanium, a few volts for Light Emitting Diodes (LED). The forward current ranges from a few mA for point contact diodes to 100 mA for small signal diodes to tens or thousands of amperes for power diodes.
If the diode is reverse biased, only the leakage current of the intrinsic semiconductor flows. This is plotted to the left of the origin in Figure above(c). This current will only be as high as 1 µA for the most extreme conditions for silicon small signal diodes. This current does not increase appreciably with increasing reverse bias until the diode breaks down. At breakdown, the current increases so greatly that the diode will be destroyed unless a high series resistance limits current. We normally select a diode with a higher reverse voltage rating than any applied voltage to prevent this. Silicon diodes are typically available with reverse breakdown ratings of 50, 100, 200, 400, 800 V and higher. It is possible to fabricate diodes with a lower rating of a few volts for use as voltage standards.
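The shape of the curve described above can be reproduced with the standard ideal-diode (Shockley) equation, I = Is·(exp(V/Vt) - 1). This equation and the saturation current value below are a conventional first-order model chosen for illustration; they are not parameters given in this text.

import math

def diode_current(v, i_sat=1e-12, v_t=0.026):
    # Ideal-diode (Shockley) equation: I = Is * (exp(V/Vt) - 1).
    # i_sat and v_t (thermal voltage near 300 K) are illustrative assumptions.
    return i_sat * (math.exp(v / v_t) - 1)

for v in (-5.0, 0.0, 0.4, 0.6, 0.7):
    print(f"V = {v:+.1f} V  ->  I = {diode_current(v):.3e} A")
# Reverse bias yields only the tiny saturation current; forward current
# remains small until roughly 0.6 V, then climbs steeply past the knee.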
We previously mentioned that the reverse leakage current of under a µA for silicon diodes was due to conduction of the intrinsic semiconductor. This is the leakage that can be explained by theory. Thermal energy produces a few electron-hole pairs, which conduct leakage current until recombination. In actual practice this predictable current is only part of the leakage current. Much of the leakage current is due to surface conduction, related to the lack of cleanliness of the semiconductor surface. Both leakage currents increase with increasing temperature, approaching a µA for small silicon diodes.
For germanium, the leakage current is orders of magnitude higher. Since germanium semiconductors are rarely used today, this is not a problem in practice.
Review
• PN junctions are fabricated from a monocrystalline piece of semiconductor with both a P-type and N-type region in proximity at a junction.
• The transfer of electrons from the N side of the junction to holes annihilated on the P side of the junction produces a barrier voltage. This is 0.6 to 0.7 V in silicon, and varies with other semiconductors.
• A forward biased PN junction conducts a current once the barrier voltage is overcome. The external applied potential forces majority carriers toward the junction where recombination takes place, allowing current flow.
• A reverse biased PN junction conducts almost no current. The applied reverse bias attracts majority carriers away from the junction. This increases the thickness of the nonconducting depletion region.
• Reverse biased PN junctions show a temperature dependent reverse leakage current. This is less than a µA in small silicon diodes.
Selenium oxide rectifiers were used before modern power diode rectifiers became available. These and the Cu2O rectifiers were polycrystalline devices. Photoelectric cells were once made from Selenium.
Before the modern semiconductor era, an early diode application was as a radio frequency detector, which recovered audio from a radio signal. The “semiconductor” was a polycrystalline piece of the mineral galena, lead sulfide, PbS. A pointed metallic wire known as a cat whisker was brought in contact with a spot on a crystal within the polycrystalline mineral. (Figure below) The operator labored to find a “sensitive” spot on the galena by moving the cat whisker about. Presumably, there were P and N-type spots randomly distributed throughout the crystal due to the variability of uncontrolled impurities. Less often the mineral iron pyrites, fool's gold, was used, as was the mineral carborundum, silicon carbide, SiC. Another detector, part of a foxhole radio, consisted of a sharpened pencil lead bound to a bent safety pin, touching a rusty blue-blade disposable razor blade. These all required searching for a sensitive spot, easily lost because of vibration.
Crystal detector
Replacing the mineral with an N-doped semiconductor (Figure below(a) ) makes the whole surface sensitive, so that searching for a sensitive spot was no longer required. This device was perfected by G.W.Pickard in 1906. The pointed metal contact produced a localized P-type region within the semiconductor. The metal point was fixed in place, and the whole point contact diode encapsulated in a cylindrical body for mechanical and electrical stability. (Figure below(d) ) Note that the cathode bar on the schematic corresponds to the bar on the physical package.
Silicon point contact diodes made an important contribution to radar in World War II, detecting gigahertz radio frequency echo signals in the radar receiver. The concept to be made clear is that the point contact diode preceded the junction diode and modern semiconductors by several decades. Even to this day, the point contact diode is a practical means of microwave frequency detection because of its low capacitance. Germanium point contact diodes were once more readily available than they are today, being preferred for the lower 0.2 V forward voltage in some applications like self-powered crystal radios. Point contact diodes, though sensitive to a wide bandwidth, have a low current capability compared with junction diodes.
Silicon diode cross-section: (a) point contact diode, (b) junction diode, (c) schematic symbol, (d) small signal diode package.
Most diodes today are silicon junction diodes. The cross-section in Figure above(b) looks a bit more complex than a simple PN junction; though, it is still a PN junction. Starting at the cathode connection, the N+ indicates this region is heavily doped, having nothing to do with polarity. This reduces the series resistance of the diode. The N- region is lightly doped as indicated by the (-). Light doping produces a diode with a higher reverse breakdown voltage, important for high voltage power rectifier diodes. Lower voltage diodes, even low voltage power rectifiers, would have lower forward losses with heavier doping. The heaviest level of doping produces zener diodes designed for a low reverse breakdown voltage. However, heavy doping increases the reverse leakage current. The P+ region at the anode contact is heavily doped P-type semiconductor, a good contact strategy. Glass encapsulated small signal junction diodes are capable of 10's to 100's of mA of current. Plastic or ceramic encapsulated power rectifier diodes handle up to 1000's of amperes of current.
Review
• Point contact diodes have superb high-frequency characteristics, usable well into the microwave frequencies.
• Junction diodes range in size from small signal diodes to power rectifiers capable of 1000’s of amperes.
• The level of doping near the junction determines the reverse breakdown voltage. Light doping produces a high voltage diode. Heavy doping produces a lower breakdown voltage, and increases reverse leakage current. Zener diodes have a lower breakdown voltage because of heavy doping.
The bipolar junction transistor shown in Figure below(a) is an NPN three layer semiconductor sandwich with an emitter and collector at the ends, and a base in between. It is as if a third layer were added to a two layer diode. If this were the only requirement, we would have no more than a pair of back-to-back diodes. In fact, it is far easier to build a pair of back-to-back diodes. The key to the fabrication of a bipolar junction transistor is to make the middle layer, the base, as thin as possible without shorting the outside layers, the emitter and collector. We cannot overemphasize the importance of the thin base region.
The device in Figure below(a) has a pair of junctions, emitter to base and base to collector, and two depletion regions.
(a) NPN junction bipolar transistor. (b) Apply reverse bias to collector base junction.
It is customary to reverse bias the base-collector junction of a bipolar junction transistor as shown in Figure above(b). Note that this increases the width of the depletion region. The reverse bias voltage could be a few volts to tens of volts for most transistors. There is no current flow, except leakage current, in the collector circuit.
In Figure below(a), a voltage source has been added to the emitter base circuit. Normally we forward bias the emitter-base junction, overcoming the 0.6 V potential barrier. This is similar to forward biasing a junction diode. This voltage source needs to exceed 0.6 V for majority carriers (electrons for NPN) to flow from the emitter into the base becoming minority carriers in the P-type semiconductor.
If the base region were thick, as in a pair of back-to-back diodes, all the current entering the base would flow out the base lead. In our NPN transistor example, electrons leaving the emitter for the base would combine with holes in the base, making room for more holes to be created at the (+) battery terminal on the base as electrons exit.
However, the base is manufactured thin. A few majority carriers in the emitter, injected as minority carriers into the base, actually recombine. See Figure below(b). Few electrons injected by the emitter into the base of an NPN transistor fall into holes. Also, few electrons entering the base flow directly through the base to the positive battery terminal. Most of the emitter current of electrons diffuses through the thin base into the collector. Moreover, modulating the small base current produces a larger change in collector current. If the base voltage falls below approximately 0.6 V for a silicon transistor, the large emitter-collector current ceases to flow.
NPN junction bipolar transistor with reverse biased collector-base: (a) Adding forward bias to base-emitter junction, results in (b) a small base current and large emitter and collector currents.
In Figure below we take a closer look at the current amplification mechanism. We have an enlarged view of an NPN junction transistor with emphasis on the thin base region. Though not shown, we assume that external voltage sources 1) forward bias the emitter-base junction, 2) reverse bias the base-collector junction. Electrons, majority carriers, enter the emitter from the (-) battery terminal. The base current flow corresponds to electrons leaving the base terminal for the (+) battery terminal. This is but a small current compared to the emitter current.
Disposition of electrons entering base: (a) Lost due to recombination with base holes. (b) Flows out base lead. (c) Most diffuse from emitter through thin base into base-collector depletion region, and (d) are rapidly swept by the strong depletion region electric field into the collector.
Majority carriers within the N-type emitter are electrons, becoming minority carriers when entering the P-type base. These electrons face four possible fates entering the thin P-type base. A few at Figure above(a) fall into holes in the base that contribute to base current flow to the (+) battery terminal. Not shown, holes in the base may diffuse into the emitter and combine with electrons, contributing to base terminal current. Few at (b) flow on through the base to the (+) battery terminal as if the base were a resistor. Both (a) and (b) contribute to the very small base current flow. Base current is typically 1% of emitter or collector current for small signal transistors. Most of the emitter electrons diffuse right through the thin base (c) into the base-collector depletion region. Note the polarity of the depletion region surrounding the electron at (d). The strong electric field sweeps the electron rapidly into the collector. The strength of the field is proportional to the collector battery voltage. Thus 99% of the emitter current flows into the collector. It is controlled by the base current, which is 1% of the emitter current. This is a potential current gain of 99, the ratio of IC/IB , also known as beta, β.
This magic, the diffusion of 99% of the emitter carriers through the base, is only possible if the base is very thin. What would be the fate of the base minority carriers in a base 100 times thicker? One would expect the recombination rate, electrons falling into holes, to be much higher. Perhaps 99%, instead of 1%, would fall into holes, never getting to the collector. The second point to make is that the base current may control 99% of the emitter current, only if 99% of the emitter current diffuses into the collector. If it all flows out the base, no control is possible.
Another feature accounting for passing 99% of the electrons from emitter to collector is that real bipolar junction transistors use a small heavily doped emitter. The high concentration of emitter electrons forces many electrons to diffuse into the base. The lower doping concentration in the base means fewer holes diffuse into the emitter, which would increase the base current. Diffusion of carriers from emitter to base is strongly favored.
The thin base and the heavily doped emitter help keep the emitter efficiency high, 99% for example. This corresponds to 100% emitter current splitting between the base as 1% and the collector as 99%. The emitter efficiency is known as α = IC/IE.
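Since the emitter current splits between base and collector (IE = IB + IC), the two figures of merit are directly related: β = α/(1 − α). The short sketch below checks the 99%/1% split quoted above; the 10 mA emitter current is an arbitrary illustrative value, not a figure from this text.

def beta_from_alpha(alpha):
    # beta = Ic/Ib follows from alpha = Ic/Ie and Ie = Ib + Ic.
    return alpha / (1 - alpha)

i_e = 10e-3    # assumed emitter current, 10 mA, for illustration
alpha = 0.99   # 99% of emitter current reaches the collector (from the text)
i_c = alpha * i_e
i_b = i_e - i_c

print(f"Ic = {i_c * 1e3:.2f} mA, Ib = {i_b * 1e6:.0f} uA")
print(f"beta = Ic/Ib = {i_c / i_b:.0f}, alpha/(1-alpha) = {beta_from_alpha(alpha):.0f}")
# alpha = 0.99 gives beta = 99, the current gain quoted above.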
Bipolar junction transistors are available as PNP as well as NPN devices. We present a comparison of these two in Figure below. The difference is the polarity of the base emitter diode junctions, as signified by the direction of the schematic symbol emitter arrow. It points in the same direction as the anode arrow for a junction diode, against electron current flow. See diode junction, Figure previous. The point of the arrow and bar correspond to P-type and N-type semiconductors, respectively. For NPN and PNP emitters, the arrow points away and toward the base respectively. There is no schematic arrow on the collector. However, the base-collector junction is the same polarity as the base-emitter junction compared to a diode. Note, we speak of diode, not power supply, polarity.
Compare NPN transistor at (a) with the PNP transistor at (b). Note direction of emitter arrow and supply polarity.
The voltage sources for PNP transistors are reversed compared with an NPN transistor as shown in Figure above. The base-emitter junction must be forward biased in both cases. The base on a PNP transistor is biased negative (b) compared with positive (a) for an NPN. In both cases the base-collector junction is reverse biased. The PNP collector power supply is negative compared with positive for an NPN transistor.
Bipolar junction transistor: (a) discrete device cross-section, (b) schematic symbol, (c) integrated circuit cross-section.
Note that the BJT in Figure above(a) has heavy doping in the emitter as indicated by the N+ notation. The base has a normal P-dopant level. The base is much thinner than the not-to-scale cross-section shows. The collector is lightly doped as indicated by the N- notation. The collector needs to be lightly doped so that the collector-base junction will have a high breakdown voltage. This translates into a high allowable collector power supply voltage. Small signal silicon transistors have a 60-80 V breakdown voltage. Though, it may run to hundreds of volts for high voltage transistors. The collector also needs to be heavily doped to minimize ohmic losses if the transistor must handle high current. These contradicting requirements are met by doping the collector more heavily at the metallic contact area. The collector near the base is lightly doped as compared with the emitter. The heavy doping in the emitter gives the emitter-base a low approximate 7 V breakdown voltage in small signal transistors. The heavily doped emitter makes the emitter-base junction have zener diode like characteristics in reverse bias.
The BJT die, a piece of a sliced and diced semiconductor wafer, is mounted collector down to a metal case for power transistors. That is, the metal case is electrically connected to the collector. A small signal die may be encapsulated in epoxy. In power transistors, aluminum bonding wires connect the base and emitter to package leads. Small signal transistor dies may be mounted directly to the lead wires. Multiple transistors may be fabricated on a single die called an integrated circuit. Even the collector may be bonded out to a lead instead of the case. The integrated circuit may contain internal wiring of the transistors and other integrated components. The integrated BJT shown in (Figure (c) above) is much thinner than the “not to scale” drawing. The P+ region isolates multiple transistors in a single die. An aluminum metallization layer (not shown) interconnects multiple transistors and other components. The emitter region is heavily doped, N+ compared to the base and collector to improve emitter efficiency.
Discrete PNP transistors are almost as high quality as the NPN counterpart. However, integrated PNP transistors are not nearly as good as the NPN variety within the same integrated circuit die. Thus, integrated circuits use the NPN variety as much as possible.
Review
• Bipolar transistors conduct current using both electrons and holes in the same device.
• Operation of a bipolar transistor as a current amplifier requires that the collector-base junction be reverse biased and the emitter-base junction be forward biased.
• A transistor differs from a pair of back to back diodes in that the base, the center layer, is very thin. This allows majority carriers from the emitter to diffuse as minority carriers through the base into the depletion region of the base-collector junction, where the strong electric field collects them.
• Emitter efficiency is improved by heavier doping compared with the collector. Emitter efficiency: α = IC/IE, 0.99 for small signal devices
• Current gain is β=IC/IB, 100 to 300 for small signal transistors.
A field effect transistor (FET) is a unipolar device, conducting a current using only one kind of charge carrier. If based on an N-type slab of semiconductor, the carriers are electrons. Conversely, a P-type based device uses only holes.
At the circuit level, field effect transistor operation is simple. A voltage applied to the gate, input element, controls the resistance of the channel, the unipolar region between the gate regions. (Figure below) In an N-channel device, this is a lightly doped N-type slab of silicon with terminals at the ends. The source and drain terminals are analogous to the emitter and collector, respectively, of a BJT. In an N-channel device, a heavy P-type region on both sides of the center of the slab serves as a control electrode, the gate. The gate is analogous to the base of a BJT.
“Cleanliness is next to godliness” applies to the manufacture of field effect transistors. Though it is possible to make bipolar transistors outside of a clean room, it is a necessity for field effect transistors. Even in such an environment, manufacture is tricky because of contamination control issues. The unipolar field effect transistor is conceptually simple, but difficult to manufacture. Most transistors today are a metal oxide semiconductor variety (later section) of the field effect transistor contained within integrated circuits. However, discrete JFET devices are available.
Junction field effect transistor cross-section.
A properly biased N-channel junction field effect transistor (JFET) is shown in Figure above. The gate constitutes a diode junction to the source to drain semiconductor slab. The gate is reverse biased. If a voltage (or an ohmmeter) were applied between the source and drain, the N-type bar would conduct in either direction because of the doping. Neither gate nor gate bias is required for conduction. If a gate junction is formed as shown, conduction can be controlled by the degree of reverse bias.
Figure below(a) shows the depletion region at the gate junction. This is due to diffusion of holes from the P-type gate region into the N-type channel, giving the charge separation about the junction, with a non-conductive depletion region at the junction. The depletion region extends more deeply into the channel side due to the heavy gate doping and light channel doping.
N-channel JFET: (a) Depletion at gate diode. (b) Reverse biased gate diode increases depletion region. (c) Increasing reverse bias enlarges depletion region. (d) Increasing reverse bias pinches-off the S-D channel.
The thickness of the depletion region can be increased (Figure above(b)) by applying moderate reverse bias. This increases the resistance of the source to drain channel by narrowing the channel. Increasing the reverse bias at (c) increases the depletion region, decreases the channel width, and increases the channel resistance. Increasing the reverse bias VGS at (d) will pinch-off the channel current. The channel resistance will be very high. This VGS at which pinch-off occurs is VP, the pinch-off voltage. It is typically a few volts. In summary, the channel resistance can be controlled by the degree of reverse biasing on the gate.
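A common first-order description of this gate control is the JFET square-law relation, ID ≈ IDSS·(1 − VGS/VP)², valid between zero gate bias and pinch-off. The relation and the parameter values below are a conventional approximation assumed for illustration, not figures from this text.

def jfet_drain_current(v_gs, i_dss=10e-3, v_p=-4.0):
    # First-order N-channel JFET model: Id = Idss * (1 - Vgs/Vp)^2 for
    # Vp < Vgs <= 0, and zero once the channel is pinched off.
    # i_dss and v_p are assumed example values.
    if v_gs <= v_p:
        return 0.0  # channel pinched off
    return i_dss * (1 - v_gs / v_p) ** 2

for v_gs in (0.0, -1.0, -2.0, -3.0, -4.0):
    print(f"Vgs = {v_gs:+.1f} V  ->  Id = {jfet_drain_current(v_gs) * 1e3:.2f} mA")
# Increasing reverse gate bias narrows the channel and reduces the drain
# current, which falls to zero at the pinch-off voltage Vp.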
The source and drain are interchangeable, and the source to drain current may flow in either direction for low level drain battery voltage (< 0.6 V). That is, the drain battery may be replaced by a low voltage AC source. For a high drain power supply voltage, up to 10's of volts for small signal devices, the polarity must be as indicated in Figure below(a). This drain power supply, not shown in previous figures, distorts the depletion region, enlarging it on the drain side of the gate. This is a more correct representation for common DC drain supply voltages, from a few to tens of volts. As drain voltage VDS is increased, the gate depletion region expands toward the drain. This increases the length of the narrow channel, increasing its resistance a little. We say “a little” because large resistance changes are due to changing gate bias. Figure below(b) shows the schematic symbol for an N-channel field effect transistor compared to the silicon cross-section at (a). The gate arrow points in the same direction as a junction diode. The “pointing” arrow and “non-pointing” bar correspond to P and N-type semiconductors, respectively.
N-channel JFET electron current flow from source to drain in (a) cross-section, (b) schematic symbol.
Figure above shows a large electron current flow from (-) battery terminal, to FET source, out the drain, returning to the (+) battery terminal. This current flow may be controlled by varying the gate voltage. A load in series with the battery sees an amplified version of the changing gate voltage.
P-channel field effect transistors are also available. The channel is made of P-type material. The gate is a heavily doped N-type region. All the voltage sources are reversed in the P-channel circuit (Figure below) as compared with the more popular N-channel device. Also note, the arrow points out of the gate of the schematic symbol (b) of the P-channel field effect transistor.
P-channel JFET: (a) N-type gate, P-type channel, reversed voltage sources compared with N-channel device. (b) Note reversed gate arrow and voltage sources on schematic.
As the positive gate bias voltage is increased, the resistance of the P-channel increases, decreasing the current flow in the drain circuit.
Discrete devices are manufactured with the cross-section shown in Figure below. The cross-section, oriented so that it corresponds to the schematic symbol, is upside down with respect to a semiconductor wafer. That is, the gate connections are on the top of the wafer. The gate is heavily doped, P+, to diffuse holes well into the channel for a large depletion region. The source and drain connections in this N-channel device are heavily doped, N+ to lower connection resistance. However, the channel surrounding the gate is lightly doped to allow holes from the gate to diffuse deeply into the channel. That is the N- region.
Junction field effect transistor: (a) Discrete device cross-section, (b) schematic symbol, (c) integrated circuit device cross-section.
All three FET terminals are available on the top of the die for the integrated circuit version so that a metallization layer (not shown) can interconnect multiple components. (Figure above(c)) Integrated circuit FET's are used in analog circuits for their high gate input resistance. The N-channel region under the gate must be very thin so that the intrinsic region about the gate can control and pinch-off the channel. Thus, gate regions on both sides of the channel are not necessary.
Junction field effect transistor (static induction type): (a) Cross-section, (b) schematic symbol.
The static induction field effect transistor (SIT) is a short channel device with a buried gate. (Figure above) It is a power device, as opposed to a small signal device. The low gate resistance and low gate to source capacitance make for a fast switching device. The SIT is capable of hundreds of amps and thousands of volts, and is said to be capable of an incredible frequency of 10 GHz.
Metal semiconductor field effect transistor (MESFET): (a) schematic symbol, (b) cross-section.
The Metal semiconductor field effect transistor (MESFET) is similar to a JFET except the gate is a Schottky diode instead of a junction diode. A Schottky diode is a metal rectifying contact to a semiconductor, compared with a more common ohmic contact. In Figure above the source and drain are heavily doped (N+). The channel is lightly doped (N-). MESFET's are higher speed than JFET's. The MESFET is a depletion mode device, normally on, like a JFET. They are used as microwave power amplifiers up to 30 GHz. MESFET's can be fabricated from silicon, gallium arsenide, indium phosphide, silicon carbide, and the diamond allotrope of carbon.
Review
• The unipolar junction field effect transistor (FET or JFET) is so called because conduction in the channel is due to one type of carrier.
• The JFET source, gate, and drain correspond to the BJT’s emitter, base, and collector, respectively.
• Application of reverse bias to the gate varies the channel resistance by expanding the gate diode depletion region.
The MOSFET has source, gate, and drain terminals like the FET. However, the gate lead does not make a direct connection to the silicon compared with the case for the FET. The MOSFET gate is a metallic or polysilicon layer atop a silicon dioxide insulator. The gate bears a resemblance to a metal oxide semiconductor (MOS) capacitor in Figure below. When charged, the plates of the capacitor take on the charge polarity of the respective battery terminals. The lower plate is P-type silicon from which electrons are repelled by the negative (-) battery terminal toward the oxide, and attracted by the positive (+) top plate. This excess of electrons near the oxide creates an inverted (excess of electrons) channel under the oxide. This channel is also accompanied by a depletion region isolating the channel from the bulk silicon substrate.
N-channel MOS capacitor: (a) no charge, (b) charged.
In Figure below (a) the MOS capacitor is placed between a pair of N-type diffusions in a P-type substrate. With no charge on the capacitor, no bias on the gate, the N-type diffusions, the source and drain, remain electrically isolated.
N-channel MOSFET (enhancement type): (a) 0 V gate bias, (b) positive gate bias.
A positive bias applied to the gate charges the capacitor (the gate). The gate atop the oxide takes on a positive charge from the gate bias battery. The P-type substrate below the gate takes on a negative charge. An inversion region with an excess of electrons forms below the gate oxide. This region now connects the source and drain N-type regions, forming a continuous N-region from source to drain. Thus, the MOSFET, like the FET, is a unipolar device. One type of charge carrier is responsible for conduction. This example is an N-channel MOSFET. Conduction of a large current from source to drain is possible with a voltage applied between these connections. A practical circuit would have a load in series with the drain battery in Figure above (b).
The MOSFET described above in Figure above is known as an enhancement mode MOSFET. The non-conducting, off, channel is turned on by enhancing the channel below the gate by application of a bias. This is the most common kind of device. The other kind of MOSFET will not be described here. See the Insulated-gate field-effect transistor chapter for the depletion mode device.
The MOSFET, like the FET, is a voltage controlled device. A voltage input to the gate controls the flow of current from source to drain. The gate does not draw a continuous current. Though, the gate draws a surge of current to charge the gate capacitance.
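The size of this surge can be estimated from the gate capacitance. As a rough sketch, assuming a gate capacitance of 1 nF switched through 10 V in 100 ns (all three values are assumed here for illustration only):

\[I \approx C\frac{\Delta V}{\Delta t} = (1 \times 10^{-9}\ \text{F})\frac{10\ \text{V}}{100 \times 10^{-9}\ \text{s}} = 100\ \text{mA}\]

Between switching events the average gate current falls back to essentially zero, leakage excepted.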
The cross-section of an N-channel discrete MOSFET is shown in Figure below (a). Discrete devices are usually optimized for high power switching. The N+ indicates that the source and drain are heavily N-type doped. This minimizes resistive losses in the high current path from source to drain. The N- indicates light doping. The P-region under the gate, between source and drain, can be inverted by application of a positive bias voltage. The doping profile is a cross-section, which may be laid out in a serpentine pattern on the silicon die. This greatly increases the area, and consequently, the current handling ability.
N-channel MOSFET (enhancement type): (a) Cross-section, (b) schematic symbol.
The MOSFET schematic symbol in Figure above (b) shows a “floating” gate, indicating no direct connection to the silicon substrate. The broken line from source to drain indicates that this device is off, not conducting, with zero bias on the gate. A normally “off” MOSFET is an enhancement mode device. The channel must be enhanced by application of a bias to the gate for conduction. The “pointing” end of the substrate arrow corresponds to P-type material, which points toward an N-type channel, the “non-pointing” end. This is the symbol for an N-channel MOSFET. The arrow points in the opposite direction for a P-channel device (not shown). MOSFET’s are four terminal devices: source, gate, drain, and substrate. The substrate is connected to the source in discrete MOSFET’s, making the packaged part a three terminal device. MOSFET’s, that are part of an integrated circuit, have the substrate common to all devices, unless purposely isolated. This common connection may be bonded out of the die for connection to a ground or power supply bias voltage.
N-channel “V-MOS” transistor: (a) Cross-section, (b) schematic symbol.
The V-MOS device in (Figure above) is an improved power MOSFET with the doping profile arranged for lower on-state source to drain resistance. VMOS takes its name from the V-shaped gate region, which increases the cross-sectional area of the source-drain path. This minimizes losses and allows switching of higher levels of power. UMOS, a variation using a U-shaped groove, is more reproducible in manufacture.
Review
• MOSFET’s are unipolar conduction devices, conduction with one type of charge carrier, like a FET, but unlike a BJT.
• A MOSFET is a voltage controlled device like a FET. A gate voltage input controls the source to drain current.
• The MOSFET gate draws no continuous current, except leakage. However, a considerable initial surge of current is required to charge the gate capacitance.
Shockley proposed the four layer diode thyristor in 1950. It was not realized until years later at General Electric. SCR’s are now available to handle power levels spanning watts to megawatts. The smallest devices, packaged like small-signal transistors, switch 100’s of milliamps at near 100 VAC. The largest packaged devices are 172 mm in diameter, switching 5600 Amps at 10,000 VAC. The highest power SCR’s may consist of a whole semiconductor wafer several inches in diameter (100’s of mm).
Silicon controlled rectifier (SCR): (a) doping profile, (b) BJT equivalent circuit.
The silicon controlled rectifier is a four layer diode with a gate connection as in Figure above (a). When turned on, it conducts like a diode, for one polarity of current. If not triggered on, it is nonconducting. Operation is explained in terms of the compound connected transistor equivalent in Figure above (b). A positive trigger signal is applied between the gate and cathode terminals. This causes the NPN equivalent transistor to conduct. The collector of the conducting NPN transistor pulls low, moving the PNP base towards its collector voltage, which causes the PNP to conduct. The collector of the conducting PNP pulls high, moving the NPN base in the direction of its collector. This positive feedback (regeneration) reinforces the NPN’s already conducting state. Moreover, the NPN will now conduct even in the absence of a gate signal. Once an SCR conducts, it continues for as long as a positive anode voltage is present. For the DC battery shown, this is forever. However, SCR’s are most often used with an alternating current or pulsating DC supply. Conduction ceases with the expiration of the positive half of the sinewave at the anode. Moreover, most practical SCR circuits depend on the AC cycle going to zero to cutoff or commutate the SCR.
Figure below (a) shows the doping profile of an SCR. Note that the cathode, which corresponds to an equivalent emitter of an NPN transistor is heavily doped as N+ indicates. The anode is also heavily doped (P+). It is the equivalent emitter of a PNP transistor. The two middle layers, corresponding to base and collector regions of the equivalent transistors, are less heavily doped: N- and P. This profile in high power SCR’s may be spread across a whole semiconductor wafer of substantial diameter.
Thyristors: (a) Cross-section, (b) silicon controlled rectifier (SCR) symbol, (c) gate turn-off thyristor (GTO) symbol.
The schematic symbols for an SCR and GTO are shown in Figures above (b & c). The basic diode symbol indicates that cathode to anode conduction is unidirectional like a diode. The addition of a gate lead indicates control of diode conduction. The gate turn off switch (GTO) has bidirectional arrows about the gate lead, indicating that the conduction can be disabled by a negative pulse, as well as initiated by a positive pulse.
In addition to the ubiquitous silicon based SCR’s, experimental silicon carbide devices have been produced. Silicon carbide (SiC) operates at higher temperatures, and conducts heat better than any metal, second only to diamond. This should allow for either physically smaller or higher power capable devices.
Review
• SCR’s are the most prevalent member of the thyristor four layer diode family.
• A positive pulse applied to the gate of an SCR triggers it into conduction. Conduction continues even if the gate pulse is removed. Conduction only ceases when the anode to cathode voltage drops to zero.
• SCR’s are most often used with an AC supply (or pulsating DC) because of the continuous conduction.
• A gate turn off switch (GTO) may be turned off by application of a negative pulse to the gate.
• SCR’s switch megawatts of power, up to 5600 A and 10,000 V.
Silicon is the second most common element in the Earth’s crust in the form of silicon dioxide, \(\ce{SiO2}\), otherwise known as silica sand. Silicon is freed from silicon dioxide by reduction with carbon in an electric arc furnace.
\[\ce{SiO2 + C -> CO2 + Si}\]
Such metallurgical grade silicon is suitable for use in silicon steel transformer laminations, but not nearly pure enough for semiconductor applications. Conversion to the chloride \(\ce{SiCl4}\) (or \(\ce{SiHCl3}\)) allows purification by fractional distillation. Reduction by ultrapure zinc or magnesium yields sponge silicon, requiring further purification. Or, thermal decomposition on a hot polycrystalline silicon rod heater by hydrogen yields ultra pure silicon.
\[\ce{Si + 3HCl -> SiHCl3 + H2}\]
\[\ce{SiHCl3 + H2 -> Si + 3HCl}\]
The polycrystalline silicon is melted in a fused silica crucible heated by an induction heated graphite susceptor. The graphite heater may alternatively be directly driven by a low voltage at high current. In the Czochralski process, the silicon melt is solidified on to a pencil sized monocrystal silicon rod of the desired crystal lattice orientation. (Figure below) The rod is rotated and pulled upward at a rate to encourage the diameter to expand to several inches. Once this diameter is attained, the boule is automatically pulled at a rate to maintain a constant diameter to a length of a few feet. Dopants may be added to the crucible melt to create, for example, a P-type semiconductor. The growing apparatus is enclosed within an inert atmosphere.
Czochralski monocrystalline silicon growth.
The finished boule is ground to a precise final diameter, and the ends trimmed. The boule is sliced into wafers by an inside diameter diamond saw. The wafers are ground flat and polished. The wafers could have an N-type epitaxial layer grown atop the wafer by thermal deposition for higher quality. Wafers at this stage of manufacture are delivered by the silicon wafer manufacturer to the semiconductor manufacturer.
Silicon boule is diamond sawed into wafers.
The processing of semiconductors involves photo lithography, a process for making metal lithographic printing plates by acid etching. The electronics based version of this is the processing of copper printed circuit boards. This is reviewed in Figure below as an easy introduction to the photo lithography involved in semiconductor processing.
Processing of copper printed circuit boards is similar to the photo lithographic steps of semiconductor processing.
We start with a copper foil laminated to an epoxy fiberglass board in Figure above (a). We also need positive artwork with black lines corresponding to the copper wiring lines and pads that are to remain on the finished board. Positive artwork is required because positive acting resist is used. Though, negative resist is available for both circuit boards and semiconductor processing. At (b) the liquid positive photo resist is applied to the copper face of the printed circuit board (PCB). It is allowed to dry and may be baked in an oven. The artwork may be a plastic film positive reproduction of the original artwork scaled to the required size. The artwork is placed in contact with the circuit board under a glass plate at (c). The board is exposed to ultraviolet light (d) to form a latent image of softened photo resist. The artwork is removed (e) and the softened resist washed away by an alkaline solution (f). The rinsed and dried (baked) circuit board has a hardened resist image atop the copper lines and pads that are to remain after etching. The board is immersed in the etchant (g) to remove copper not protected by hardened resist. The etched board is rinsed and the resist removed by a solvent.
The major difference in the patterning of semiconductors is that a silicon dioxide layer atop the wafer takes the place of the resist during the high-temperature processing steps. Though, the resist is required in low-temperature wet processing to pattern the silicon dioxide.
An N-type doped silicon wafer in Figure below (a) is the starting material in the manufacture of semiconductor junctions. A silicon dioxide layer (b) is grown atop the wafer in the presence of oxygen or water vapor at high temperature (over 1000°C) in a diffusion furnace. A pool of resist is applied to the center of the cooled wafer, then spun in a vacuum chuck to evenly distribute the resist. The baked on resist (c) has a chrome on glass mask applied to the wafer at (d). This mask contains a pattern of windows which is exposed to ultraviolet light (e).
Manufacture of a silicon diode junction.
After the mask is removed in Figure above (f), the positive resist can be developed (g) in an alkaline solution, opening windows in the UV softened resist. The purpose of the resist is to protect the silicon dioxide from the hydrofluoric acid etch (h), leaving only open windows corresponding to the mask openings. The remaining resist (i) is stripped from the wafer before returning to the diffusion furnace. The wafer is exposed to a gaseous P-type dopant at high temperature in a diffusion furnace (j). The dopant only diffuses into the silicon through the openings in the silicon dioxide layer. Each P-diffusion through an opening produces a PN junction. If diodes were the desired product, the wafer would be diamond scribed and broken into individual diode chips. However, the whole wafer may be processed further into bipolar junction transistors.
To convert the diodes into transistors, a small N-type diffusion in the middle of the existing P-region is required. Repeating the previous steps with a mask having smaller openings accomplishes this. Though not shown in Figure above (j), an oxide layer was probably formed in that step during the P-diffusion. The oxide layer over the P-diffusion is shown in Figure below (k). Positive photo resist is applied and dried (l). The chrome on glass emitter mask is applied (m), and UV exposed (n). The mask is removed (o). The UV softened resist in the emitter opening is removed with an alkaline solution (p). The exposed silicon dioxide is etched away with hydrofluoric acid (HF) at (q).
Manufacture of a bipolar junction transistor, continuation of Manufacture of a silicon diode junction.
After the unexposed resist is stripped from the wafer (r), it is placed in a diffusion furnace (Figure above (s)) for high-temperature processing. An N-type gaseous dopant, such as phosphorus oxychloride (POCl3), diffuses through the small emitter window in the oxide (s). This creates NPN layers corresponding to the emitter, base, and collector of a BJT. It is important that the N-type emitter not be driven all the way through the P-type base, shorting the emitter and collector. The base region between the emitter and collector also needs to be thin so that the transistor has a useful β. Otherwise, a thick base region could form a pair of diodes rather than a transistor. At (t) metallization is shown making contact with the transistor regions. This requires a repeat of the previous steps (not shown here) with a mask for contact openings through the oxide. Another repeat with another mask defines the metallization pattern atop the oxide and contacting the transistor regions through the openings.
The metalization could connect numerous transistors and other components into an integrated circuit. Though, only one transistor is shown. The finished wafer is diamond scribed and broken into individual dies for packaging. Fine gauge aluminum wire bonds the metallized contacts on the die to a lead frame, which brings the contacts out of the final package.
Review
• Most semiconductors are based on ultra pure silicon because it forms a glass oxide atop the wafer. This oxide can be patterned with photo lithography, making complex integrated circuits possible.
• Sausage shaped single crystals of silicon are grown by the Czochralski process. These are diamond sawed into wafers.
• The patterning of silicon wafers by photo lithography is similar to patterning copper printed circuit boards. Photo resist is applied to the wafer, which is exposed to UV light through a mask. The resist is developed, then the wafer is etched.
• Hydrofluoric acid etching opens windows in the protective silicon dioxide atop the wafer.
• Exposure to gaseous dopants at high temperature produces semiconductor junctions as defined by the openings in the silicon dioxide layer.
• The photo lithography is repeated for more diffusions, contacts, and metalization.
• The metalization may interconnect multiple components into an integrated circuit.
Superconductivity: Heike Kamerlingh Onnes discovered superconductivity in mercury (Hg) in 1911, for which he won a Nobel prize. Most metals decrease electrical resistance with decreasing temperature. Though, most do not decrease to zero resistance as 0 Kelvin is approached. Mercury is unique in that its resistance abruptly drops to zero Ω at 4.2 K. Superconductors lose all resistance abruptly when cooled below their critical temperature, Tc. A property of superconductivity is no power loss in conductors. Current may flow in a loop of superconducting wire for thousands of years. Superconductors include lead (Pb), aluminum (Al), tin (Sn), and niobium (Nb).
Cooper pair: Lossless conduction in superconductors is not by ordinary electron flow. Electron flow in normal conductors encounters opposition as collisions with the rigid ionic metal crystal lattice. Decreasing vibrations of the crystal lattice with decreasing temperature accounts for decreasing resistance– up to a point. Lattice vibrations cease at absolute zero, but not the energy dissipating collisions of electrons with the lattice. Thus, normal conductors do not lose all resistance at absolute zero.
Electrons in superconductors form pairs called Cooper pairs as the temperature drops below the critical temperature at which superconductivity begins. The Cooper pair exists because it is at a lower energy level than unpaired electrons. The electrons are attracted to each other due to the exchange of phonons, very low energy particles related to lattice vibrations. This Cooper pair, a quantum mechanical entity (particle or wave), is not subject to the normal laws of physics. This entity propagates through the lattice without encountering the metal ions comprising the fixed lattice. Thus, it dissipates no energy. The quantum mechanical nature of the Cooper pair only allows it to exchange discrete amounts of energy, not continuously variable amounts. An absolute minimum quantum of energy is acceptable to the Cooper pair. If the vibrational energy of the crystal lattice is less (due to the low temperature), the Cooper pair cannot accept it and cannot be scattered by the lattice. Thus, below the critical temperature, the Cooper pairs flow unimpeded through the lattice.
Josephson junctions: Brian Josephson won a Nobel prize for his 1962 prediction of the Josephson junction. A Josephson junction is a pair of superconductors bridged by a thin insulator, as in Figure below(a), through which electrons can tunnel. The first Josephson junctions were lead superconductors bridged by an insulator. These days a tri-layer of aluminum and niobium is preferred. Electrons can tunnel through the insulator even with zero voltage applied across the superconductors.
If a voltage is applied across the junction, the current decreases and oscillates at a high frequency proportional to voltage. The relationship between applied voltage and frequency is so precise that the standard volt is now defined in terms of Josephson junction oscillation frequency. The Josephson junction can also serve as a hyper-sensitive detector of low level magnetic fields. It is also very sensitive to electromagnetic radiation from microwaves to gamma rays.
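The underlying voltage-to-frequency relationship is the Josephson relation, which involves only fundamental constants:

\[f = \frac{2e}{h}V \approx 483.6\ \text{MHz per microvolt of applied voltage}\]

where e is the electron charge and h is Planck’s constant. Because 2e/h is a constant of nature, measuring the oscillation frequency reproduces the applied voltage with extreme precision.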
(a) Josephson junction, (b) Josephson transistor.
Josephson transistor: An electrode close to the oxide of the Josephson junction can influence the junction by capacitive coupling. Such an assembly in Figure above (b) is a Josephson transistor. A major feature of the Josephson transistor is low power dissipation applicable to high-density circuitry, for example, computers. This transistor is generally part of a more complex superconducting device like a SQUID or RSFQ.
SQUID: A Superconducting quantum interference device or SQUID is an assembly of Josephson junctions within a superconducting ring. Only the DC SQUID is considered in this discussion. This device is highly sensitive to low level magnetic fields.
A constant current bias is forced across the ring in parallel with both Josephson junctions in Figure below. The current divides equally between the two junctions in the absence of an applied magnetic field, and no voltage is developed across the ring. [JBc] While any value of magnetic flux (Φ) may be applied to the SQUID, only a quantized value (a multiple of the flux quantum) can flow through the opening in the superconducting ring. If the applied flux is not an exact multiple of the flux quantum, the excess flux is cancelled by a circulating current around the ring which produces a fractional flux quantum. The circulating current will flow in the direction which cancels any excess flux above a multiple of the flux quantum. It may either add to or subtract from the applied flux, up to ±(1/2) of a flux quantum. If the circulating current flows clockwise, the current adds to the top Josephson junction and subtracts from the lower one. Changing the applied flux linearly causes the circulating current to vary as a sinusoid. This can be measured as a voltage across the SQUID. As the applied magnetic field is increased, a voltage pulse may be counted for each increase by one flux quantum.
Superconducting quantum interference device (SQUID): Josephson junction pair within a superconducting ring. A change in flux produces a voltage variation across the JJ pair.
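For reference, the magnetic flux quantum is also set by fundamental constants:

\[\Phi_0 = \frac{h}{2e} \approx 2.07 \times 10^{-15}\ \text{Wb}\]

which is why a device that resolves fractions of one flux quantum makes such a sensitive magnetometer.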
A SQUID is said to be sensitive to 10^-14 Tesla. It can detect the magnetic field of neural currents in the brain at 10^-13 Tesla. Compare this with the 30 x 10^-6 Tesla strength of the Earth’s magnetic field.
Rapid single flux quantum (RSFQ): Rather than mimic silicon semiconductor circuits, RSFQ circuits rely upon new concepts: magnetic flux quantization within a superconductor and movement of the flux quanta produces a picosecond quantized voltage pulse. Magnetic flux can only exist within a section of superconductor quantized in discrete multiples. The lowest flux quanta allowed is employed. The pulses are switched by Josephson junctions instead of conventional transistors. The superconductors are based on a triple layer of aluminum and niobium with a critical temperature of 9.5 K, cooled to 5 K.
RSFQ’s operate at over 100 GHz with very little power dissipation. Manufacture is simple with existing photolithographic techniques. Though, operation requires refrigeration down to 5 K. Real world commercial applications include analog-to-digital and digital-to-analog converters, toggle flip-flops, shift registers, memory, adders, and multipliers.
High temperature superconductors: High temperature superconductors are compounds exhibiting superconductivity above the liquid nitrogen boiling point of 77 K. This is significant because liquid nitrogen is readily available and inexpensive. Most conventional superconductors are metals; widely used high temperature superconductors are cuprates, mixed oxides of copper (Cu), for example YBa2Cu3O7-x, with a critical temperature Tc = 90 K. A list of others is available. Most of the devices described in this section are being developed in high-temperature superconductor versions for less critical applications. Though they do not have the performance of the conventional metal superconductor devices, the liquid nitrogen cooling is more readily available.
Review
• Most metals decrease resistance as they approach absolute zero; though, the resistance does not drop to zero. Superconductors experience a rapid drop to zero resistance at their critical temperature on being cooled. Typically Tc is within 10 K of absolute zero.
• A Cooper pair, electron pair, a quantum mechanical entity, moves unimpeded through the metal crystal lattice.
• Electrons are able to tunnel through a Josephson junction, an insulating gap across a pair of superconductors.
• The addition of a third electrode, or gate, near the junction constitutes a Josephson transistor.
• A SQUID, superconducting quantum interference device, is a highly sensitive detector of magnetic fields. It counts quantum units of a magnetic field within a superconducting ring.
• RSFQ, Rapid single flux quantum is a high-speed switching device based on switching the magnetic quanta existing within a superconducting loop.
• High-temperature superconductors, Tc above liquid nitrogen boiling point, may also be used to build the superconducting devices in this section.
At dimensions under 20 to 30 nm, improved photolithography will have to be applied to devices other than conventional transistors. The objectionable MOS leakage currents are due to quantum mechanical effects–electron tunneling through the gate oxide, and the narrow channel. In summary, quantum mechanical effects are a hindrance to ever smaller conventional MOS transistors. The path to ever smaller geometry devices involves unique active devices which make practical use of quantum mechanical principles. As physical geometry becomes very small, electrons may be treated as the quantum mechanical equivalent: a wave. Devices making use of quantum mechanical principles include resonant tunneling diodes, quantum tunneling transistors, metal insulator metal diodes, and quantum dot transistors.
Quantum tunneling: the passing of electrons through an insulating barrier which is thin compared to the de Broglie electron wavelength. If the “electron wave” is large compared to the barrier, there is a possibility that the wave appears on both sides of the barrier.
Classical view of an electron surmounting a barrier, or not. Quantum mechanical view allows an electron to tunnel through a barrier. The probability (green) is related to the barrier thickness. After Figure 1
In classical physics, an electron must have sufficient energy to surmount a barrier. Otherwise, it recoils from the barrier. (Figure above) Quantum mechanics allows for a probability of the electron being on the other side of the barrier. If treated as a wave, the electron may look quite large compared to the thickness of the barrier. Even when treated as a wave, there is only a small probability that it will be found on the other side of a thick barrier. See green portion of curve, Figure above. Thinning the barrier increases the probability that the electron is found on the other side of the barrier.
Tunnel diode: The unqualified term tunnel diode refers to the Esaki tunnel diode, an early quantum device. A reverse biased diode forms a depletion region, an insulating region, between the conductive anode and cathode. This depletion region is only thin as compared to the electron wavelength when heavily doped– 1000 times the doping of a rectifier diode. With proper biasing, quantum tunneling is possible. See CH 3 for details.
RTD, resonant tunneling diode: This is a quantum device not to be confused with the Esaki tunnel diode (CH 3), a conventional heavily doped bipolar semiconductor. Electrons tunnel through two barriers separated by a well in flowing from source to drain in a resonant tunneling diode. Tunneling is also known as quantum mechanical tunneling. The flow of electrons is controlled by diode bias. This matches the energy levels of the electrons in the source to the quantized level in the well so that electrons can tunnel through the barriers. The energy level in the well is quantized because the well is small. When the energy levels are equal, a resonance occurs, allowing electron flow through the barriers as shown in Figure below (b). No bias or too much bias, in Figures below (a) and (c) respectively, yields an energy mismatch between the source and the well, and no conduction.
Resonant tunneling diode (RTD): (a) No bias, source and well energy levels not matched, no conduction. (b) Small bias causes matched energy levels (resonance); conduction results. (c) Further bias mismatches energy levels, decreasing conduction.
As bias is increased from zero across the RTD, the current increases and then decreases, corresponding to off, on, and off states. This makes simplification of conventional transistor circuits possible by substituting a pair of RTD’s for two transistors. For example, two back-to-back RTD’s and a transistor form a memory cell, using fewer components, less area and power compared with a conventional circuit. The potential application of RTD’s is to reduce the component count, area, and power dissipation of conventional transistor circuits by replacing some, though not all, transistors. RTD’s have been shown to oscillate up to 712 GHz.
Double-layer tunneling transistor: The Deltt, otherwise known as the Double-layer tunneling transistor, is constructed of a pair of conductive wells separated by an insulator or high band gap semiconductor. (Figure below) The wells are so thin that electrons are confined to two dimensions. These are known as quantum wells. A pair of these quantum wells are insulated by a thin GaAlAs, high band gap (does not easily conduct) layer. Electrons can tunnel through the insulating layer if the electrons in the two quantum wells have the same momentum and energy. The wells are so thin that the electron may be treated as a wave– the quantum mechanical duality of particles and waves. The top and optional bottom control gates may be adjusted to equalize the energy levels (resonance) of the electrons to allow conduction from source to drain. In the barrier diagram of Figure below, the red bars show unequal energy levels in the wells, an “off-state” condition. Proper biasing of the gates equalizes the energy levels of electrons in the wells, the “on-state” condition. The bars would be at the same level in the energy level diagram.
Double-layer tunneling transistor (Deltt) is composed of two electron containing wells separated by a nonconducting barrier. The gate voltages may be adjusted so that the energy and momentum of the electrons in the wells are equal which permits electrons to tunnel through the nonconductive barrier. (The energy levels are shown as unequal in the barrier diagram.)
If gate bias is increased beyond that required for tunneling, the energy levels in the quantum wells no longer match, tunneling is inhibited, source to drain current decreases. To summarize, increasing gate bias from zero results in on, off, on conditions. This allows a pair of Deltt’s to be stacked in the manner of a CMOS complementary pair; though, different p- and n-type transistors are not required. Power supply voltage is about 100 mV. Experimental Deltt’s have been produced which operate near 4.2 K, 77 K, and 0°C. Room temperature versions are expected.
MIIM diode: The metal-insulator-insulator-metal (MIIM) diode is a quantum tunneling device, not based on semiconductors. See “MIIM diode section” Figure below. The insulator layers must be thin compared to the de Broglie electron wavelength, for quantum tunneling to be possible. For diode action, there must be a preferred tunneling direction, resulting in a sharp bend in the diode forward characteristic curve. The MIIM diode has a sharper forward curve than the metal insulator metal (MIM) diode, not considered here.
Metal insulator insulator metal (MIIM) diode: Cross section of diode. Energy levels for no bias, forward bias, and reverse bias. After Figure 1
The energy levels of M1 and M2 are equal in “no bias” Figure above. However, (thermal) electrons cannot flow due to the high I1 and I2 barriers. Electrons in metal M2 have a higher energy level in “reverse bias” Figure above, but still cannot overcome the insulator barrier. As “forward bias” Figure above is increased, a quantum well, an area where electrons may exist, is formed between the insulators. Electrons may pass through insulator I1 if M1 is biased at the same energy level as the quantum well. A simple explanation is that the distance through the insulators is shorter. A longer explanation is that as bias increases, the probability of the electron wave overlapping from M1 to the quantum well increases. For a more detailed explanation see Phiar Corp.
MIIM devices operate at higher frequencies (3.7 THz) than microwave transistors. The addition of a third electrode to a MIIM diode produces a transistor.
Quantum dot transistor: An isolated conductor may take on a charge, measured in coulombs for large objects. For a nano-scale isolated conductor known as a quantum dot, the charge is measured in electrons. A quantum dot of 1- to 3-nm may take on an incremental charge of a single electron. This is the basis of the quantum dot transistor, also known as a single electron transistor.
A quantum dot placed atop a thin insulator over an electron rich source is known as a single electron box. (Figure below (a)) The energy required to transfer an electron is related to the size of the dot and the number of electrons already on the dot.
A gate electrode above the quantum dot can adjust the energy level of the dot so that quantum mechanical tunneling of an electron (as a wave) from the source through the insulator is possible. (Figure below (b)) Thus, a single electron may tunnel to the dot.
(a) Single electron box, an isolated quantum dot separated from an electron source by an insulator. (b) Positive charge on the gate polarizes quantum dot, tunneling an electron from the source to the dot. (c) Quantum transistor: channel is replaced by quantum dot surrounded by tunneling barrier.
If the quantum dot is surrounded by a tunnel barrier and embedded between the source and drain of a conventional FET, as in Figure above (c) , the charge on the dot can modulate the flow of electrons from source to drain. As gate voltage increases, the source to drain current increases, up to a point. A further increase in gate voltage decreases drain current. This is similar to the behavior of the RTD and Deltt resonant devices. Only one kind of transistor is required to build a complementary logic gate.
Single electron transistor: If a pair of conductors, superconductors, or semiconductors are separated by a pair of tunnel barriers (insulators), surrounding a tiny conductive island, like a quantum dot, the flow of a single charge (a Cooper pair for superconductors) may be controlled by a gate. This is a single electron transistor similar to Figure above (c). Increasing the positive charge on the gate allows an electron to tunnel to the island. If the island is sufficiently small, the low capacitance will cause the dot potential to rise substantially due to the single electron. No more electrons can tunnel to the island due to the electron charge. This is known as the Coulomb blockade. The electron which tunneled to the island can then tunnel to the drain.
Single electron transistors operate at near absolute zero. The exception is the graphene single electron transistor, having a graphene island. They are all experimental devices.
Graphene transistor: Graphite, an allotrope of carbon, does not have the rigid interlocking crystalline structure of diamond. Nonetheless, it has a crystalline structure– one atom thick, a so called two-dimensional structure. Graphite itself is a three-dimensional crystal. However, it cleaves into thin sheets. Experimenters, taking this to the extreme, produce micron sized specks as thin as a single atom, known as graphene. (Figure below (a)) These membranes have unique electronic properties. They are highly conductive, and conduction is by either electrons or holes, without doping of any kind.
Graphene sheets may be cut into transistor structures by lithographic techniques. The transistors bear some resemblance to a MOSFET. A gate capacitively coupled to a graphene channel controls conduction.
As silicon transistors scale to smaller sizes, leakage increases along with power dissipation. And they get smaller every couple of years. Graphene transistors dissipate little power. And, they switch at high speed. Graphene might be a replacement for silicon someday.
Graphene can be fashioned into devices as small as sixty atoms wide. Graphene quantum dots within a transistor this small serve as single electron transistors. Previous single electron transistors fashioned from either superconductors or conventional semiconductors operate near absolute zero. Graphene single electron transistors uniquely function at room temperature.
Graphene transistors are laboratory curiosities at this time. If they are to go into production two decades from now, graphene wafers must be produced. The first step, production of graphene by chemical vapor deposition (CVD) has been accomplished on an experimental scale. Though, no wafers are available to date.
(a) Graphene: A single sheet of the graphite allotrope of carbon. The atoms are arranged in a hexagonal pattern with a carbon at each intersection. (b) Carbon nanotube: A rolled-up sheet of graphene.
Carbon nanotube transistor: If a 2-D sheet of graphene is rolled, the resulting 1-D structure is known as a carbon nanotube. (Figure above (b)) The reason to treat it as 1-dimensional is that it is highly conductive. Electrons traverse the carbon nanotube without being scattered by a crystal lattice. Resistance in normal metals is caused by scattering of electrons by the metallic crystalline lattice. If electrons avoid this scattering, conduction is said to be by ballistic transport. Both metallic (acting) and semiconducting carbon nanotubes have been produced.
Field effect transistors may be fashioned from a carbon nanotube by depositing source and drain contacts on the ends, and capacitively coupling a gate to the nanotube between the contacts. Both p- and n-type transistors have been fabricated. Why the interest in carbon nanotube transistors? Nanotube semiconductors are smaller, faster, and lower power compared with silicon transistors.
Spintronics: Conventional semiconductors control the flow of electron charge, current. Digital states are represented by “on” or “off” flow of current. As semiconductors become more dense with the move to smaller geometry, the power that must be dissipated as heat increases to the point that it is difficult to remove. Electrons have properties other than charge, such as spin. A tentative explanation of electron spin is the rotation of distributed electron charge about the spin axis, analogous to diurnal rotation of the Earth. The loops of current created by charge movement form a magnetic field. However, the electron is more like a point charge than a distributed charge. Thus, the rotating distributed charge analogy is not a correct explanation of spin. Electron spin may have one of two states: up or down, which may represent digital states. More precisely, the spin quantum number (ms) may take the value ±1/2, in analogy to the angular momentum (l) quantum number.
Controlling electron spin instead of charge flow considerably reduces power dissipation and increases switching speed. Spintronics, an acronym for SPIN TRansport electrONICS, is not widely applied because of the difficulty of generating, controlling, and sensing electron spin. However, high density, non-volatile magnetic spin memory is in production using modified semiconductor processes. This is related to the spin valve magnetic read head used in computer harddisk drives, not mentioned further here.
A simple magnetic tunnel junction (MTJ) is shown in Figure below (a), consisting of a pair of ferromagnetic layers (materials with strong magnetic properties, like iron (Fe)) separated by a thin insulator. Electrons can tunnel through a sufficiently thin insulator due to the quantum mechanical properties of electrons– the wave nature of electrons. The current flow through the MTJ is a function of the magnetization, spin polarity, of the ferromagnetic layers. The resistance of the MTJ is low if the magnetic spin of the top layer is in the same direction (polarity) as the bottom layer. If the magnetic spins of the two layers oppose, the resistance is higher.
(a) Magnetic tunnel junction (MTJ): Pair of ferromagnetic layers separated by a thin insulator. The resistance varies with the magnetization polarity of the top layer (b) Antiferromagnetic bias magnet and pinned bottom ferromagnetic layer increases resistance sensitivity to changes in polarity of the top ferromagnetic layer. Adapted from Figure 3.
The change in resistance can be enhanced by the addition of an antiferromagnet, material having spins aligned but opposing, below the bottom layer in Figure above (b). This bias magnet pins the lower ferromagnetic layer spin to a single unchanging polarity. The top layer magnetization (spin) may be flipped to represent data by the application of an external magnetic field not shown in the figure. The pinned layer is not affected by external magnetic fields. Again, the MTJ resistance is lowest when the spin of the top ferromagnetic layer is the same sense as the bottom pinned ferromagnetic layer.
The MTJ may be improved further by splitting the pinned ferromagnetic layer into two layers separated by a buffer layer in Figure below (a). This isolates the top layer. The bottom ferromagnetic layer is pinned by the antiferromagnet as in the previous figure. The ferromagnetic layer atop the buffer is attracted by the bottom ferromagnetic layer. Opposites attract. Thus, the spin polarity of the additional layer is opposite of that in the bottom layer due to attraction. The bottom and middle ferromagnetic layers remain fixed. The top ferromagnetic layer may be set to either spin polarity by high currents in proximate conductors (not shown). This is how data are stored. Data are read out by the difference in current flow through the tunnel junction. Resistance is lowest if the layers on both sides of the insulating layer are of the same spin.
(a) Splitting the pinned ferromagnetic layer of (b) by a buffer layer improves stability and isolates the top ferromagnetic unpinned layer. Data are stored in the top ferromagnetic layer based on spin polarity. (b) MTJ cell embedded in read lines of a semiconductor die– one of many MTJ’s. Adapted from [IBM]
An array of magnetic tunnel junctions may be embedded in a silicon wafer with conductors connecting the top and bottom terminals for reading data bits from the MTJ’s with conventional CMOS circuitry. One such MTJ is shown in Figure above (b) with the read conductors. Not shown, another crossed array of conductors carrying heavy write currents switch the magnetic spin of the top ferromagnetic layer to store data. A current is applied to one of many “X” conductors and a “Y” conductor. One MTJ in the array is magnetized under the conductors’ crossover. Data are read out by sensing the MTJ current with conventional silicon semiconductor circuitry. [IBM]
The main reason for interest in magnetic tunnel junction memory is that it is nonvolatile. It does not lose data when powered “off”. Other types of nonvolatile memory are capable of only limited storage cycles. MTJ memory is also higher speed than most semiconductor memory types. It is now (2006) a commercial product. [TLE]
Not a commercial product, or even a laboratory device, is the theoretical spin transistor which might one day make spin logic gates possible. The spin transistor is a derivative of the theoretical spin diode.
It has been known for some time that electrons flowing through a cobalt-iron ferromagnet become spin polarized. The ferromagnet acts as a filter passing electrons of one spin preferentially. These electrons may flow into an adjacent nonmagnetic conductor (or semiconductor) retaining the spin polarization for a short time, nano-seconds. Though, spin polarized electrons may propagate a considerable distance compared with semiconductor dimensions. The spin polarized electrons may be detected by a nickel-iron ferromagnetic layer adjacent to the semiconductor.
It has also been shown that electron spin polarization occurs when circularly polarized light illuminates some semiconductor materials. Thus, it should be possible to inject spin polarized electrons into a semiconductor diode or transistor. The interest in spin based transistors and gates is because of the non-dissipative nature of spin propagation, compared with dissipative charge flow. As conventional semiconductors are scaled down in size, power dissipation increases. At some point the scaling down will no longer be practical. Researchers are looking for a replacement for the conventional charge flow based transistor. That device may be based on spintronics. [RCJ]
Review
• As MOS gate oxide thins with each generation of smaller transistors, excessive gate leakage causes unacceptable power dissipation and heating. The limit of scaling down conventional semiconductor geometry is within sight.
• Resonant tunneling diode (RTD): Quantum mechanical effects, which degrade conventional semiconductors, are employed in the RTD. The flow of electrons through a sufficiently thin insulator, is by the wave nature of the electron– particle wave duality. The RTD functions as an amplifier.
• Double layer tunneling transistor (Deltt): The Deltt is a transistor version of the RTD. Gate bias controls the ability of electrons to tunnel through a thin insulator from one quantum well to another (source to drain).
• Quantum dot transistor: A quantum dot, capable of holding a charge, is surrounded by a thin tunnel barrier replacing the gate of a conventional FET. The charge on the quantum dot controls source to drain current flow.
• Spintronics: Electrons have two basic properties: charge and spin. Conventional electronic devices control the flow of charge, dissipating energy. Spintronic devices manipulate electron spin, a propagative, non-dissipative process.
Diode: The diode statement begins with a diode element name which must begin with “d” plus optional characters. Some example diode element names include: d1, d2, dtest, da, db, d101, etc. Two node numbers specify the connection of the anode and cathode, respectively, to other components. The node numbers are followed by a model name, referring to a “.model” statement.
The model statement line begins with “.model”, followed by the model name matching one or more diode statements. Next is a “d” indicating that a diode is being modeled. The remainder of the model statement is a list of optional diode parameters of the form ParameterName=ParameterValue. None are shown in the example below. For a list, see reference, “diodes”. [TRK]
Models for specific diode part numbers are often furnished by the semiconductor diode manufacturer. These models include parameters. Otherwise, the parameters default to so called “default values”, as in the example.
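A minimal sketch of a diode element statement and its matching model statement (node numbers and the model name mod1 are assumed here for illustration) might read:

d1 1 2 mod1
.model mod1 d

Since no parameters are listed after the “d”, this model runs entirely on default values.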
BJT, bipolar junction transistor: The BJT element statement begins with an element name which must begin with “q” with associated circuit symbol designator characters, example: q1, q2, qa, qgood. The BJT node numbers (connections) identify the wiring of the collector, base, emitter respectively. A model name following the node numbers is associated with a model statement.
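A sketch of a matching pair of statements for the hypothetical q2n090 model (node numbers assumed for illustration: collector, base, and emitter on nodes 2, 3, and 0) might be:

q1 2 3 0 q2n090
.model q2n090 npn (bf=75)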
The model statement begins with “.model”, followed by the model name, followed by one of “npn” or “pnp”. The optional list of parameters follows, and may continue for a few lines beginning with line continuation symbol “+”, plus. Shown above is the forward β parameter set to 75 for the hypothetical q2n090 model. Detailed transistor models are often available from semiconductor manufacturers.
FET, field effect transistor: The field effect transistor element statement begins with an element name beginning with “j” for JFET associated with some unique characters, example: j101, j2b, jalpha, etc. The node numbers follow for the drain, gate and source terminals, respectively. The node numbers define connectivity to other circuit components. Finally, a model name indicates the JFET model to use.
The “.model” in the JFET model statement is followed by the model name to identify this model to the JFET element statement(s) using it. Following the model name is either pjf or njf for p-channel or n-channel JFET’s respectively. A long list of JFET parameters may follow. We only show how to set Vp, pinch off voltage, to -4.0 V for an n-channel JFET model. Otherwise, this vto parameter defaults to -2.5 V or 2.5V for n-channel or p-channel devices, respectively.
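A sketch of an n-channel JFET element and model statement pair setting the pinch-off voltage (node numbers and the model name mod2 are assumed here for illustration) might read:

j1 2 1 0 mod2
.model mod2 njf (vto=-4.0)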
MOSFET, metal oxide field effect transistor: The MOSFET element name must begin with “m”, and is the first word in the element statement. Following are the four node numbers for the drain, gate, source, and substrate, respectively. Next is the model name. Note that the source and substrate are both connected to the same node “0” in the example. Discrete MOSFET’s are packaged as three terminal devices; the source and substrate are the same physical terminal. Integrated MOSFET’s are four terminal devices; the substrate is a fourth terminal. Integrated MOSFET’s may have numerous devices sharing the same substrate, separate from the sources. Though, the sources might still be connected to the common substrate.
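A minimal sketch (node numbers and the model name mod3 are assumed here for illustration), with the source and substrate tied together at node 0 as described above:

m1 2 1 0 0 mod3
.model mod3 nmos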
The MOSFET model statement begins with “.model” followed by the model name followed by either “pmos” or “nmos”. Optional MOSFET model parameters follow. The list of possible parameters is long. See Volume 5, “MOSFET” for details. [TRK] MOSFET manufacturers provide detailed models. Otherwise, defaults are in effect.
The bare minimum semiconductor SPICE information is provided in this section. The models shown here allow simulation of basic circuits. In particular, these models do not account for high speed or high-frequency operation. Simulations are shown in the Volume 5 Chapter 7, “Using SPICE ...”.
Review
• Semiconductors may be computer simulated with SPICE.
• SPICE provides element statements and models for the diode, BJT, JFET, and MOSFET.
• 3.1: Introduction to Diodes And Rectifiers
• 3.2: Meter Check of a Diode
• 3.3: Diode Ratings
In addition to forward voltage drop (Vf) and peak inverse voltage (PIV), there are many other ratings of diodes important to circuit design and component selection. Semiconductor manufacturers provide detailed specifications on their products—diodes included—in publications known as datasheets. Datasheets for a wide variety of semiconductor components may be found in reference books and on the internet. I prefer the internet as a source of component specifications because all the data obtained from the internet are up-to-date.
• 3.4: Rectifier Circuits
• 3.5: Peak Detector
• 3.6: Clipper Circuits
• 3.7: Clamper Circuits
The circuits in the figure below are known as clampers or DC restorers. The corresponding netlist is also in the figure below. These circuits clamp a peak of a waveform to a specific DC level compared with a capacitively coupled signal which swings about its average DC level (usually 0V). If the diode is removed from the clamper, it defaults to a simple coupling capacitor– no clamping.
• 3.8: Voltage Multipliers (Doublers, Triplers, Quadruplers, and More)
A voltage multiplier is a specialized rectifier circuit producing an output which is theoretically an integer times the AC peak input, for example, 2, 3, or 4 times the AC peak input. Thus, it is possible to get 200 VDC from a 100 Vpeak AC source using a doubler, 400 VDC from a quadrupler. Any load in a practical circuit will lower these voltages.
• 3.9: Inductor Commutating Circuits
• 3.10: Diode Switching Circuits
Diodes can perform switching and digital logic operations. Forward and reverse bias switch a diode between the low and high impedance states, respectively. Thus, it serves as a switch.
• 3.11: What Are Zener Diodes?
A Zener diode is a special type of rectifying diode that can handle breakdown due to reverse breakdown voltage without failing completely. Here we will discuss the concept of using diodes to regulate voltage drop and how the Zener diode operates in reverse-bias mode to regulate voltage in a circuit.
• 3.12: Special-purpose Diodes
• 3.13: Other Diode Technologies
• 3.14: SPICE Models
03: Diodes and Rectifiers
All About Diodes
A diode is an electrical device allowing current to move through it in one direction with far greater ease than in the other. The most common kind of diode in modern circuit design is the semiconductor diode, although other diode technologies exist. Semiconductor diodes are symbolized in schematic diagrams such as Figure below. The term “diode” is customarily reserved for small signal devices, I ≤ 1 A. The term rectifier is used for power devices, I > 1 A.
Semiconductor diode schematic symbol: Arrows indicate the direction of electron current flow.
When placed in a simple battery-lamp circuit, the diode will either allow or prevent current through the lamp, depending on the polarity of the applied voltage. (Figure below)
Diode operation: (a) Current flow is permitted; the diode is forward biased. (b) Current flow is prohibited; the diode is reversed biased.
When the polarity of the battery is such that electrons are allowed to flow through the diode, the diode is said to be forward-biased. Conversely, when the battery is “backward” and the diode blocks current, the diode is said to be reverse-biased. A diode may be thought of as like a switch: “closed” when forward-biased and “open” when reverse-biased.
Oddly enough, the direction of the diode symbol’s “arrowhead” points against the direction of electron flow. This is because the diode symbol was invented by engineers, who predominantly use conventional flow notation in their schematics, showing current as a flow of charge from the positive (+) side of the voltage source to the negative (-). This convention holds true for all semiconductor symbols possessing “arrowheads:” the arrow points in the permitted direction of conventional flow, and against the permitted direction of electron flow.
Hydraulic Check Valve Analogy
Diode behavior is analogous to the behavior of a hydraulic device called a check valve. A check valve allows fluid flow through it in only one direction as in Figure below.
Hydraulic check valve analogy: (a) Electron current flow permitted. (b) Current flow prohibited.
Check valves are essentially pressure-operated devices: they open and allow flow if the pressure across them is of the correct “polarity” to open the gate (in the analogy shown, greater fluid pressure on the right than on the left). If the pressure is of the opposite “polarity,” the pressure difference across the check valve will close and hold the gate so that no flow occurs.
Like check valves, diodes are essentially “pressure-” operated (voltage-operated) devices. The essential difference between forward-bias and reverse-bias is the polarity of the voltage dropped across the diode. Let’s take a closer look at the simple battery-diode-lamp circuit shown earlier, this time investigating voltage drops across the various components in Figure below.
Diode circuit voltage measurements: (a) Forward biased. (b) Reverse biased.
A forward-biased diode conducts current and drops a small voltage across it, leaving most of the battery voltage dropped across the lamp. If the battery’s polarity is reversed, the diode becomes reverse-biased, and drops all of the battery’s voltage leaving none for the lamp. If we consider the diode to be a self-actuating switch (closed in the forward-bias mode and open in the reverse-bias mode), this behavior makes sense. The most substantial difference is that the diode drops a lot more voltage when conducting than the average mechanical switch (0.7 volts versus tens of millivolts).
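As a quick worked example, assuming a 6 volt battery (a value chosen only for illustration) and the nominal 0.7 volt silicon diode drop:

\[V_{lamp} = V_{battery} - V_{diode} = 6\ \text{V} - 0.7\ \text{V} = 5.3\ \text{V}\]

With the battery reversed, the diode drops the full 6 volts and the lamp receives none.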
This forward-bias voltage drop exhibited by the diode is due to the action of the depletion region formed by the P-N junction under the influence of an applied voltage. If no voltage is applied across a semiconductor diode, a thin depletion region exists around the region of the P-N junction, preventing current flow. (Figure below (a)) The depletion region is almost devoid of available charge carriers, and acts as an insulator:
Diode representations: PN-junction model, schematic symbol, physical part.
The schematic symbol of the diode is shown in Figure above (b) such that the anode (pointing end) corresponds to the P-type semiconductor at (a). The cathode bar, non-pointing end, at (b) corresponds to the N-type material at (a). Also note that the cathode stripe on the physical part (c) corresponds to the cathode on the symbol.
If a reverse-biasing voltage is applied across the P-N junction, this depletion region expands, further resisting any current through it. (Figure below)
Depletion region expands with reverse bias.
Conversely, if a forward-biasing voltage is applied across the P-N junction, the depletion region collapses, becoming thinner. The diode becomes less resistive to current through it. In order for a sustained current to go through the diode, though, the depletion region must be fully collapsed by the applied voltage. This takes a certain minimum voltage to accomplish, called the forward voltage, as illustrated in Figure below.
Increasing forward bias from (a) to (b) decreases depletion region thickness.
For silicon diodes, the typical forward voltage is 0.7 volts, nominal. For germanium diodes, the forward voltage is only 0.3 volts. The chemical constituency of the P-N junction comprising the diode accounts for its nominal forward voltage figure, which is why silicon and germanium diodes have such different forward voltages. Forward voltage drop remains approximately constant for a wide range of diode currents, meaning that diode voltage drop is not like that of a resistor or even a normal (closed) switch. For most simplified circuit analysis, the voltage drop across a conducting diode may be considered constant at the nominal figure and not related to the amount of current.
Diode Equation
Actually, forward voltage drop is more complex. An equation describes the exact current through a diode, given the voltage dropped across the junction, the temperature of the junction, and several physical constants. It is commonly known as the diode equation:
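The equation figure itself is not reproduced in this text version; in its standard form the diode equation reads:

ID = IS·(e^(qVD/NkT) − 1)

where ID is the diode current in amps, IS is the saturation (or “scale”) current, e is Euler’s constant (about 2.718), q is the charge of an electron (about 1.6×10^-19 coulombs), VD is the voltage across the diode, N is the “nonideality” or emission coefficient (typically between 1 and 2), k is Boltzmann’s constant (about 1.38×10^-23), and T is the junction temperature in kelvins.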
The term kT/q describes the voltage produced within the P-N junction due to the action of temperature, and is called the thermal voltage, or Vt of the junction. At room temperature, this is about 26 millivolts. Knowing this, and assuming a “nonideality” coefficient of 1, we may simplify the diode equation and re-write it as such:
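The simplified form, again reconstructed here since the original equation figure is not shown, is approximately:

ID ≈ IS·(e^(VD/0.026) − 1)

with VD expressed in volts and 0.026 V being the room-temperature thermal voltage.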
You need not be familiar with the “diode equation” to analyze simple diode circuits. Just understand that the voltage dropped across a current-conducting diode does change with the amount of current going through it, but that this change is fairly small over a wide range of currents. This is why many textbooks simply say the voltage drop across a conducting, semiconductor diode remains constant at 0.7 volts for silicon and 0.3 volts for germanium. However, some circuits intentionally make use of the P-N junction’s inherent exponential current/voltage relationship and thus can only be understood in the context of this equation. Also, since temperature is a factor in the diode equation, a forward-biased P-N junction may also be used as a temperature-sensing device, and thus can only be understood if one has a conceptual grasp on this mathematical relationship.
A reverse-biased diode prevents current from going through it, due to the expanded depletion region. In actuality, a very small amount of current can and does go through a reverse-biased diode, called the leakage current, but it can be ignored for most purposes. The ability of a diode to withstand reverse-bias voltages is limited, as it is for any insulator. If the applied reverse-bias voltage becomes too great, the diode will experience a condition known as breakdown (Figure below), which is usually destructive. A diode’s maximum reverse-bias voltage rating is known as the Peak Inverse Voltage, or PIV, and may be obtained from the manufacturer. Like forward voltage, the PIV rating of a diode varies with temperature, except that PIV increases with increased temperature and decreases as the diode becomes cooler—exactly opposite that of forward voltage.
Diode curve: showing knee at 0.7 V forward bias for Si, and reverse breakdown.
Typically, the PIV rating of a generic “rectifier” diode is at least 50 volts at room temperature. Diodes with PIV ratings in the many thousands of volts are available for modest prices.
Review
• A diode is an electrical component acting as a one-way valve for current.
• When voltage is applied across a diode in such a way that the diode allows current, the diode is said to be forward-biased.
• When voltage is applied across a diode in such a way that the diode prohibits current, the diode is said to be reverse-biased.
• The voltage dropped across a conducting, forward-biased diode is called the forward voltage. Forward voltage for a diode varies only slightly for changes in forward current and temperature, and is fixed by the chemical composition of the P-N junction.
• Silicon diodes have a forward voltage of approximately 0.7 volts.
• Germanium diodes have a forward voltage of approximately 0.3 volts.
• The maximum reverse-bias voltage that a diode can withstand without “breaking down” is called the Peak Inverse Voltage, or PIV rating.
The Functionality of Diode Polarity
Being able to determine the polarity (cathode versus anode) and basic functionality of a diode is a very important skill for the electronics hobbyist or technician to have. Since we know that a diode is essentially nothing more than a one-way valve for electricity, it makes sense we should be able to verify its one-way nature using a DC (battery-powered) ohmmeter as in Figure below. Connected one way across the diode, the meter should show a very low resistance at (a). Connected the other way across the diode, it should show a very high resistance at (b) (“OL” on some digital meter models).
Determination of diode polarity: (a) Low resistance indicates forward bias, black lead is cathode and red lead anode (for most meters) (b) Reversing leads shows high resistance indicating reverse bias.
How to Determine Diode Polarity
Of course, to determine which end of the diode is the cathode and which is the anode, you must know with certainty which test lead of the meter is positive (+) and which is negative (-) when set to the “resistance” or “Ω” function. With most digital multimeters I’ve seen, the red lead becomes positive and the black lead negative when set to measure resistance, in accordance with standard electronics color-code convention. However, this is not guaranteed for all meters. Many analog multimeters, for example, actually make their black leads positive (+) and their red leads negative (-) when switched to the “resistance” function because it is easier to manufacture it that way!
One problem with using an ohmmeter to check a diode is that the readings obtained only have qualitative value, not quantitative. In other words, an ohmmeter only tells you which way the diode conducts; the low-value resistance indication obtained while conducting is useless. If an ohmmeter shows a value of “1.73 ohms” while forward-biasing a diode, that figure of 1.73 Ω doesn’t represent any real-world quantity useful to us as technicians or circuit designers. It neither represents the forward voltage drop nor any “bulk” resistance in the semiconductor material of the diode itself, but rather is a figure dependent upon both quantities and will vary substantially with the particular ohmmeter used to take the reading.
For this reason, some digital multimeter manufacturers equip their meters with a special “diode check” function which displays the actual forward voltage drop of the diode in volts, rather than a “resistance” figure in ohms. These meters work by forcing a small current through the diode and measuring the voltage dropped between the two test leads. (Figure below)
Meter with a “Diode check” function displays the forward voltage drop of 0.548 volts instead of a low resistance.
The forward voltage reading obtained with such a meter will typically be less than the “normal” drop of 0.7 volts for silicon and 0.3 volts for germanium because the current provided by the meter is of trivial proportions. If a multimeter with diode-check function isn’t available, or you would like to measure a diode’s forward voltage drop at some non-trivial current, the circuit of Figure below may be constructed using a battery, resistor, and voltmeter.
Measuring forward voltage of a diode without “diode check” meter function: (a) Schematic diagram. (b) Pictorial diagram.
Connecting the diode backwards to this testing circuit will simply result in the voltmeter indicating the full voltage of the battery.
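A minimal SPICE sketch of this test circuit follows; the 6 V battery and 1 kΩ resistor are assumed values chosen only to set a forward current of a few milliamps.

* Diode forward-voltage test (sketch; values assumed)
* Forward connection: V(2) reads roughly 0.6 to 0.7 V for a silicon diode.
* Reversing the diode (D1 0 2 mod1) makes V(2) read nearly the full battery voltage.
V1 1 0 DC 6
R1 1 2 1k
D1 2 0 mod1
.model mod1 d
.op
.end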
If this circuit were designed to provide a constant or nearly constant current through the diode despite changes in forward voltage drop, it could be used as the basis of a temperature-measurement instrument, since the voltage measured across the diode varies inversely with diode junction temperature. Of course, diode current should be kept to a minimum to avoid self-heating (the diode dissipating substantial amounts of heat energy), which would interfere with temperature measurement.
Beware that some digital multimeters equipped with a “diode check” function may output a very low test voltage (less than 0.3 volts) when set to the regular “resistance” (Ω) function: too low to fully collapse the depletion region of a PN junction. The philosophy here is that the “diode check” function is to be used for testing semiconductor devices, and the “resistance” function for anything else. By using a very low test voltage to measure resistance, it is easier for a technician to measure the resistance of non-semiconductor components connected to semiconductor components since the semiconductor component junctions will not become forward-biased with such low voltages.
Consider the example of a resistor and diode connected in parallel, soldered in place on a printed circuit board (PCB). Normally, one would have to unsolder the resistor from the circuit (disconnect it from all other components) before measuring its resistance, otherwise, any parallel-connected components would affect the reading obtained. When using a multimeter which outputs a very low test voltage to the probes in the “resistance” function mode, the diode’s PN junction will not have enough voltage impressed across it to become forward-biased, and will only pass negligible current. Consequently, the meter “sees” the diode as an open (no continuity), and only registers the resistor’s resistance. (Figure below)
Ohmmeter equipped with a low test voltage (<0.7 V) does not see diodes allowing it to measure parallel resistors.
If such an ohmmeter were used to test a diode, it would indicate a very high resistance (many mega-ohms) even if connected to the diode in the “correct” (forward-biased) direction. (Figure below)
Ohmmeter equipped with a low test voltage, too low to forward bias diodes, does not see diodes.
Reverse voltage strength of a diode is not as easily tested because exceeding a normal diode’s PIV usually results in destruction of the diode. Special types of diodes, though, which are designed to “break down” in reverse-bias mode without damage (called zener diodes), may be tested with the same voltage source / resistor / voltmeter circuit, provided that the voltage source is of high enough value to force the diode into its breakdown region. More on this subject in a later section of this chapter.
Review
• An ohmmeter may be used to qualitatively check diode function. There should be low resistance measured one way and very high resistance measured the other way. When using an ohmmeter for this purpose, be sure you know which test lead is positive and which is negative! The actual polarity may not follow the colors of the leads as you might expect, depending on the particular design of meter.
• Some multimeters provide a “diode check” function that displays the actual forward voltage of the diode while it is conducting current. Such meters typically indicate a slightly lower forward voltage than what is “nominal” for a diode, due to the very small amount of current used during the check.
A typical diode datasheet will contain figures for the following parameters:
Maximum repetitive reverse voltage = VRRM, the maximum amount of voltage the diode can withstand in reverse-bias mode, in repeated pulses. Ideally, this figure would be infinite.
Maximum DC reverse voltage = VR or VDC, the maximum amount of voltage the diode can withstand in reverse-bias mode on a continual basis. Ideally, this figure would be infinite.
Maximum forward voltage = VF, usually specified at the diode’s rated forward current. Ideally, this figure would be zero: the diode providing no opposition whatsoever to forward current. In reality, the forward voltage is described by the “diode equation.”
Maximum (average) forward current = IF(AV), the maximum average amount of current the diode is able to conduct in forward bias mode. This is fundamentally a thermal limitation: how much heat can the PN junction handle, given that dissipation power is equal to current (I) multiplied by voltage (V or E) and forward voltage is dependent upon both current and junction temperature. Ideally, this figure would be infinite.
Maximum (peak or surge) forward current = IFSM or if(surge), the maximum peak amount of current the diode is able to conduct in forward bias mode. Again, this rating is limited by the diode junction’s thermal capacity, and is usually much higher than the average current rating due to thermal inertia (the fact that it takes a finite amount of time for the diode to reach maximum temperature for a given current). Ideally, this figure would be infinite.
Maximum total dissipation = PD, the amount of power (in watts) allowable for the diode to dissipate, given the dissipation (P=IE) of diode current multiplied by diode voltage drop, and also the dissipation (P=I²R) of diode current squared multiplied by bulk resistance. Fundamentally limited by the diode’s thermal capacity (ability to tolerate high temperatures).
Operating junction temperature = TJ, the maximum allowable temperature for the diode’s PN junction, usually given in degrees Celsius (°C). Heat is the “Achilles’ heel” of semiconductor devices: they must be kept cool to function properly and give long service life.
Storage temperature range = TSTG, the range of allowable temperatures for storing a diode (unpowered). Sometimes given in conjunction with operating junction temperature (TJ), because the maximum storage temperature and the maximum operating temperature ratings are often identical. If anything, though, maximum storage temperature rating will be greater than the maximum operating temperature rating.
Thermal resistance = R(Θ), the temperature difference between junction and outside air (R(Θ)JA) or between junction and leads (R(Θ)JL) for a given power dissipation. Expressed in units of degrees Celsius per watt (°C/W). Ideally, this figure would be zero, meaning that the diode package was a perfect thermal conductor and radiator, able to transfer all heat energy from the junction to the outside air (or to the leads) with no difference in temperature across the thickness of the diode package. A high thermal resistance means that the diode will build up excessive temperature at the junction (where it’s critical) despite best efforts at cooling the outside of the diode, and thus will limit its maximum power dissipation.
Maximum reverse current = IR, the amount of current through the diode in reverse-bias operation, with the maximum rated inverse voltage applied (VDC). Sometimes referred to as leakage current. Ideally, this figure would be zero, as a perfect diode would block all current when reverse-biased. In reality, it is very small compared to the maximum forward current.
Typical junction capacitance = CJ, the typical amount of capacitance intrinsic to the junction, due to the depletion region acting as a dielectric separating the anode and cathode connections. This is usually a very small figure, measured in the range of picofarads (pF).
Reverse recovery time = trr, the amount of time it takes for a diode to “turn off” when the voltage across it alternates from forward-bias to reverse-bias polarity. Ideally, this figure would be zero: the diode halting conduction immediately upon polarity reversal. For a typical rectifier diode, reverse recovery time is in the range of tens of microseconds; for a “fast switching” diode, it may only be a few nanoseconds.
Most of these parameters vary with temperature or other operating conditions, and so a single figure fails to fully describe any given rating. Therefore, manufacturers provide graphs of component ratings plotted against other variables (such as temperature), so that the circuit designer has a better idea of what the device is capable of.
What is Rectification?
Now we come to the most popular application of the diode: rectification. Simply defined, rectification is the conversion of alternating current (AC) to direct current (DC). This involves a device that only allows one-way flow of electrons. As we have seen, this is exactly what a semiconductor diode does. The simplest kind of rectifier circuit is the half-wave rectifier. It only allows one half of an AC waveform to pass through to the load. (Figure below)
Half-wave rectifier circuit.
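A minimal SPICE sketch of the half-wave rectifier follows; the 10 V peak, 60 Hz source and 1 kΩ load are assumed values.

* Half-wave rectifier (sketch; values assumed)
* V(2) shows only the positive half-cycles, reduced by one diode drop.
V1 1 0 SIN(0 10 60)
D1 1 2 mod1
Rload 2 0 1k
.model mod1 d
.tran 0.1m 50m
.end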
Half-Wave Rectification
For most power applications, half-wave rectification is insufficient for the task. The harmonic content of the rectifier’s output waveform is very large and consequently difficult to filter. Furthermore, the AC power source only supplies power to the load during one half of every full cycle, meaning that half of its capacity is unused. Half-wave rectification is, however, a very simple way to reduce power to a resistive load. Some two-position lamp dimmer switches apply full AC power to the lamp filament for “full” brightness and then half-wave rectify it for a lesser light output. (Figure below)
Half-wave rectifier application: Two level lamp dimmer.
In the “Dim” switch position, the incandescent lamp receives approximately one-half the power it would normally receive operating on full-wave AC. Because the half-wave rectified power pulses far more rapidly than the filament has time to heat up and cool down, the lamp does not blink. Instead, its filament merely operates at a lesser temperature than normal, providing less light output. This principle of “pulsing” power rapidly to a slow-responding load device to control the electrical power sent to it is common in the world of industrial electronics. Since the controlling device (the diode, in this case) is either fully conducting or fully nonconducting at any given time, it dissipates little heat energy while controlling load power, making this method of power control very energy-efficient. This circuit is perhaps the crudest possible method of pulsing power to a load, but it suffices as a proof-of-concept application.
Full-Wave Rectifiers
If we need to rectify AC power to obtain the full use of both half-cycles of the sine wave, a different rectifier circuit configuration must be used. Such a circuit is called a full-wave rectifier. One kind of full-wave rectifier, called the center-tap design, uses a transformer with a center-tapped secondary winding and two diodes, as in Figure below.
Full-wave rectifier, center-tapped design.
This circuit’s operation is easily understood one half-cycle at a time. Consider the first half-cycle, when the source voltage polarity is positive (+) on top and negative (-) on bottom. At this time, only the top diode is conducting; the bottom diode is blocking current, and the load “sees” the first half of the sine wave, positive on top and negative on bottom. Only the top half of the transformer’s secondary winding carries current during this half-cycle as in Figure below.
Full-wave center-tap rectifier: Top half of secondary winding conducts during positive half-cycle of input, delivering positive half-cycle to load.
During the next half-cycle, the AC polarity reverses. Now, the other diode and the other half of the transformer’s secondary winding carry current while the portions of the circuit formerly carrying current during the last half-cycle sit idle. The load still “sees” half of a sine wave, of the same polarity as before: positive on top and negative on bottom. (Figure below)
Full-wave center-tap rectifier: During negative input half-cycle, bottom half of secondary winding conducts, delivering a positive half-cycle to the load.
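A SPICE sketch of this rectifier follows. For simplicity the center-tapped secondary is modeled as two 10 V sources of opposite phase rather than as an actual transformer; all values are assumed.

* Full-wave center-tap rectifier (sketch; secondary modeled as two opposed sources)
* V(3) shows two positive pulses per input cycle across the load.
V1 1 0 SIN(0 10 60)
V2 0 2 SIN(0 10 60)
D1 1 3 mod1
D2 2 3 mod1
Rload 3 0 1k
.model mod1 d
.tran 0.1m 50m
.end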
One disadvantage of this full-wave rectifier design is the necessity of a transformer with a center-tapped secondary winding. If the circuit in question is one of high power, the size and expense of a suitable transformer is significant. Consequently, the center-tap rectifier design is only seen in low-power applications.
The full-wave center-tapped rectifier polarity at the load may be reversed by changing the direction of the diodes. Furthermore, the reversed diodes can be paralleled with an existing positive-output rectifier. The result is a dual-polarity full-wave center-tapped rectifier, shown in Figure below. Note that the connectivity of the diodes themselves is the same configuration as a bridge.
Dual polarity full-wave center tap rectifier
Full-Wave Bridge Rectifiers
Another, more popular full-wave rectifier design exists, and it is built around a four-diode bridge configuration. For obvious reasons, this design is called a full-wave bridge. (Figure below)
Full-wave bridge rectifier.
Current directions for the full-wave bridge rectifier circuit are as shown in Figure below for positive half-cycles and Figure below for negative half-cycles of the AC source waveform. Note that regardless of the polarity of the input, the current flows in the same direction through the load. That is, the negative half-cycle of the source is a positive half-cycle at the load. The current flow is through two diodes in series for both polarities. Thus, two diode drops of the source voltage are lost (0.7·2=1.4 V for Si) in the diodes. This is a disadvantage compared with a full-wave center-tap design. This disadvantage is only a problem in very low voltage power supplies.
Full-wave bridge rectifier: Electron flow for positive half-cycles.
Full-wave bridge rectifier: Electron flow for negative half-cycles.
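A SPICE sketch of the bridge follows; the source amplitude and load are assumed values. Because neither load terminal is grounded in this sketch, the output is plotted differentially as V(3,4).

* Full-wave bridge rectifier (sketch; values assumed)
* Plot V(3,4): both input half-cycles appear as positive pulses, less two diode drops.
V1 1 0 SIN(0 10 60)
D1 1 3 mod1
D2 0 3 mod1
D3 4 1 mod1
D4 4 0 mod1
Rload 3 4 1k
.model mod1 d
.tran 0.1m 50m
.end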
Remembering the proper layout of diodes in a full-wave bridge rectifier circuit can often be frustrating to the new student of electronics. I’ve found that an alternative representation of this circuit is easier both to remember and to comprehend. It’s the exact same circuit, except all diodes are drawn in a horizontal attitude, all “pointing” the same direction. (Figure below)
Alternative layout style for Full-wave bridge rectifier.
One advantage of remembering this layout for a bridge rectifier circuit is that it expands easily into a polyphase version in Figure below.
Three-phase full-wave bridge rectifier circuit.
Each three-phase line connects between a pair of diodes: one to route power to the positive (+) side of the load, and the other to route power to the negative (-) side of the load. Polyphase systems with more than three phases are easily accommodated into a bridge rectifier scheme. Take for instance the six-phase bridge rectifier circuit in Figure below.
Six-phase full-wave bridge rectifier circuit.
When polyphase AC is rectified, the phase-shifted pulses overlap each other to produce a DC output that is much “smoother” (has less AC content) than that produced by the rectification of single-phase AC. This is a decided advantage in high-power rectifier circuits, where the sheer physical size of filtering components would be prohibitive but low-noise DC power must be obtained. The diagram in Figure below shows the full-wave rectification of three-phase AC.
Three-phase AC and 3-phase full-wave rectifier output.
Ripple Voltage
In any case of rectification—single-phase or polyphase—the amount of AC voltage mixed with the rectifier’s DC output is called ripple voltage. In most cases, since “pure” DC is the desired goal, ripple voltage is undesirable. If the power levels are not too great, filtering networks may be employed to reduce the amount of ripple in the output voltage.
1-Pulse, 2-Pulse, and 6-Pulse Units
Sometimes, the method of rectification is referred to by counting the number of DC “pulses” output for every 360° of electrical “rotation.” A single-phase, half-wave rectifier circuit, then, would be called a 1-pulse rectifier, because it produces a single pulse during the time of one complete cycle (360°) of the AC waveform. A single-phase, full-wave rectifier (regardless of design, center-tap or bridge) would be called a 2-pulse rectifier because it outputs two pulses of DC during one AC cycle’s worth of time. A three-phase full-wave rectifier would be called a 6-pulse unit.
Rectifier Circuit Phases
Modern electrical engineering convention further describes the function of a rectifier circuit by using a three-field notation of phases, ways, and number of pulses. A single-phase, half-wave rectifier circuit is given the somewhat cryptic designation of 1Ph1W1P (1 phase, 1 way, 1 pulse), meaning that the AC supply voltage is single-phase, that current on each phase of the AC supply lines moves in only one direction (way), and that there is a single pulse of DC produced for every 360° of electrical rotation. A single-phase, full-wave, center-tap rectifier circuit would be designated as 1Ph1W2P in this notational system: 1 phase, 1 way or direction of current in each winding half, and 2 pulses of output voltage per cycle. A single-phase, full-wave, bridge rectifier would be designated as 1Ph2W2P: the same as for the center-tap design, except that current can go both ways through the AC lines instead of just one way. The three-phase bridge rectifier circuit shown earlier would be called a 3Ph2W6P rectifier.
Is it Possible to Obtain More Pulses Than Twice the Number of Phases in a Rectifier Circuit?
The answer to this question is yes, especially in polyphase circuits. Through the creative use of transformers, sets of full-wave rectifiers may be paralleled in such a way that more than six pulses of DC are produced for three phases of AC. A 30° phase shift is introduced from primary to secondary of a three-phase transformer when the winding configurations are not of the same type. In other words, a transformer connected either Y-Δ or Δ-Y will exhibit this 30° phase shift, while a transformer connected Y-Y or Δ-Δ will not. This phenomenon may be exploited by having one transformer connected Y-Y feed a bridge rectifier, and have another transformer connected Y-Δ feed a second bridge rectifier, then parallel the DC outputs of both rectifiers. (Figure below) Since the ripple voltage waveforms of the two rectifiers’ outputs are phase-shifted 30° from one another, their superposition results in less ripple than either rectifier output considered separately: 12 pulses per 360° instead of just six:
Polyphase rectifier circuit: 3-phase 2-way 12-pulse (3Ph2W12P)
Review
• Rectification is the conversion of alternating current (AC) to direct current (DC).
• A half-wave rectifier is a circuit that allows only one half-cycle of the AC voltage waveform to be applied to the load, resulting in one non-alternating polarity across it. The resulting DC delivered to the load “pulsates” significantly.
• A full-wave rectifier is a circuit that converts both half-cycles of the AC voltage waveform to an unbroken series of voltage pulses of the same polarity. The resulting DC delivered to the load doesn’t “pulsate” as much.
• Polyphase alternating current, when rectified, gives a much “smoother” DC waveform (less ripple voltage) than rectified single-phase AC.
A peak detector is a series connection of a diode and a capacitor outputting a DC voltage equal to the peak value of the applied AC signal. The circuit is shown in Figure below with the corresponding SPICE net list. An AC voltage source applied to the peak detector charges the capacitor to the peak of the input. The diode conducts positive “half cycles,” charging the capacitor to the waveform peak. When the input waveform falls below the DC “peak” stored on the capacitor, the diode is reverse biased, blocking current flow from capacitor back to the source. Thus, the capacitor retains the peak value even as the waveform drops to zero. Another view of the peak detector is that it is the same as a half-wave rectifier with a filter capacitor added to the output.
Peak detector: Diode conducts on positive half cycles charging capacitor to the peak voltage (less diode forward drop).
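The netlist referenced above is not reproduced in this text version; a minimal sketch along the same lines is given below. Node numbers are arbitrary, and the 5 V 1 kHz source, 1 kΩ resistor, and 1000 pF capacitor are assumed from the values quoted later in this chapter.

* Peak detector (sketch; values assumed)
* V(3) charges to about 4.3 V: the 5 V peak less one diode drop.
V1 1 0 SIN(0 5 1k)
R1 1 2 1.0k
D1 2 3 mod1
C1 3 0 1000p
.model mod1 d
.tran 0.01m 5m
.end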
It takes a few cycles for the capacitor to charge to the peak as in Figure below due to the series resistance (RC “time constant”). Why does the capacitor not charge all the way to 5 V? It would charge to 5 V if an “ideal diode” were obtainable. However, the silicon diode has a forward voltage drop of 0.7 V which subtracts from the 5 V peak of the input.
Peak detector: Capacitor charges to peak within a few cycles.
The circuit in Figure above could represent a DC power supply based on a half-wave rectifier. The resistance would be a few Ohms instead of 1 kΩ due to a transformer secondary winding replacing the voltage source and resistor. A larger “filter” capacitor would be used. A power supply based on a 60 Hz source with a filter of a few hundred µF could supply up to 100 mA. Half-wave supplies seldom supply more due to the difficulty of filtering a half-wave.
The peak detector may be combined with other components to build a crystal radio.
3.06: Clipper Circuits
A circuit which removes the peak of a waveform is known as a clipper. A negative clipper is shown in Figure below. This schematic diagram was produced with the Xcircuit schematic capture program. Xcircuit produced the SPICE net list in Figure below, except for the second and next-to-last pairs of lines, which were inserted with a text editor.
Clipper: clips negative peak at -0.7 V.
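Since the netlist itself is not reproduced in this text version, a minimal sketch of the same clipper is given below; node numbers and values are assumptions based on the surrounding text.

* Negative clipper (sketch; values assumed)
* V(2) follows the positive half-cycles and is clipped near -0.7 V on the negative ones.
V1 1 0 SIN(0 5 1k)
R1 1 2 1.0k
D1 0 2 mod1
.model mod1 d
.tran 0.01m 2m
.end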
During the positive half cycle of the 5 V peak input, the diode is reverse biased. The diode does not conduct. It is as if the diode were not there. The positive half cycle is unchanged at the output V(2) in Figure below. Since the output positive peaks actually overlay the input sinewave V(1), the input has been shifted upward in the plot for clarity. In Nutmeg, the SPICE display module, the command “plot v(1)+1” accomplishes this.
V(1)+1 is actually V(1), a 10 Vptp sinewave, offset by 1 V for display clarity. V(2) output is clipped at -0.7 V, by diode D1.
During the negative half cycle of sinewave input of Figure above, the diode is forward biased, that is, conducting. The negative half cycle of the sinewave is shorted out. The negative half cycle of V(2) would be clipped at 0 V for an ideal diode. The waveform is clipped at -0.7 V due to the forward voltage drop of the silicon diode. The spice model defaults to 0.7 V unless parameters in the model statement specify otherwise. Germanium or Schottky diodes clip at lower voltages.
Closer examination of the negative clipped peak (Figure above) reveals that it follows the input for a slight period of time while the sinewave is moving toward -0.7 V. The clipping action is only effective after the input sinewave exceeds -0.7 V. The diode does not conduct for the complete half cycle, though it does conduct during most of it.
The addition of an anti-parallel diode to the existing diode in Figure above yields the symmetrical clipper in Figure below.
Symmetrical clipper: Anti-parallel diodes clip both positive and negative peak, leaving a ± 0.7 V output.
Diode D1 clips the negative peak at -0.7 V as before. The additional diode D2 conducts for positive half cycles of the sine wave as it exceeds 0.7 V, the forward diode drop. The remainder of the voltage drops across the series resistor. Thus, both peaks of the input sinewave are clipped in Figure below. The net list is in Figure above.
Diode D1 clips at -0.7 V as it conducts during negative peaks. D2 conducts for positive peaks, clipping at 0.7V.
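A sketch of the symmetrical clipper is simply the previous clipper netlist with a second, anti-parallel diode added (values again assumed):

* Symmetrical clipper (sketch; values assumed)
* V(2) is limited to roughly +/-0.7 V on both half-cycles.
V1 1 0 SIN(0 5 1k)
R1 1 2 1.0k
D1 0 2 mod1
D2 2 0 mod1
.model mod1 d
.tran 0.01m 2m
.end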
The most general form of the diode clipper is shown in Figure below. For an ideal diode, the clipping occurs at the level of the clipping voltage, V1 and V2. However, the voltage sources have been adjusted to account for the 0.7 V forward drop of the real silicon diodes. D1 clips at 1.3 V + 0.7 V = 2.0 V when the diode begins to conduct. D2 clips at -2.3 V - 0.7 V = -3.0 V when D2 conducts.
D1 clips the input sinewave at 2V. D2 clips at -3V.
The clipper in Figure above does not have to clip both levels. To clip at one level with one diode and one voltage source, remove the other diode and source.
The net list is in Figure above. The waveforms in Figure below show the clipping of v(1) at output v(2).
D1 clips the sinewave at 2V. D2 clips at -3V.
There is also a zener diode clipper circuit in the “Zener diode” section. A zener diode replaces both the diode and the DC voltage source.
A practical application of a clipper is to prevent an amplified speech signal from overdriving a radio transmitter in Figure below. Overdriving the transmitter generates spurious radio signals which cause interference with other stations. The clipper is a protective measure.
Clipper prevents overdriving of the radio transmitter by voice peaks.
A sinewave may be squared up by overdriving a clipper. Another clipper application is the protection of exposed inputs of integrated circuits. The input of the IC is connected to a pair of diodes as at node “2” of Figure above. The voltage sources are replaced by the power supply rails of the IC. For example, CMOS IC’s use 0V and +5 V. Analog amplifiers might use ±12V for the V1 and V2 sources.
REVIEW
• A resistor and diode driven by an AC voltage source clips the signal observed across the diode.
• A pair of anti-parallel Si diodes clip symmetrically at ±0.7V
• The grounded end of a clipper diode(s) can be disconnected and wired to a DC voltage to clip at an arbitrary level.
• A clipper can serve as a protective measure, preventing a signal from exceeding the clip limits.
What is Clamp Voltage?
What is clamp voltage? And, which peak gets clamped? In the figure below (a), the clamp voltage is 0 V ignoring the diode drop (more exactly 0.7 V with the Si diode drop).
In Figure below, the positive peak of V(1) is clamped to the 0 V (0.7 V) clamp level. Why is this? On the first positive half cycle, the diode conducts charging the capacitor left end to +5 V (4.3 V). This is -5 V (-4.3 V) on the right end at V(1,4).
Note the polarity marked on the capacitor in the figure below (a). The right end of the capacitor is -5 V DC (-4.3 V) with respect to ground. It also has an AC 5 V peak sinewave coupled across it from source V(4) to node 1. The sum of the two is a 5 V peak sine riding on a -5 V DC (-4.3 V) level. The diode only conducts on successive positive excursions of source V(4) if the peak V(4) exceeds the charge on the capacitor. This only happens if the charge on the capacitor has drained off due to a load (not shown). The charge on the capacitor is equal to the positive peak of V(4) (less 0.7 diode drop). The AC riding on the negative end, right end, is shifted down. The positive peak of the waveform is clamped to 0 V (0.7 V) because the diode conducts on the positive peak.
Clampers: (a) Positive peak clamped to 0 V. (b) Negative peak clamped to 0 V. (c) Negative peak clamped to 5 V.
V(4) source voltage 5 V peak used in all clampers. V(1) clamper output from the Figure above (a). V(1,4) DC voltage on capacitor in Figure (a). V(2) clamper output from Figure (b). V(3) clamper output from Figure (c).
Suppose the polarity of the diode is reversed as in the figure above (b)? The diode conducts on the negative peak of source V(4). The negative peak is clamped to 0 V (-0.7 V). See V(2) in the figure above.
The most general realization of the clamper is shown in the figure above (c) with the diode connected to a DC reference. The capacitor still charges during the negative peak of the source. Note that the polarities of the AC source and the DC reference are series aiding. Thus, the capacitor charges to the sum of the two, 10 V DC (9.3 V). Coupling the 5 V peak sinewave across the capacitor yields the Figure above V(3), the sum of the charge on the capacitor and the sinewave. The negative peak appears to be clamped to 5 V DC (4.3V), the value of the DC clamp reference (less diode drop).
Describe the waveform if the DC clamp reference is changed from 5 V to 10 V. The clamped waveform will shift up. The negative peak will be clamped to 10 V (9.3 V). Suppose that the amplitude of the sine wave source is increased from 5 V to 7 V? The negative peak clamp level will remain unchanged, though the amplitude of the sinewave output will increase.
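A SPICE sketch of the clamper with a DC reference, circuit (c), is shown below; the capacitor size, load resistance, and node numbers are assumed for illustration. Removing Vref and grounding the anode of D1 gives the basic clamper of (b).

* Clamper with DC reference (sketch; values assumed)
* The negative peak of V(3) sits near 4.3 V; the full 10 V peak-to-peak sinewave rides above it.
V1 4 0 SIN(0 5 1k)
C1 4 3 0.1u
D1 5 3 mod1
Vref 5 0 DC 5
Rload 3 0 10Meg
.model mod1 d
.tran 0.01m 5m
.end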
Clamper Circuits as DC Restorers
An application of the clamper circuit is as a “DC restorer” in “composite video” circuitry in both television transmitters and receivers. An NTSC (US video standard) video signal “white level” corresponds to a minimum (12.5%) transmitted power. The video “black level” corresponds to a high level (75%) of transmitter power. There is a “blacker than black level” corresponding to 100% transmitted power assigned to synchronization signals. The NTSC signal contains both video and synchronization pulses. The problem with the composite video is that its average DC level varies with the scene, dark vs light. The video itself is supposed to vary. However, the sync must always peak at 100%. To prevent the sync signals from drifting with changing scenes, a “DC restorer” clamps the top of the sync pulses to a voltage corresponding to 100% transmitter modulation. [ATCO]
Review
• A capacitively coupled signal alternates about its average DC level (0 V).
• The signal out of a clamper appears the have one peak clamped to a DC voltage. Example: The negative peak is clamped to 0 VDC, the waveform appears to be shifted upward. The polarity of the diode determines which peak is clamped.
• An application of a clamper, or DC restorer, is in clamping the sync pulses of composite video to a voltage corresponding to 100% of transmitter power.
We’ll first go over several types of voltage multipliers—voltage doubler (half- and full-wave), voltage tripler, and voltage quadrupler—then make some general notes about voltage multiplier safety and finish up with the Cockcroft-Walton multiplier.
Voltage Doubler
A voltage doubler application is a DC power supply capable of using either a 240 VAC or 120 VAC source. The supply uses a switch selected full-wave bridge to produce about 300 VDC from a 240 VAC source. The 120 V position of the switch rewires the bridge as a doubler producing about 300 VDC from the 120 VAC. In both cases, 300 VDC is produced. This is the input to a switching regulator producing lower voltages for powering, say, a personal computer.
Half-Wave Voltage Doubler
The half-wave voltage doubler in Figure below (a) is composed of two circuits: a clamper at (b) and a peak detector (half-wave rectifier) in Figure prior, which is shown in modified form in Figure below (c). C2 has been added to a peak detector (half-wave rectifier).
Half-wave voltage doubler (a) is composed of (b) a clamper and (c) a half-wave rectifier.
Referring to Figure above (b), C2 charges to 5 V (4.3 V considering the diode drop) on the negative half cycle of AC input. The right end is grounded by the conducting D2. The left end is charged at the negative peak of the AC input. This is the operation of the clamper.
During the positive half cycle, the half-wave rectifier comes into play at Figure above (c). Diode D2 is out of the circuit since it is reverse biased. C2 is now in series with the voltage source. Note the polarities of the generator and C2, series aiding. Thus, rectifier D1 sees a total of 10 V at the peak of the sinewave, 5 V from generator and 5 V from C2. D1 conducts waveform v(1) (Figure below), charging C1 to the peak of the sine wave riding on 5 V DC (Figure below v(2)). Waveform v(2) is the output of the doubler, which stabilizes at 10 V (8.6 V with diode drops) after a few cycles of sinewave input.
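A minimal SPICE sketch of the half-wave doubler follows; component designators follow the description above, and node numbers and values are assumed.

* Half-wave voltage doubler (sketch; values assumed)
* V(1) is the clamper node; V(2) settles near 8.6 V, twice the 5 V peak less two diode drops.
V1 4 0 SIN(0 5 1k)
C2 4 1 1000p
D2 0 1 mod1
D1 1 2 mod1
C1 2 0 1000p
.model mod1 d
.tran 0.01m 5m
.end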
Full-Wave Voltage Doubler
The full-wave voltage doubler is composed of a pair of series stacked half-wave rectifiers. (Figure below) The corresponding netlist is in Figure below. The bottom rectifier charges C1 on the negative half cycle of input. The top rectifier charges C2 on the positive halfcycle. Each capacitor takes on a charge of 5 V (4.3 V considering diode drop). The output at node 5 is the series total of C1 + C2 or 10 V (8.6 V with diode drops).
Full-wave voltage doubler consists of two half-wave rectifiers operating on alternating polarities.
Note that the output v(5) Figure below reaches full value within one cycle of the input v(2) excursion.
Full-wave voltage doubler: v(2) input, v(3)voltage at mid point, v(5) voltage at output
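A minimal SPICE sketch of the full-wave doubler follows; node numbers are chosen to match the waveform labels above, and all values are assumed.

* Full-wave voltage doubler (sketch; values assumed)
* V(3) is the midpoint; V(5) reaches about 8.6 V within roughly one input cycle.
V1 2 3 SIN(0 5 1k)
D1 0 2 mod1
D2 2 5 mod1
C1 3 0 1000p
C2 5 3 1000p
.model mod1 d
.tran 0.01m 5m
.end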
Figure below illustrates the derivation of the full-wave doubler from a pair of opposite polarity half-wave rectifiers (a). The negative rectifier of the pair is redrawn for clarity (b). Both are combined at (c) sharing the same ground. At (d) the negative rectifier is re-wired to share one voltage source with the positive rectifier. This yields a ±5 V (4.3 V with diode drop) power supply; though, 10 V is measurable between the two outputs. The ground reference point is moved so that +10 V is available with respect to ground.
Full-wave doubler: (a) Pair of doublers, (b) redrawn, (c) sharing the ground, (d) share the same voltage source. (e) move the ground point.
Voltage Tripler
A voltage tripler (Figure below) is built from a combination of a doubler and a half wave rectifier (C3, D3). The half-wave rectifier produces 5 V (4.3 V) at node 3. The doubler provides another 10 V (8.6 V) between nodes 2 and 3, for a total of 15 V (12.9 V) at the output node 2 with respect to ground. The netlist is in Figure below.
Voltage tripler composed of doubler stacked atop a single stage rectifier.
Note that V(3) in Figure below rises to 5 V (4.3 V) on the first negative half cycle. Input v(4) is shifted upward by 5 V (4.3 V) due to 5 V from the half-wave rectifier. And 5 V more at v(1) due to the clamper (C2, D2). D1 charges C1 (waveform v(2)) to the peak value of v(1).
Voltage tripler: v(3) half-wave rectifier, v(4) input+ 5 V, v(1) clamper, v(2) final output.
Voltage Quadrupler
A voltage quadrupler is a stacked combination of two doublers shown in Figure below. Each doubler provides 10 V (8.6 V) for a series total at node 2 with respect to ground of 20 V (17.2 V).
The netlist is in Figure below.
Voltage quadrupler, composed of two doublers stacked in series, with output at node 2.
The waveforms of the quadrupler are shown in Figure below. Two DC outputs are available: v(3), the doubler output, and v(2), the quadrupler output. Some of the intermediate voltages at clampers illustrate that the input sinewave (not shown), which swings by 5 V, is successively clamped at higher levels: at v(5), v(4) and v(1). Strictly, v(4) is not a clamper output. It is simply the AC voltage source in series with v(3), the doubler output. Nonetheless, v(1) is a clamped version of v(4).
Voltage quadrupler: DC voltage available at v(3) and v(2). Intermediate waveforms: Clampers: v(5), v(4), v(1).
Notes on Voltage Multipliers and Line Driven Power Supplies
Some notes on voltage multipliers are in order at this point. The circuit parameters used in the examples (V= 5 V 1 kHz, C=1000 pF) do not provide much current, only microamps. Furthermore, load resistors have been omitted. Loading reduces the voltages from those shown. If the circuits are to be driven by a kHz source at low voltage, as in the examples, the capacitors are usually 0.1 to 1.0 µF so that milliamps of current are available at the output. If the multipliers are driven from 50/60 Hz, the capacitors are a few hundred to a few thousand microfarads to provide hundreds of milliamps of output current. If driven from line voltage, pay attention to the polarity and voltage ratings of the capacitors.
Finally, any direct line driven power supply (no transformer) is dangerous to the experimenter and line operated test equipment. Commercial direct driven supplies are safe because the hazardous circuitry is in an enclosure to protect the user. When breadboarding these circuits with electrolytic capacitors of any voltage, the capacitors will explode if the polarity is reversed. Such circuits should be powered up behind a safety shield.
Cockcroft-Walton Multiplier
A voltage multiplier of cascaded half-wave doublers of arbitrary length is known as a Cockcroft-Walton multiplier as shown in Figure below. This multiplier is used when a high voltage at low current is required. The advantage over a conventional supply is that an expensive high voltage transformer is not required, at least not one rated as high as the output voltage.
Cockcroft-Walton x8 voltage multiplier; output at v(8).
The pair of diodes and capacitors to the left of nodes 1 and 2 in Figure above constitute a half-wave doubler. Rotating the diodes by 45° counterclockwise and the bottom capacitor by 90° makes it look like Figure prior (a). Four of the doubler sections are cascaded to the right for a theoretical x8 multiplication factor. Node 1 has a clamper waveform (not shown), a sinewave shifted up by 1x (5 V). The other odd numbered nodes are sinewaves clamped to successively higher voltages. Node 2, the output of the first doubler, is a 2x DC voltage v(2) in Figure below. Successive even numbered nodes charge to successively higher voltages: v(4), v(6), v(8).
Cockcroft-Walton (x8) waveforms. Output is v(8).
Without diode drops, each doubler yields 2Vin, or 10 V. Considering two diode drops, (10-1.4)=8.6 V per doubler is more realistic. For a total of 4 doublers, one expects 4·8.6=34.4 V out of 40 V.
Consulting Figure above, v(2) is about right; however, v(8) is <30 V instead of the anticipated 34.4 V. The bane of the Cockcroft-Walton multiplier is that each additional stage adds less than the previous stage. Thus, a practical limit to the number of stages exists. It is possible to overcome this limitation with a modification to the basic circuit. [ABR] Also note the time scale of 40 msec compared with 5 ms for previous circuits. It required 40 msec for the voltages to rise to a terminal value for this circuit. The netlist in Figure above has a “.tran 0.010m 50m” command to extend the simulation time to 50 msec, though only 40 msec is plotted.
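That netlist is likewise not reproduced in this text version; a sketch of the same x8 ladder is given below. Node numbers follow the description above (odd nodes are clamped sinewaves, even nodes are DC), and the component values are assumed.

* Cockcroft-Walton x8 voltage multiplier (sketch; values assumed)
* DC appears at the even nodes: V(2)=2x, V(4)=4x, V(6)=6x, V(8)=8x (less diode drops).
V1 99 0 SIN(0 5 1k)
C1 99 1 1000p
D1 0 1 mod1
D2 1 2 mod1
C2 2 0 1000p
C3 1 3 1000p
D3 2 3 mod1
D4 3 4 mod1
C4 4 2 1000p
C5 3 5 1000p
D5 4 5 mod1
D6 5 6 mod1
C6 6 4 1000p
C7 5 7 1000p
D7 6 7 mod1
D8 7 8 mod1
C8 8 6 1000p
.model mod1 d
.tran 0.01m 50m
.end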
The Cockcroft-Walton multiplier serves as a more efficient high voltage source for photomultiplier tubes requiring up to 2000 V. [ABR] Moreover, the tube has numerous dynodes, terminals requiring connection to the lower voltage “even numbered” nodes. The series string of multiplier taps replaces a heat generating resistive voltage divider of previous designs.
An AC line operated Cockcroft-Walton multiplier provides high voltage to “ion generators” for neutralizing electrostatic charge and for air purifiers.
Voltage Multiplier Review:
• A voltage multiplier produces a DC multiple (2,3,4, etc) of the AC peak input voltage.
• The most basic multiplier is a half-wave doubler.
• The full-wave doubler is a superior circuit as a doubler.
• A tripler is a half-wave doubler and a conventional rectifier stage (peak detector).
• A quadrupler is a pair of half-wave doublers
• A long string of half-wave doublers is known as a Cockcroft-Walton multiplier.
A popular use of diodes is for the mitigation of inductive “kickback:” the pulses of high voltage produced when direct current through an inductor is interrupted. Take, for example, this simple circuit in Figure below with no protection against inductive kickback.
Inductive kickback: (a) Switch open. (b) Switch closed, electron current flows from battery through coil which has polarity matching battery. Magnetic field stores energy. (c) Switch open, Current still flows in coil due to collapsing magnetic field. Note polarity change on coil. (d) Coil voltage vs time.
When the pushbutton switch is actuated, current goes through the inductor, producing a magnetic field around it. When the switch is de-actuated, its contacts open, interrupting current through the inductor, and causing the magnetic field to rapidly collapse. Because the voltage induced in a coil of wire is directly proportional to the rate of change over time of magnetic flux (Faraday’s Law: e = NdΦ/dt), this rapid collapse of magnetism around the coil produces a high voltage “spike”.
If the inductor in question is an electromagnet coil, such as in a solenoid or relay (constructed for the purpose of creating a physical force via its magnetic field when energized), the effect of inductive “kickback” serves no useful purpose at all. In fact, it is quite detrimental to the switch, as it causes excessive arcing at the contacts, greatly reducing their service life. Of the practical methods for mitigating the high voltage transient created when the switch is opened, none is so simple as the so-called commutating diode in Figure below.
Inductive kickback with protection: (a) Switch open. (b)Switch closed, storing energy in magnetic field. (c) Switch open, inductive kickback is shorted by diode.
In this circuit, the diode is placed in parallel with the coil, such that it will be reverse-biased when DC voltage is applied to the coil through the switch. Thus, when the coil is energized, the diode conducts no current in Figure above (b).
However, when the switch is opened, the coil’s inductance responds to the decrease in current by inducing a voltage of reverse polarity, in an effort to maintain current at the same magnitude and in the same direction. This sudden reversal of voltage polarity across the coil forward-biases the diode, and the diode provides a current path for the inductor’s current, so that its stored energy is dissipated slowly rather than suddenly in Figure above (c).
As a result, the voltage induced in the coil by its collapsing magnetic field is quite low: merely the forward voltage drop of the diode, rather than hundreds of volts as before. Thus, the switch contacts experience a voltage drop equal to the battery voltage plus about 0.7 volts (if the diode is silicon) during this discharge time.
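A SPICE sketch of the protected circuit follows. The supply voltage, coil resistance and inductance, and the switch model are assumed values; the pushbutton is modeled as a voltage-controlled switch that opens at 5 ms.

* Inductive kickback with commutating diode (sketch; values assumed)
* While the switch is closed, about 120 mA flows in the coil and D1 is reverse-biased.
* When the switch opens at 5 ms, the coil current freewheels through D1,
* so V(2) dips only about one diode drop below ground instead of spiking.
V1 1 0 DC 12
S1 1 2 10 0 swmod
Vctl 10 0 PULSE(1 0 5m 1u 1u 10m 20m)
Rcoil 2 3 100
Lcoil 3 0 100m
D1 0 2 mod1
.model swmod sw(vt=0.5 ron=1 roff=1e6)
.model mod1 d
.tran 10u 15m
.end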
In electronics parlance, commutation refers to the reversal of voltage polarity or current direction. Thus, the purpose of a commutating diode is to act whenever voltage reverses polarity, for example, on an inductor coil when current through it is interrupted. A less formal term for a commutating diode is snubber, because it “snubs” or “squelches” the inductive kickback.
A noteworthy disadvantage of this method is the extra time it imparts to the coil’s discharge. Because the induced voltage is clamped to a very low value, its rate of magnetic flux change over time is comparatively slow. Remember that Faraday’s Law describes the magnetic flux rate-of-change (dΦ/dt) as being proportional to the induced, instantaneous voltage (e or v). If the instantaneous voltage is limited to some low figure, then the rate of change of magnetic flux over time will likewise be limited to a low (slow) figure.
If an electromagnet coil is “snubbed” with a commutating diode, the magnetic field will dissipate at a relatively slow rate compared to the original scenario (no diode) where the field disappeared almost instantly upon switch release. The amount of time in question will most likely be less than one second, but it will be measurably slower than without a commutating diode in place. This may be an intolerable consequence if the coil is used to actuate an electromechanical relay, because the relay will possess a natural “time delay” upon coil de-energization, and an unwanted delay of even a fraction of a second may wreak havoc in some circuits.
Unfortunately, one cannot eliminate the high-voltage transient of inductive kickback and maintain fast de-magnetization of the coil: Faraday’s Law will not be violated. However, if slow de-magnetization is unacceptable, a compromise may be struck between transient voltage and time by allowing the coil’s voltage to rise to some higher level (but not so high as without a commutating diode in place). The schematic in Figure below shows how this can be done.
(a) Commutating diode with series resistor. (b) Voltage waveform. (c) Level with no diode. (d) Level with diode, no resistor. (e) Compromise level with diode and resistor.
A resistor placed in series with the commutating diode allows the coil’s induced voltage to rise to a level greater than the diode’s forward voltage drop, thus hastening the process of de-magnetization. This, of course, will place the switch contacts under greater stress, and so the resistor must be sized to limit that transient voltage at an acceptable maximum level.
Logic
Diodes can perform digital logic functions: AND and OR. Diode logic was used in early digital computers. It only finds limited application today. Sometimes it is convenient to fashion a single logic gate from a few diodes.
Diode AND gate
An AND gate is shown in Figure above. Logic gates have inputs and an output (Y) which is a function of the inputs. The inputs to the gate are high (logic 1), say 10 V, or low, 0 V (logic 0). In the figure, the logic levels are generated by switches. If a switch is up, the input is effectively high (1). If the switch is down, it connects a diode cathode to ground, which is low (0). The output depends on the combination of inputs at A and B. The inputs and output are customarily recorded in a “truth table” at (c) to describe the logic of a gate. At (a) all inputs are high (1). This is recorded in the last line of the truth table at (c). The output, Y, is high (1) due to the V+ on the top of the resistor. It is unaffected by open switches. At (b) switch A pulls the cathode of the connected diode low, pulling output Y low (0.7 V). This is recorded in the third line of the truth table. The second line of the truth table describes the output with the switches reversed from (b). Switch B pulls the diode and output low. The first line of the truth table records Output=0 for both inputs low (0). The truth table describes a logical AND function. Summary: both inputs A and B high yields a high (1) out.
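A SPICE operating-point sketch of the AND gate follows; the 5 V logic levels, the 10 kΩ pull-up, and the node numbers are assumptions for illustration (the figure uses V+ and switches rather than the fixed sources shown here).

* Two-input diode AND gate (sketch; 5 V logic assumed)
* With input A low (Va=0) and input B high, output V(2) sits near 0.7 V (logic 0).
* With both Va and Vb set to 5 V, V(2) rises to about 5 V (logic 1).
Vcc 1 0 DC 5
R1 1 2 10k
Va 3 0 DC 0
Vb 4 0 DC 5
D1 2 3 mod1
D2 2 4 mod1
.model mod1 d
.op
.end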
A two input OR gate composed of a pair of diodes is shown in Figure below. If both inputs are logic low at (a) as simulated by both switches “downward,” the output Y is pulled low by the resistor. This logic zero is recorded in the first line of the truth table at (c). If one of the inputs is high as at (b), or the other input is high, or both inputs high, the diode(s) conduct(s), pulling the output Y high. These results are recorded in the second through fourth lines of the truth table. Summary: any input “high” is a high out at Y.
OR gate: (a) First line, truth table (TT). (b) Third line TT. (d) Logical OR of power line supply and back-up battery.
A backup battery may be OR-wired with a line operated DC power supply in Figure above (d) to power a load, even during a power failure. With AC power present, the line supply powers the load, assuming that it is a higher voltage than the battery. In the event of a power failure, the line supply voltage drops to 0 V; the battery powers the load. The diodes must be in series with the power sources to prevent a failed line supply from draining the battery, and to prevent it from overcharging the battery when line power is available. Does your PC computer retain its BIOS setting when powered off? Does your VCR (video cassette recorder) retain the clock setting after a power failure? (PC Yes, old VCR no, new VCR yes.)
Analog switch
Diodes can switch analog signals. A reverse biased diode appears to be an open circuit. A forward biased diode is a low resistance conductor. The only problem is isolating the AC signal being switched from the DC control signal. The circuit in Figure below is a parallel resonant network: resonant tuning inductor paralleled by one (or more) of the switched resonator capacitors. This parallel LC resonant circuit could be a preselector filter for a radio receiver. It could be the frequency determining network of an oscillator (not shown). The digital control lines may be driven by a microprocessor interface.
Diode switch: A digital control signal (low) selects a resonator capacitor by forward biasing the switching diode.
The large value DC blocking capacitor grounds the resonant tuning inductor for AC while blocking DC. It would have a low reactance compared to the parallel LC reactances. This prevents the anode DC voltage from being shorted to ground by the resonant tuning inductor. A switched resonator capacitor is selected by pulling the corresponding digital control low. This forward biases the switching diode. The DC current path is from +5 V through an RF choke (RFC), a switching diode, and an RFC to ground via the digital control. The purpose of the RFC at the +5 V is to keep AC out of the +5 V supply. The RFC in series with the digital control is to keep AC out of the external control line. The decoupling capacitor shorts the little AC leaking through the RFC to ground, bypassing the external digital control line.
With all three digital control lines high (≥+5 V), no switched resonator capacitors are selected due to diode reverse bias. Pulling one or more lines low, selects one or more switched resonator capacitors, respectively. As more capacitors are switched in parallel with the resonant tuning inductor, the resonant frequency decreases.
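The effect of switching in capacitors can be estimated from the resonance formula f = 1/(2π√(LC)). The short Python sketch below uses illustrative component values (a 10 µH inductor and 47/100/220 pF switched capacitors), which are assumptions rather than values from the figure, and prints the resonant frequency for each combination of control lines pulled low.

from itertools import combinations
from math import pi, sqrt

L = 10e-6                                   # resonant tuning inductor (assumed value)
switched_caps = [47e-12, 100e-12, 220e-12]  # switched resonator capacitors (assumed values)

def resonant_frequency(c_total):
    # f = 1 / (2*pi*sqrt(L*C)) for the parallel LC tank
    return 1.0 / (2 * pi * sqrt(L * c_total))

for n in range(1, len(switched_caps) + 1):
    for combo in combinations(switched_caps, n):
        c = sum(combo)                      # capacitors in parallel add
        print(f"C = {c*1e12:4.0f} pF -> f = {resonant_frequency(c)/1e6:6.2f} MHz")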
The reverse biased diode capacitance may be substantial in very high frequency or ultra high frequency circuits. PIN diodes may be used as switches for lower capacitance.
What Is a Zener Diode?
A Zener diode is a special type of rectifying diode that can withstand reverse breakdown without failing completely. Here we will discuss the concept of using diodes to regulate voltage drop and how the Zener diode operates in reverse-bias mode to regulate voltage in a circuit.
How Diodes Regulate Voltage Drop
If we connect a diode and resistor in series with a DC voltage source so that the diode is forward-biased, the voltage drop across the diode will remain fairly constant over a wide range of power supply voltages as in Figure below (a).
The current through a forward-biased PN junction is proportional to e raised to the power of the forward voltage drop. Because this is an exponential function, current rises quite rapidly for modest increases in voltage drop.
Another way of considering this is to say that voltage dropped across a forward-biased diode changes little for large variations in diode current. In the circuit shown in the figure below (a), diode current is limited by the voltage of the power supply, the series resistor, and the diode’s voltage drop, which as we know doesn’t vary much from 0.7 volts.
Forward biased Si reference: (a) single diode, 0.7V, (b) 10-diodes in series 7.0V.
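The exponential behavior can be illustrated numerically with the Shockley diode equation, I = IS·(exp(V/(n·VT)) − 1). The saturation current and ideality factor below are typical assumed values, not figures from the text; the point is that a few tens of millivolts of extra forward voltage multiply the current many times over.

from math import exp

IS = 1e-12   # saturation current in amperes (assumed typical value)
N  = 1.0     # ideality factor (assumed)
VT = 0.026   # thermal voltage at room temperature, about 26 mV

def diode_current(v_forward):
    # Shockley diode equation: I = IS * (exp(V / (N*VT)) - 1)
    return IS * (exp(v_forward / (N * VT)) - 1.0)

for v in (0.60, 0.65, 0.70, 0.75):
    print(f"Vf = {v:.2f} V -> I = {diode_current(v)*1000:10.1f} mA")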
If the power supply voltage were to be increased, the resistor’s voltage drop would increase almost the same amount, and the diode’s voltage would drop just a little. Conversely, a decrease in power supply voltage would result in an almost equal decrease in resistor voltage drop, with just a little decrease in diode voltage drop.
In a word, we could summarize this behavior by saying that the diode is regulating the voltage drop at approximately 0.7 volts.
The Use of Voltage Regulation
Voltage regulation is a useful diode property to exploit. Suppose we were building some kind of circuit which could not tolerate variations in power supply voltage, but needed to be powered by a chemical battery, whose voltage changes over its lifetime. We could form a circuit as shown above and connect the circuit requiring steady voltage across the diode, where it would receive an unchanging 0.7 volts.
This would certainly work, but most practical circuits of any kind require a power supply voltage in excess of 0.7 volts to properly function. One way we could increase our voltage regulation point would be to connect multiple diodes in series so that their individual forward voltage drops of 0.7 volts each would add to create a larger total.
For instance, in our example above (b), if we had ten diodes in series, the regulated voltage would be ten times 0.7, or 7 volts.
So long as the battery voltage never sagged below 7 volts, there would always be about 7 volts dropped across the ten-diode “stack.”
How Zener Diodes Regulate Voltage
If larger regulated voltages are required, we could either use more diodes in series (an inelegant option, in my opinion) or try a fundamentally different approach.
We know that diode forward voltage is a fairly constant figure under a wide range of conditions, but so is reverse breakdown voltage. Breakdown voltage is typically much, much greater than forward voltage.
If we reversed the polarity of the diode in our single-diode regulator circuit and increased the power supply voltage to the point where the diode “broke down” (that is, it could no longer withstand the reverse-bias voltage impressed across it), the diode would similarly regulate the voltage at that breakdown point, not allowing it to increase further. This is shown in the figure below (a).
(a) Reverse biased Si small-signal diode breaks down at about 100V. (b) Symbol for Zener diode.
Unfortunately, when normal rectifying diodes “break down,” they usually do so destructively. However, it is possible to build a special type of diode that can handle breakdown without failing completely. This type of diode is called a Zener diode, and its symbol is shown in the figure above (b).
When forward-biased, Zener diodes behave much the same as standard rectifying diodes: they have a forward voltage drop which follows the “diode equation” and is about 0.7 volts. In reverse-bias mode, they do not conduct until the applied voltage reaches or exceeds the so-called Zener voltage, at which point the diode is able to conduct substantial current, and in doing so will try to limit the voltage dropped across it to that Zener voltage point. So long as the power dissipated by this reverse current does not exceed the diode’s thermal limits, the diode will not be harmed. For this reason, Zener diodes are sometimes referred to as “breakdown diodes.”
Zener Diode Circuit
Zener diodes are manufactured with Zener voltages ranging anywhere from a few volts to hundreds of volts. This Zener voltage changes slightly with temperature, and like common carbon-composition resistor values, may be anywhere from 5 percent to 10 percent in error from the manufacturer’s specifications. However, this stability and accuracy are generally good enough for the Zener diode to be used as a voltage regulator device in a common power supply circuit, as in Figure below.
Zener diode regulator circuit, Zener voltage = 12.6V
Please take note of the Zener diode’s orientation in the above circuit: the diode is reverse-biased, and intentionally so. If we had oriented the diode in the “normal” way, so as to be forward-biased, it would only drop 0.7 volts, just like a regular rectifying diode. If we want to exploit this diode’s reverse breakdown properties, we must operate it in its reverse-bias mode. So long as the power supply voltage remains above the Zener voltage (12.6 volts, in this example), the voltage dropped across the Zener diode will remain at approximately 12.6 volts.
Like any semiconductor device, the Zener diode is sensitive to temperature. Excessive temperature will destroy a Zener diode, and because it both drops voltage and conducts current, it produces its own heat in accordance with Joule’s Law (P=IE). Therefore, one must be careful to design the regulator circuit in such a way that the diode’s power dissipation rating is not exceeded. Interestingly enough, when Zener diodes fail due to excessive power dissipation, they usually fail shorted rather than open. A diode failed in this manner is readily detected: it drops almost zero voltage when biased either way, like a piece of wire.
Let’s examine a Zener diode regulating circuit mathematically, determining all voltages, currents, and power dissipations. Taking the same form of circuit shown earlier, we’ll perform calculations assuming a Zener voltage of 12.6 volts, a power supply voltage of 45 volts, and a series resistor value of 1000 Ω (we’ll regard the Zener voltage to be exactly 12.6 volts so as to avoid having to qualify all figures as “approximate”) in Figure below (a).
If the Zener diode’s voltage is 12.6 volts and the power supply’s voltage is 45 volts, there will be 32.4 volts dropped across the resistor (45 volts - 12.6 volts = 32.4 volts). 32.4 volts dropped across 1000 Ω gives 32.4 mA of current in the circuit. (Figure below (b))
(a) Zener Voltage regulator with 1000 Ω resistor. (b) Calculation of voltage drops and current.
Power is calculated by multiplying current by voltage (P=IE), so we can calculate power dissipations for both the resistor and the Zener diode quite easily:
A Zener diode with a power rating of 0.5 watts would be adequate, as would a resistor rated for 1.5 or 2 watts of dissipation.
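These figures follow directly from Ohm’s Law and P=IE, and can be checked with a short calculation such as the Python sketch below.

V_SUPPLY = 45.0     # power supply voltage
V_ZENER  = 12.6     # Zener voltage
R_SERIES = 1000.0   # series dropping resistor, ohms

v_resistor = V_SUPPLY - V_ZENER        # 32.4 V across the resistor
i_circuit  = v_resistor / R_SERIES     # 32.4 mA through the circuit
p_resistor = i_circuit * v_resistor    # about 1.05 W dissipated in the resistor
p_zener    = i_circuit * V_ZENER       # about 0.41 W dissipated in the Zener diode

print(f"Resistor: {v_resistor:.1f} V, {i_circuit*1000:.1f} mA, {p_resistor:.2f} W")
print(f"Zener:    {V_ZENER:.1f} V, {i_circuit*1000:.1f} mA, {p_zener:.2f} W")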
If excessive power dissipation is detrimental, then why not design the circuit for the least amount of dissipation possible? Why not just size the resistor for a very high value of resistance, thus severely limiting current and keeping power dissipation figures very low? Take this circuit, for example, with a 100 kΩ resistor instead of a 1 kΩ resistor. Note that both the power supply voltage and the diode’s Zener voltage in Figure below are identical to the last example:
Zener regulator with 100 kΩ resistor.
With only 1/100 of the current we had before (324 µA instead of 32.4 mA), both power dissipation figures should be 100 times smaller:
Seems ideal, doesn’t it? Less power dissipation means lower operating temperatures for both the diode and the resistor, and also less wasted energy in the system, right? A higher resistance value does reduce power dissipation levels in the circuit, but it, unfortunately, introduces another problem. Remember that the purpose of a regulator circuit is to provide a stable voltage for another circuit. In other words, we’re eventually going to power something with 12.6 volts, and this something will have a current draw of its own. Consider our first regulator circuit, this time with a 500 Ω load connected in parallel with the Zener diode in Figure below.
Zener regulator with 1000 Ω series resistor and 500 Ω load.
If 12.6 volts is maintained across a 500 Ω load, the load will draw 25.2 mA of current. In order for the 1 kΩ series “dropping” resistor to drop 32.4 volts (reducing the power supply’s voltage of 45 volts down to 12.6 across the Zener), it still must conduct 32.4 mA of current. This leaves 7.2 mA of current through the Zener diode.
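The current bookkeeping for the loaded regulator is a simple application of Kirchhoff’s Current Law, as the sketch below shows.

V_SUPPLY, V_ZENER = 45.0, 12.6
R_SERIES, R_LOAD  = 1000.0, 500.0

i_series = (V_SUPPLY - V_ZENER) / R_SERIES   # 32.4 mA through the dropping resistor
i_load   = V_ZENER / R_LOAD                  # 25.2 mA drawn by the load
i_zener  = i_series - i_load                 # 7.2 mA left for the Zener diode

print(f"series: {i_series*1000:.1f} mA, load: {i_load*1000:.1f} mA, Zener: {i_zener*1000:.1f} mA")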
Now consider our “power-saving” regulator circuit with the 100 kΩ dropping resistor, delivering power to the same 500 Ω load. What it is supposed to do is maintain 12.6 volts across the load, just like the last circuit. However, as we will see, it cannot accomplish this task. (Figure below)
Zener non-regulator with 100 kΩ series resistor and 500 Ω load.
With the larger value of dropping resistor in place, there will only be about 224 mV of voltage across the 500 Ω load, far less than the expected value of 12.6 volts! Why is this? If we actually had 12.6 volts across the load, it would draw 25.2 mA of current, as before. This load current would have to go through the series dropping resistor as it did before, but with a new (much larger!) dropping resistor in place, the voltage dropped across that resistor with 25.2 mA of current going through it would be 2,520 volts! Since we obviously don’t have that much voltage supplied by the battery, this cannot happen.
The situation is easier to comprehend if we temporarily remove the Zener diode from the circuit and analyze the behavior of the two resistors alone in Figure below.
Non-regulator with Zener removed.
Both the 100 kΩ dropping resistor and the 500 Ω load resistance are in series with each other, giving a total circuit resistance of 100.5 kΩ. With a total voltage of 45 volts and a total resistance of 100.5 kΩ, Ohm’s Law (I=E/R) tells us that the current will be 447.76 µA. Figuring voltage drops across both resistors (E=IR), we arrive at 44.776 volts and 224 mV, respectively. If we were to re-install the Zener diode at this point, it would “see” 224 mV across it as well, being in parallel with the load resistance. This is far below the Zener breakdown voltage of the diode and so it will not “break down” and conduct current. For that matter, at this low voltage the diode wouldn’t conduct even if it were forward-biased! Thus, the diode ceases to regulate voltage. At least 12.6 volts must be dropped across it to “activate” it.
The analytical technique of removing a Zener diode from a circuit and seeing whether or not enough voltage is present to make it conduct is a sound one. Just because a Zener diode happens to be connected in a circuit doesn’t guarantee that the full Zener voltage will always be dropped across it! Remember that Zener diodes work by limiting voltage to some maximum level; they cannot make up for a lack of voltage.
In summary, any Zener diode regulating circuit will function so long as the load’s resistance is equal to or greater than some minimum value. If the load resistance is too low, it will draw too much current, dropping too much voltage across the series dropping resistor, leaving insufficient voltage across the Zener diode to make it conduct. When the Zener diode stops conducting current, it can no longer regulate voltage, and the load voltage will fall below the regulation point.
Our regulator circuit with the 100 kΩ dropping resistor must be good for some value of load resistance, though. To find this acceptable load resistance value, we can use a table to calculate resistance in the two-resistor series circuit (no diode), inserting the known values of total voltage and dropping resistor resistance, and calculating for an expected load voltage of 12.6 volts:
With 45 volts of total voltage and 12.6 volts across the load, we should have 32.4 volts across Rdropping:
With 32.4 volts across the dropping resistor, and 100 kΩ worth of resistance in it, the current through it will be 324 µA:
Being a series circuit, the current is equal through all components at any given time:
Calculating load resistance is now a simple matter of Ohm’s Law (R = E/I), giving us 38.889 kΩ:
Thus, if the load resistance is exactly 38.889 kΩ, there will be 12.6 volts across it, diode or no diode. Any load resistance smaller than 38.889 kΩ will result in a load voltage less than 12.6 volts, diode or no diode. With the diode in place, the load voltage will be regulated to a maximum of 12.6 volts for any load resistance greater than 38.889 kΩ.
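The same analysis can be expressed as a short calculation: treat the circuit as a plain two-resistor divider with the diode removed, and solve for the load resistance that still leaves the full Zener voltage across the load.

V_SUPPLY, V_ZENER = 45.0, 12.6
R_DROP = 100e3                        # 100 kΩ dropping resistor

v_drop     = V_SUPPLY - V_ZENER       # 32.4 V across the dropping resistor
i_series   = v_drop / R_DROP          # 324 µA, common to both series resistors
r_load_min = V_ZENER / i_series       # about 38.889 kΩ minimum load resistance

print(f"I = {i_series*1e6:.0f} uA, minimum load resistance = {r_load_min/1e3:.3f} kOhm")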
With the original value of 1 kΩ for the dropping resistor, our regulator circuit was able to adequately regulate voltage even for a load resistance as low as 500 Ω. What we see is a tradeoff between power dissipation and acceptable load resistance. The higher-value dropping resistor gave us less power dissipation, at the expense of raising the acceptable minimum load resistance value. If we wish to regulate voltage for low-value load resistances, the circuit must be prepared to handle higher power dissipation.
Zener diodes regulate voltage by acting as complementary loads, drawing more or less current as necessary to ensure a constant voltage drop across the load. This is analogous to regulating the speed of an automobile by braking rather than by varying the throttle position: not only is it wasteful, but the brakes must be built to handle all the engine’s power when the driving conditions don’t demand it. Despite this fundamental inefficiency of design, Zener diode regulator circuits are widely employed due to their sheer simplicity. In high-power applications where the inefficiencies would be unacceptable, other voltage-regulating techniques are applied. But even then, small Zener-based circuits are often used to provide a “reference” voltage to drive a more efficient amplifier circuit controlling the main power.
Zener diodes are manufactured in standard voltage ratings listed in Table below. The table “Common Zener diode voltages” lists common voltages for 0.3W and 1.3W parts. The wattage corresponds to die and package size and is the power that the diode may dissipate without damage.
Zener diode clipper: A clipping circuit which clips the peaks of a waveform at approximately the Zener voltage of the diodes. The circuit of Figure below has two Zeners connected series opposing to symmetrically clip a waveform at nearly the Zener voltage. The resistor limits current drawn by the Zeners to a safe value.
The Zener breakdown voltage for the diodes is set at 10 V by the diode model parameter “bv=10” in the spice net list in Figure above. This causes the Zeners to clip at about 10 V. The back-to-back diodes clip both peaks. For a positive half-cycle, the top Zener is reverse biased, breaking down at the Zener voltage of 10 V. The lower Zener drops approximately 0.7 V since it is forward biased. Thus, a more accurate clipping level is 10+0.7=10.7 V. Similar negative half-cycle clipping occurs at -10.7 V. Figure below shows the clipping level at a little over ±10 V.
Zener diode clipper: v(1) input is clipped at waveform v(2).
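The clipper’s transfer characteristic is easy to model: the output follows the input until its magnitude reaches the Zener voltage plus one forward drop. The sketch below assumes a 20 V peak sine wave input purely for illustration.

from math import sin, pi

V_Z, V_F = 10.0, 0.7          # Zener voltage and forward drop of the conducting diode
CLIP = V_Z + V_F              # clipping level, about 10.7 V

def clip(v_in):
    # Output follows input, limited to +/-(Vz + Vf)
    return max(-CLIP, min(CLIP, v_in))

for i in range(9):
    v_in = 20.0 * sin(2 * pi * i / 8)    # one cycle of an assumed 20 V peak sine wave
    print(f"v_in = {v_in:+7.2f} V -> v_out = {clip(v_in):+6.2f} V")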
Review
• Zener diodes are designed to be operated in reverse-bias mode, providing a relatively low, stable breakdown, or Zener voltage at which they begin to conduct substantial reverse current.
• A Zener diode may function as a voltage regulator by acting as an accessory load, drawing more current from the source if the voltage is too high, and less if it is too low. | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_III_-_Semiconductors_(Kuphaldt)/03%3A_Diodes_and_Rectifiers/3.11%3A_What_Are_Zener_Diodes%3F.txt |
Schottky diodes
Schottky diodes are constructed of a metal-to-N junction rather than a P-N semiconductor junction. Also known as hot-carrier diodes, Schottky diodes are characterized by fast switching times (low reverse-recovery time), low forward voltage drop (typically 0.25 to 0.4 volts for a metal-silicon junction), and low junction capacitance.
The schematic symbol for a Schottky diode is shown in Figure below.
Schottky diode schematic symbol.
The forward voltage drop (VF), reverse-recovery time (trr), and junction capacitance (CJ) of Schottky diodes are closer to ideal than the average “rectifying” diode. This makes them well suited for high-frequency applications. Unfortunately, though, Schottky diodes typically have lower forward current (IF) and reverse voltage (VRRM and VDC) ratings than rectifying diodes and are thus unsuitable for applications involving substantial amounts of power. They are, however, used in low voltage switching regulator power supplies.
Schottky diode technology finds broad application in high-speed computer circuits, where the fast switching time equates to high speed capability, and the low forward voltage drop equates to less power dissipation when conducting.
Switching regulator power supplies operating at 100’s of kHz cannot use conventional silicon diodes as rectifiers because of their slow switching speed. When the signal applied to a diode changes from forward to reverse bias, conduction continues for a short time, while carriers are being swept out of the depletion region. Conduction only ceases after this reverse recovery time (trr) has expired. Schottky diodes have a shorter reverse recovery time.
Regardless of switching speed, the 0.7 V forward voltage drop of silicon diodes causes poor efficiency in low voltage supplies. This is not a problem in, say, a 10 V supply. In a 1 V supply the 0.7 V drop is a substantial portion of the output. One solution is to use a Schottky power diode which has a lower forward drop.
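A rough way to see the efficiency penalty is to compare the fraction of delivered power lost in a series rectifier, approximately Vf/(Vout + Vf), for a silicon diode and a Schottky diode. This formula and the output voltages in the sketch below are illustrative assumptions, not figures from the text.

def diode_loss_fraction(v_out, v_f):
    # Approximate fraction of delivered power dissipated in the series rectifier
    return v_f / (v_out + v_f)

for v_out in (10.0, 3.3, 1.0):
    si = diode_loss_fraction(v_out, 0.7)        # conventional silicon diode
    schottky = diode_loss_fraction(v_out, 0.4)  # Schottky diode
    print(f"Vout = {v_out:4.1f} V: silicon loss ~{si:.0%}, Schottky loss ~{schottky:.0%}")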
Tunnel diodes
Tunnel diodes exploit a strange quantum phenomenon called resonant tunneling to provide a negative resistance forward-bias characteristic. When a small forward-bias voltage is applied across a tunnel diode, it begins to conduct current. (Figure below(b)) As the voltage is increased, the current increases and reaches a peak value called the peak current (IP). If the voltage is increased a little more, the current actually begins to decrease until it reaches a low point called the valley current (IV). If the voltage is increased further yet, the current begins to increase again, this time without decreasing into another “valley.” The schematic symbol for the tunnel diode is shown in Figure below(a).
Tunnel diode (a) Schematic symbol. (b) Current vs voltage plot (c) Oscillator.
The forward voltages necessary to drive a tunnel diode to its peak and valley currents are known as peak voltage (VP) and valley voltage (VV), respectively. The region on the graph where current is decreasing while applied voltage is increasing (between VP and VV on the horizontal scale) is known as the region of negative resistance.
Tunnel diodes, also known as Esaki diodes in honor of their Japanese inventor Leo Esaki, are able to transition between peak and valley current levels very quickly, “switching” between high and low states of conduction much faster than even Schottky diodes. Tunnel diode characteristics are also relatively unaffected by changes in temperature.
Reverse breakdown voltage versus doping level. After Sze [SGG]
Tunnel diodes are heavily doped in both the P and N regions, 1000 times the level in a rectifier. This can be seen in Figure above. Standard diodes are to the far left, Zener diodes near to the left, and tunnel diodes to the right of the dashed line. The heavy doping produces an unusually thin depletion region. This produces an unusually low reverse breakdown voltage with high leakage. The thin depletion region causes high capacitance. To overcome this, the tunnel diode junction area must be tiny. The forward diode characteristic consists of two regions: a normal forward diode characteristic with current rising exponentially beyond VF, 0.3 V for Ge, 0.7 V for Si. Between 0 V and VF is an additional “negative resistance” characteristic peak. This is due to quantum mechanical tunneling involving the dual particle-wave nature of electrons. The depletion region is thin enough compared with the equivalent wavelength of the electrons that they can tunnel through it. They do not have to overcome the normal forward diode voltage VF. The energy level of the conduction band of the N-type material overlaps the level of the valence band in the P-type region. With increasing voltage, tunneling begins; the levels overlap; current increases, up to a point. As voltage increases further, the energy levels overlap less; current decreases with increasing voltage. This is the “negative resistance” portion of the curve.
Tunnel diodes are not good rectifiers, as they have relatively high “leakage” current when reverse-biased. Consequently, they find application only in special circuits where their unique tunnel effect has value. To exploit the tunnel effect, these diodes are maintained at a bias voltage somewhere between the peak and valley voltage levels, always in a forward-biased polarity (anode positive, and cathode negative).
Perhaps the most common application of a tunnel diode is in simple high-frequency oscillator circuits as in Figure above(c), where it allows a DC voltage source to contribute power to an LC “tank” circuit, the diode conducting when the voltage across it reaches the peak (tunnel) level and effectively insulating at all other voltages. The resistors bias the tunnel diode at a few tenths of a volt centered on the negative resistance portion of the characteristic curve. The L-C resonant circuit may be a section of a waveguide for microwave operation. Oscillation to 5 GHz is possible.
At one time the tunnel diode was the only solid-state microwave amplifier available. Tunnel diodes were popular starting in the 1960’s. They were longer lived than traveling wave tube amplifiers, an important consideration in satellite transmitters. Tunnel diodes are also resistant to radiation because of the heavy doping. Today various transistors operate at microwave frequencies. Even small signal tunnel diodes are expensive and difficult to find today. There is one remaining manufacturer of germanium tunnel diodes, and none for silicon devices. They are sometimes used in military equipment because they are insensitive to radiation and large temperature changes.
There has been some research involving the possible integration of silicon tunnel diodes into CMOS integrated circuits. They are thought to be capable of switching at 100 GHz in digital circuits. The sole manufacturer of germanium devices produces them one at a time. A batch process for silicon tunnel diodes must be developed, then integrated with conventional CMOS processes. [SZL]
The Esaki tunnel diode should not be confused with the resonant tunneling diode CH 2, of more complex construction from compound semiconductors. The RTD is a more recent development capable of higher speed.
Light-emitting diodes
Diodes, like all semiconductor devices, are governed by the principles described in quantum physics. One of these principles is the emission of specific-frequency radiant energy whenever electrons fall from a higher energy level to a lower energy level. This is the same principle at work in a neon lamp, the characteristic pink-orange glow of ionized neon due to the specific energy transitions of its electrons in the midst of an electric current. The unique color of a neon lamp’s glow is due to the neon gas inside the tube, and not to the particular amount of current through the tube or voltage between the two electrodes. Neon gas glows pinkish-orange over a wide range of ionizing voltages and currents. Each chemical element has its own “signature” emission of radiant energy when its electrons “jump” between different, quantized energy levels. Hydrogen gas, for example, glows red when ionized; mercury vapor glows blue. This is what makes spectrographic identification of elements possible.
Electrons flowing through a PN junction experience similar transitions in energy level, and emit radiant energy as they do so. The frequency of this radiant energy is determined by the crystal structure of the semiconductor material, and the elements comprising it. Some semiconductor junctions, composed of special chemical combinations, emit radiant energy within the spectrum of visible light as the electrons change energy levels. Simply put, these junctions glow when forward biased. A diode intentionally designed to glow like a lamp is called a light-emitting diode, or LED.
Forward biased silicon diodes give off heat as electrons and holes from the N-type and P-type regions, respectively, recombine at the junction. In a forward biased LED, the recombination of electrons and holes in the active region in Figure below (c) yields photons. This process is known as electroluminescence. To give off photons, the potential barrier through which the electrons fall must be higher than for a silicon diode. The forward diode drop can range to a few volts for some color LEDs.
Diodes made from a combination of the elements gallium, arsenic, and phosphorus (called gallium-arsenide-phosphide) glow bright red, and are some of the most common LEDs manufactured. By altering the chemical constituency of the PN junction, different colors may be obtained. Early generations of LEDs were red, green, yellow, orange, and infrared; later generations included blue and ultraviolet, with violet being the latest color added to the selection. Other colors may be obtained by combining two or more primary-color (red, green, and blue) LEDs together in the same package, sharing the same optical lens. This allowed for multicolor LEDs, such as tricolor LEDs (commercially available in the 1980’s) using red and green (which can create yellow) and later RGB LEDs (red, green, and blue), which cover the entire color spectrum.
The schematic symbol for an LED is a regular diode shape inside of a circle, with two small arrows pointing away (indicating emitted light), shown in Figure (a) below.
LED, Light Emitting Diode: (a) schematic symbol. (b) Flat side and short lead of device correspond to cathode, as well as the internal arrangement of the cathode. (c) Cross section of LED die.
This notation of having two small arrows pointing away from the device is common to the schematic symbols of all light-emitting semiconductor devices. Conversely, if a device is light-activated (meaning that incoming light stimulates it), then the symbol will have two small arrows pointing toward it. LEDs can sense light. They generate a small voltage when exposed to light, much like a solar cell on a small scale. This property can be gainfully applied in a variety of light-sensing circuits.
Because LEDs are made of different chemical substances than silicon diodes, their forward voltage drops will be different. Typically, LEDs have much larger forward voltage drops than rectifying diodes, anywhere from about 1.6 volts to over 3 volts, depending on the color. Typical operating current for a standard-sized LED is around 20 mA. When operating an LED from a DC voltage source greater than the LED’s forward voltage, a series-connected “dropping” resistor must be included to prevent full source voltage from damaging the LED. Consider the example circuit in Figure below (a) using a 6 V source.
Setting LED current at 20 mA. (a) for a 6 V source, (b) for a 24 V source.
With the LED dropping 1.6 volts, there will be 4.4 volts dropped across the resistor. Sizing the resistor for an LED current of 20 mA is as simple as taking its voltage drop (4.4 volts) and dividing by circuit current (20 mA), in accordance with Ohm’s Law (R=E/I). This gives us a figure of 220 Ω. Calculating power dissipation for this resistor, we take its voltage drop and multiply by its current (P=IE), and end up with 88 mW, well within the rating of a 1/8 watt resistor. Higher battery voltages will require larger-value dropping resistors, and possibly higher-power rating resistors as well. Consider the example in Figure (b) above for a supply voltage of 24 volts:
Here, the dropping resistor must be increased to a size of 1.12 kΩ to drop 22.4 volts at 20 mA so that the LED still receives only 1.6 volts. This also makes for a higher resistor power dissipation: 448 mW, nearly one-half a watt of power! Obviously, a resistor rated for 1/8 watt power dissipation or even 1/4 watt dissipation will overheat if used here.
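The resistor sizing above is just Ohm’s Law applied twice; a small helper such as the one below reproduces both the 6 V and 24 V examples, assuming the same 1.6 V LED drop and 20 mA operating current.

def led_dropping_resistor(v_supply, v_led=1.6, i_led=0.020):
    v_resistor = v_supply - v_led    # voltage the resistor must drop
    r = v_resistor / i_led           # required resistance, ohms (R = E/I)
    p = v_resistor * i_led           # resistor power dissipation, watts (P = IE)
    return r, p

for v_supply in (6.0, 24.0):
    r, p = led_dropping_resistor(v_supply)
    print(f"{v_supply:4.1f} V supply -> R = {r:6.0f} ohm, P = {p*1000:5.0f} mW")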
Dropping resistor values need not be precise for LED circuits. Suppose we were to use a 1 kΩ resistor instead of a 1.12 kΩ resistor in the circuit shown above. The result would be a slightly greater circuit current and LED voltage drop, resulting in a brighter light from the LED and slightly reduced service life. A dropping resistor with too much resistance (say, 1.5 kΩ instead of 1.12 kΩ) will result in less circuit current, less LED voltage, and a dimmer light. LEDs are quite tolerant of variation in applied power, so you need not strive for perfection in sizing the dropping resistor.
Multiple LEDs are sometimes required, say in lighting. If LEDs are operated in parallel, each must have its own current limiting resistor as in Figure (a) below to ensure that the currents divide more equally. However, it is more efficient to operate LEDs in series (Figure (b) below) with a single dropping resistor. As the number of series LEDs increases, the series resistor value must decrease to maintain current, up to a point. The total forward voltage of the series LEDs (the sum of the Vf values) cannot exceed the capability of the power supply. Multiple series strings may be employed as in Figure (c) below.
In spite of equalizing the currents in multiple LEDs, the brightness of the devices may not match due to variations in the individual parts. Parts can be selected for brightness matching for critical applications.
Multiple LEDs: (a) In parallel, (b) in series, (c) series-parallel
Also because of their unique chemical makeup, LEDs have much, much lower peak-inverse voltage (PIV) ratings than ordinary rectifying diodes. A typical LED might only be rated at 5 volts in reverse-bias mode. Therefore, when using alternating current to power an LED, connect a protective rectifying diode anti-parallel with the LED to prevent reverse breakdown every other half-cycle as in Figure (a) below.
Driving an LED with AC
The anti-parallel diode in Figure (a) above can be replaced with an anti-parallel LED. The resulting pair of anti-parallel LEDs illuminate on alternating half-cycles of the AC sinewave. This configuration draws 20 mA, splitting it equally between the LEDs on alternating AC half cycles. Each LED only receives 10 mA due to this sharing. The same is true of the LED anti-parallel combination with a rectifier: the LED only receives 10 mA. If 20 mA were required for the LED(s), the resistor value could be halved.
The forward voltage drop of LEDs is inversely proportional to the wavelength (λ). As wavelength decreases going from infrared to visible colors to ultraviolet, Vf increases. While this trend is most obvious in the various devices from a single manufacturer, the voltage range for a particular color LED from various manufacturers varies. This range of voltages is shown in Table below.
As lamps, LEDs are superior to incandescent bulbs in many ways. First and foremost is efficiency: LEDs output far more light power per watt of electrical input than an incandescent lamp. This is a significant advantage if the circuit in question is battery-powered, efficiency translating to longer battery life. Second is the fact that LEDs are far more reliable, having a much greater service life than incandescent lamps. This is because LEDs are “cold” devices: they operate at much cooler temperatures than an incandescent lamp with a white-hot metal filament, susceptible to breakage from mechanical and thermal shock. Third is the high speed at which LEDs may be turned on and off. This advantage is also due to the “cold” operation of LEDs: they don’t have to overcome thermal inertia in transitioning from off to on or vice versa. For this reason, LEDs are used to transmit digital (on/off) information as pulses of light, conducted in empty space or through fiber-optic cable, at very high rates of speed (millions of pulses per second).
LEDs excel in monochromatic lighting applications like traffic signals and automotive tail lights. Incandescents are abysmal in this application since they require filtering, decreasing efficiency. LEDs do not require filtering.
One major disadvantage of using LEDs as sources of illumination is their monochromatic (single-color) emission. No one wants to read a book under the light of a red, green, or blue LED. However, if used in combination, LED colors may be mixed for a more broad-spectrum glow. A new broad spectrum light source is the white LED. While small white panel indicators have been available for many years, illumination grade devices are still in development.
A white LED is a blue LED exciting a phosphor which emits yellow light. The blue plus yellow approximates white light. The nature of the phosphor determines the characteristics of the light. A red phosphor may be added to improve the quality of the yellow plus blue mixture at the expense of efficiency. Table above compares white illumination LEDs to expected future devices and other conventional lamps. Efficiency is measured in lumens of light output per watt of input power. If the 50 lumens/watt device can be improved to 100 lumens/watt, white LEDs will be comparable to compact fluorescent lamps in efficiency.
LEDs in general have been a major subject of R&D since the 1960’s. Because of this it is impractical to cover all geometries, chemistries, and characteristics that have been created over the decades. The early devices were relatively dim and took moderate currents. The efficiencies have been improved in later generations to the point that it is hazardous to look closely and directly into an illuminated LED; this can result in eye damage, even though these later LEDs required only a minor increase in forward voltage (Vf) and current. Modern high intensity devices have reached 180 lumens using 0.7 Amps (82 lumens/watt, Luxeon Rebel series cool white), and even higher intensity models can use even higher currents with a corresponding increase in brightness. Other developments, such as quantum dots, are the subject of current research, so expect to see new things for these devices in the future.
Laser diodes
The laser diode is a further development upon the regular light-emitting diode, or LED. The term “laser” itself is actually an acronym, despite the fact that it is often written in lower-case letters. “Laser” stands for Light Amplification by Stimulated Emission of Radiation, and refers to another strange quantum process whereby characteristic light emitted by electrons falling from high-level to low-level energy states in a material stimulates other electrons in the substance to make similar “jumps,” the result being a synchronized output of light from the material. This synchronization extends to the actual phase of the emitted light, so that all light waves emitted from a “lasing” material are not just the same frequency (color), but also the same phase as each other, so that they reinforce one another and are able to travel in a very tightly-confined, nondispersing beam. This is why laser light stays so remarkably focused over long distances: each and every light wave coming from the laser is in step with the others.
(a) White light of many wavelengths. (b) Mono-chromatic LED light, a single wavelength. (c) Phase coherent laser light.
Incandescent lamps produce “white” (mixed-frequency, or mixed-color) light as in Figure above (a). Regular LEDs produce monochromatic light: same frequency (color), but different phases, resulting in similar beam dispersion in Figure above (b). Laser LEDs produce coherent light: light that is both monochromatic (single-color) and monophasic (single-phase), resulting in precise beam confinement as in Figure above (c).
Laser light finds wide application in the modern world: everything from surveying, where a straight and nondispersing light beam is very useful for precise sighting of measurement markers, to the reading and writing of optical disks, where only the narrowness of a focused laser beam is able to resolve the microscopic “pits” in the disk’s surface comprising the binary 1’s and 0’s of digital information.
Some laser diodes require special high-power “pulsing” circuits to deliver large quantities of voltage and current in short bursts. Other laser diodes may be operated continuously at lower power. In the continuous laser, laser action occurs only within a certain range of diode current, necessitating some form of current-regulator circuit. As laser diodes age, their power requirements may change (more current required for less output power), but it should be remembered that low-power laser diodes, like LEDs, are fairly long-lived devices, with typical service lives in the tens of thousands of hours.
Photodiodes
A photodiode is a diode optimized to produce an electron current flow in response to irradiation by ultraviolet, visible, or infrared light. Silicon is most often used to fabricate photodiodes, though germanium and gallium arsenide can also be used. The junction through which light enters the semiconductor must be thin enough to pass most of the light on to the active region (depletion region) where light is converted to electron hole pairs.
In Figure below a shallow P-type diffusion into an N-type wafer produces a PN junction near the surface of the wafer. The P-type layer needs to be thin to pass as much light as possible. A heavy N+ diffusion on the back of the wafer makes contact with metalization. The top metalization may be a fine grid of metallic fingers on the top of the wafer for large cells. In small photodiodes, the top contact might be a sole bond wire contacting the bare P-type silicon top.
Photodiode: Schematic symbol and cross section.
The intensity of the light entering the top of the photodiode stack falls off exponentially as a function of depth. A thin top P-type layer allows most photons to pass into the depletion region where electron-hole pairs are formed. The electric field across the depletion region due to the built in diode potential causes electrons to be swept into the N-layer, holes into the P-layer. Actually, electron-hole pairs may be formed in any of the semiconductor regions. However, those formed in the depletion region are most likely to be separated into the respective N and P-regions. Many of the electron-hole pairs formed in the P and N-regions recombine; only a few do so in the depletion region. Thus, a few of the electron-hole pairs in the N and P-regions, and most of those in the depletion region, contribute to photocurrent, the current resulting from light falling on the photodiode.
The voltage out of a photodiode may be observed. Operation in this photovoltaic (PV) mode is not linear over a large dynamic range, though it is sensitive and has low noise at frequencies less than 100 kHz. The preferred mode of operation is often photocurrent (PC) mode because the current is linearly proportional to light flux over several decades of intensity, and higher frequency response can be achieved. PC mode is achieved with reverse bias or zero bias on the photodiode. A current amplifier (transimpedance amplifier) should be used with a photodiode in PC mode. Linearity and PC mode are achieved as long as the diode does not become forward biased.
High speed operation is often required of photodiodes, as opposed to solar cells. Speed is a function of diode capacitance, which can be minimized by decreasing cell area. Thus, a sensor for a high speed fiber optic link will use an area no larger than necessary, say 1 mm2. Capacitance may also be decreased by increasing the thickness of the depletion region, in the manufacturing process or by increasing the reverse bias on the diode.
PIN diode: The p-i-n diode, or PIN diode, is a photodiode with an intrinsic layer between the P and N-regions as in Figure below. The P-Intrinsic-N structure increases the distance between the P and N conductive layers, decreasing capacitance, increasing speed. The volume of the photo sensitive region also increases, enhancing conversion efficiency. The bandwidth can extend to tens of GHz. PIN photodiodes are preferred for high sensitivity and high speed at moderate cost.
PIN photodiode: The intrinsic region increases the thickness of the depletion region.
Avalanche photodiode: An avalanche photodiode (APD) designed to operate at high reverse bias exhibits an electron multiplier effect analogous to a photomultiplier tube. The reverse bias can run from tens of volts to nearly 2000 V. The high level of reverse bias accelerates photon-created electron-hole pairs in the intrinsic region to a high enough velocity to free additional carriers from collisions with the crystal lattice. Thus, many electrons per photon result. The motivation for the APD is to achieve amplification within the photodiode to overcome noise in external amplifiers. This works to some extent. However, the APD creates noise of its own. At high speed the APD is superior to a PIN diode amplifier combination, though not for low speed applications. APDs are expensive, roughly the price of a photomultiplier tube. So, they are only competitive with PIN photodiodes for niche applications. One such application is single photon counting as applied to nuclear physics.
Solar cells
A photodiode optimized for efficiently delivering power to a load is the solar cell. It operates in photovoltaic mode (PV) because it is forward biased by the voltage developed across the load resistance.
Monocrystalline solar cells are manufactured in a process similar to semiconductor processing. This involves growing a single crystal boule from molten high purity silicon (P-type), though not as high purity as for semiconductors. The boule is diamond sawed or wire sawed into wafers. The ends of the boule must be discarded or recycled, and silicon is lost in the saw kerf. Since modern cells are nearly square, silicon is lost in squaring the boule. Cells may be etched to texture (roughen) the surface to help trap light within the cell. Considerable silicon is lost in producing the 10 or 15 cm square wafers. These days (2007) it is common for a solar cell manufacturer to purchase the wafers at this stage from a supplier to the semiconductor industry.
P-type wafers are loaded back-to-back into fused silica boats exposing only the outer surface to the N-type dopant in the diffusion furnace. The diffusion process forms a thin N-type layer on the top of the cell. The diffusion also shorts the edges of the cell front to back. The periphery must be removed by plasma etching to unshort the cell. Silver and/or aluminum paste is screened on the back of the cell, and a silver grid on the front. These are sintered in a furnace for good electrical contact. (Figure below)
The cells are wired in series with metal ribbons. For charging 12 V batteries, 36 cells at approximately 0.5 V are vacuum laminated between glass, and a polymer metal back. The glass may have a textured surface to help trap light.
Silicon Solar cell
The ultimate commercial high efficiency (21.5%) single crystal silicon solar cells have all contacts on the back of the cell. The active area of the cell is increased by moving the top (-) contact conductors to the back of the cell. The top (-) contacts are normally made to the N-type silicon on top of the cell. In Figure below the (-) contacts are made to N+ diffusions on the bottom interleaved with (+) contacts. The top surface is textured to aid in trapping light within the cell.
High efficiency solar cell with all contacts on the back. Adapted from Figure 1
Multicrystalline silicon cells start out as molten silicon cast into a rectangular mold. As the silicon cools, it crystallizes into a few large (mm to cm sized) randomly oriented crystals instead of a single one. The remainder of the process is the same as for single crystal cells. The finished cells show lines dividing the individual crystals, as if the cells were cracked. The efficiency is not quite as high as that of single crystal cells due to losses at crystal grain boundaries. The cell surface cannot be roughened by etching due to the random orientation of the crystals. However, an antireflective coating improves efficiency. These cells are competitive for all but space applications.
Three layer cell: The highest efficiency solar cell is a stack of three cells tuned to absorb different portions of the solar spectrum. Though three cells can be stacked atop one another, a monolithic single crystal structure of 20 semiconductor layers is more compact. At 32% efficiency, it is now (2007) favored over silicon for space applications. The high cost prevents it from finding many earthbound applications other than concentrators based on lenses or mirrors.
Intensive research has recently produced a version enhanced for terrestrial concentrators at 400 - 1000 suns and 40.7% efficiency. This requires either a big inexpensive Fresnel lens or reflector and a small area of the expensive semiconductor. This combination is thought to be competitive with inexpensive silicon cells for solar power plants. [RRK] [LZy]
Metal organic chemical vapor deposition (MOCVD) deposits the layers atop a P-type germanium substrate. The top layers of N and P-type gallium indium phosphide (GaInP), having a band gap of 1.85 eV, absorb ultraviolet and visible light. These wavelengths have enough energy to exceed the band gap. Longer wavelengths (lower energy) do not have enough energy to create electron-hole pairs, and pass on through to the next layer. A gallium arsenide layer having a band gap of 1.42 eV absorbs near infrared light. Finally, the germanium layer and substrate absorb far infrared. The series of three cells produces a voltage which is the sum of the voltages of the three cells. The voltage developed by each material is 0.4 V less than the band gap energy listed in Table below. For example, for GaInP: 1.8 eV/e - 0.4 V = 1.4 V. For all three the voltage is 1.4 V + 1.0 V + 0.3 V = 2.7 V.
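The voltage estimate for the stack amounts to subtracting 0.4 V from each band gap (in eV) and summing the three layers, as in the short calculation below (band gaps rounded as in the estimate above).

band_gaps = {        # band gap in eV, rounded as in the estimate above
    "GaInP": 1.8,
    "GaAs":  1.4,
    "Ge":    0.7,
}

cell_voltages = {name: eg - 0.4 for name, eg in band_gaps.items()}
stack_voltage = sum(cell_voltages.values())

for name, v in cell_voltages.items():
    print(f"{name}: {v:.1f} V")
print(f"stack total: {stack_voltage:.1f} V")   # about 2.7 V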
Crystalline solar cell arrays have a long usable life. Many arrays are guaranteed for 25 years, and believed to be good for 40 years. They do not suffer initial degradation compared with amorphous silicon.
Both single and multicrystalline solar cells are based on silicon wafers. The silicon is both the substrate and the active device layers. Much silicon is consumed. This kind of cell has been around for decades, and takes approximately 86% of the solar electric market. For further information about crystalline solar cells see Honsberg. [CHS]
Amorphous silicon thin film solar cells use tiny amounts of the active raw material, silicon. Approximately half the cost of conventional crystalline solar cells is the solar cell grade silicon. The thin film deposition process reduces this cost. The downside is that efficiency is about half that of conventional crystalline cells. Moreover, efficiency degrades by 15-35% upon exposure to sunlight. A 7% efficient cell soon ages to 5% efficiency. Thin film amorphous silicon cells work better than crystalline cells in dim light. They are put to good use in solar powered calculators.
Non-silicon based solar cells make up about 7% of the market. These are thin-film polycrystalline products. Various compound semiconductors are the subject of research and development. Some non-silicon products are in production. Generally, the efficiency is better than amorphous silicon, but not nearly as good as crystalline silicon.
Cadmium telluride as a polycrystalline thin film on metal or glass can have a higher efficiency than amorphous silicon thin films. If deposited on metal, that layer is the negative contact to the cadmium telluride thin film. The transparent P-type cadmium sulfide atop the cadmium telluride serves as a buffer layer. The positive top contact is transparent, electrically conductive fluorine doped tin oxide. These layers may be laid down on a sacrificial foil in place of the glass in the process described in the following paragraph. The sacrificial foil is removed after the cell is mounted to a permanent substrate.
Cadmium telluride solar cell on glass or metal.
A process for depositing cadmium telluride on glass begins with the deposition of N-type transparent, electrically conductive tin oxide on a glass substrate. The next layer is P-type cadmium telluride, though N-type or intrinsic may be used. These two layers constitute the NP junction. A P+ (heavy P-type) layer of lead telluride aids in establishing a low resistance contact. A metal layer makes the final contact to the lead telluride. These layers may be laid down by vacuum deposition, chemical vapor deposition (CVD), screen printing, electrodeposition, or atmospheric pressure chemical vapor deposition (APCVD) in helium. [KWM]
A variation of cadmium telluride is mercury cadmium telluride. Having lower bulk resistance and lower contact resistance improves efficiency over cadmium telluride.
Copper Indium Gallium diSelenide solar cell (CIGS)
Copper Indium Gallium diSelenide: A most promising thin film solar cell at this time (2007) is Copper Indium Gallium diSelenide (CIGS), manufactured on a ten inch wide roll of flexible polyimide. It has a spectacular efficiency of 10%. Though commercial grade crystalline silicon cells surpassed this efficiency decades ago, CIGS should be cost competitive. The deposition processes are at a low enough temperature to use a polyimide polymer as a substrate instead of metal or glass. (Figure above) The CIGS is manufactured in a roll to roll process, which should drive down costs. CIGS cells may also be produced by an inherently low cost electrochemical process. [EET]
REVIEW:
• Most solar cells are silicon single crystal or multicrystal because of their good efficiency and moderate cost.
• Less efficient thin films of various amorphous or polycrystalline materials comprise the rest of the market.
• Table below compares selected solar cells.
Varicap or varactor diodes
A variable capacitance diode is known as a varicap diode or as a varactor. If a diode is reverse biased, an insulating depletion region forms between the two semiconductive layers. In many diodes the width of the depletion region may be changed by varying the reverse bias. This varies the capacitance. This effect is accentuated in varicap diodes. The schematic symbols are shown in Figure below; one version is packaged as a common cathode dual diode.
Varicap diode: Capacitance varies with reverse bias. This varies the frequency of a resonant network.
If a varicap diode is part of a resonant circuit as in Figure above, the frequency may be varied with a control voltage, Vcontrol. A large capacitance, low Xc, in series with the varicap prevents Vcontrol from being shorted out by inductor L. As long as the series capacitor is large, it has minimal effect on the frequency of resonant circuit. Coptional may be used to set the center resonant frequency. Vcontrol can then vary the frequency about this point. Note that the required active circuitry to make the resonant network oscillate is not shown. For an example of a varicap diode tuned AM radio receiver see “electronic varicap diode tuning,” Ch 9
Some varicap diodes may be referred to as abrupt, hyperabrupt, or super hyper abrupt. These refer to the change in junction capacitance with changing reverse bias as being abrupt or hyper-abrupt, or super hyperabrupt. These diodes offer a relatively large change in capacitance. This is useful when oscillators or filters are swept over a large frequency range. Varying the bias of abrupt varicaps over the rated limits, changes capacitance by a 4:1 ratio, hyperabrupt by 10:1, super hyperabrupt by 20:1.
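Because resonant frequency varies as 1/√C, the tuning range obtainable from a varicap is roughly the square root of its capacitance ratio, assuming the varicap supplies essentially all of the tank capacitance. The sketch below applies this to the ratios just quoted.

from math import sqrt

# Frequency tuning range versus capacitance ratio, assuming the varicap is the
# only significant capacitance in the resonant tank (f is proportional to 1/sqrt(C)).
for name, cap_ratio in (("abrupt", 4), ("hyperabrupt", 10), ("super hyperabrupt", 20)):
    freq_ratio = sqrt(cap_ratio)
    print(f"{name:>18}: C ratio {cap_ratio}:1 -> frequency tuning ratio {freq_ratio:.1f}:1")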
Varactor diodes may be used in frequency multiplier circuits. See “Practical analog semiconductor circuits,” Varactor multiplier
Snap diode
The snap diode, also known as the step recovery diode, is designed for use in high ratio frequency multipliers up to 20 GHz. When the diode is forward biased, charge is stored in the PN junction. This charge is drawn out as the diode is reverse biased. The diode looks like a low impedance current source during forward bias. When reverse bias is applied, it still looks like a low impedance source until all the charge is withdrawn. It then “snaps” to a high impedance state causing a voltage impulse, rich in harmonics. An application is a comb generator, a generator of many harmonics. Moderate power 2x and 4x multipliers are another application.
PIN diodes
A PIN diode is a fast low capacitance switching diode. Do not confuse a PIN switching diode with a PIN photo diode. A PIN diode is manufactured like a silicon switching diode with an intrinsic region added between the PN junction layers. This yields a thicker depletion region, the insulating layer at the junction of a reverse biased diode. This results in lower capacitance than a reverse biased switching diode.
PIN diode: Cross section aligned with schematic symbol.
PIN diodes are used in place of switching diodes in radio frequency (RF) applications, for example, a T/R switch. The 1N4007 1000 V, 1 A general purpose power diode is reported to be usable as a PIN switching diode. The high voltage rating of this diode is achieved by the inclusion of an intrinsic layer dividing the PN junction. This intrinsic layer makes the 1N4007 a PIN diode. Another PIN diode application is as the antenna switch for a direction finder receiver.
PIN diodes serve as variable resistors when the forward bias is varied. One such application is the voltage variable attenuator. The low capacitance characteristic of PIN diodes, extends the frequency flat response of the attenuator to microwave frequencies.
IMPATT diode
The IMPact ionization Avalanche Transit Time (IMPATT) diode is a high power radio frequency (RF) generator operating from 3 to 100 GHz. IMPATT diodes are fabricated from silicon, gallium arsenide, or silicon carbide.
An IMPATT diode is reverse biased above the breakdown voltage. The high doping levels produce a thin depletion region. The resulting high electric field rapidly accelerates carriers which free other carriers in collisions with the crystal lattice. Holes are swept into the P+ region. Electrons drift toward the N regions. The cascading effect creates an avalanche current which increases even as voltage across the junction decreases. The pulses of current lag the voltage peak across the junction. A “negative resistance” effect in conjunction with a resonant circuit produces oscillations at high power levels (high for semiconductors).
IMPATT diode: Oscillator circuit and heavily doped P and N layers.
The resonant circuit in the schematic diagram of Figure above is the lumped circuit equivalent of a waveguide section, where the IMPATT diode is mounted. DC reverse bias is applied through a choke which keeps RF from being lost in the bias supply. This may be a section of waveguide known as a bias Tee. Low power RADAR transmitters may use an IMPATT diode as a power source. They are too noisy for use in the receiver. [YMCW]
Gunn diode
A Gunn diode is composed solely of N-type semiconductor. As such, it is not a true diode. Figure below shows a lightly doped N- layer surrounded by heavily doped N+ layers. A voltage applied across the N-type gallium arsenide Gunn diode creates a strong electric field across the lightly doped N- layer.
Gunn diode: Oscillator circuit and cross section of the diode, which consists entirely of N-type semiconductor.
As voltage is increased, conduction increases due to electrons in a low energy conduction band. As voltage is increased beyond the threshold of approximately 1 V, electrons move from the lower conduction band to the higher energy conduction band where they no longer contribute to conduction. In other words, as voltage increases, current decreases, a negative resistance condition. The oscillation frequency is determined by the transit time of the conduction electrons, which is inversely related to the thickness of the N- layer.
The frequency may be controlled to some extent by embedding the Gunn diode in a resonant circuit. The lumped circuit equivalent shown in Figure above is actually a coaxial transmission line or waveguide. Gallium arsenide Gunn diodes are available for operation from 10 to 200 GHz at 5 to 65 mW of power. Gunn diodes may also serve as amplifiers. [CHW] [IAP]
Shockley diode
The Shockley diode is a 4-layer thyristor used to trigger larger thyristors. It only conducts in one direction when triggered by a voltage exceeding the breakover voltage, about 20 V. See “Thyristors,” The Shockley Diode. The bidirectional version is called a diac. See “Thyristors,” The DIAC.
Constant-current diodes
A constant-current diode, also known as a current-limiting diode, or current-regulating diode, does exactly what its name implies: it regulates current through it to some maximum level. The constant current diode is a two terminal version of a JFET. If we try to force more current through a constant-current diode than its current-regulation point, it simply “fights back” by dropping more voltage. If we were to build the circuit in Figure below(a) and plot diode current against diode voltage, we’d get a graph that rises at first and then levels off at the current regulation point as in Figure below(b).
Constant current diode: (a) Test circuit, (b) current vs voltage characteristic.
One application for a constant-current diode is to automatically limit current through an LED or laser diode over a wide range of power supply voltages as in Figure below.
Constant current diode application: driving laser diode.
Of course, the constant-current diode’s regulation point should be chosen to match the LED or laser diode’s optimum forward current. This is especially important for the laser diode, not so much for the LED, as regular LEDs tend to be more tolerant of forward current variations.
SiC diodes
Diodes manufactured from silicon carbide are capable of high temperature operation to 400°C. This could be in a high temperature environment: down-hole oil well logging, gas turbine engines, auto engines. Or, operation in a moderate environment at high power dissipation. Nuclear and space applications are promising as SiC is 100 times more resistant to radiation compared with silicon. SiC is a better conductor of heat than any metal. Thus, SiC is better than silicon at conducting away heat. Breakdown voltage is several kV. SiC power devices are expected to reduce electrical energy losses in the power industry by a factor of 100.
Polymer diode
Diodes based on organic chemicals have been produced using low temperature processes. Hole-rich and electron-rich conductive polymers may be ink jet printed in layers. Most of the research and development is of the organic LED (OLED). However, development of inexpensive printable organic RFID (radio frequency identification) tags is ongoing. In this effort, a pentacene organic rectifier has been operated at 50 MHz. Rectification to 800 MHz is a development goal. An inexpensive metal insulator metal (MIM) diode acting like a back-to-back zener diode clipper has been developed. Also, a tunnel diode-like device has been fabricated.
3.14: SPICE Models
The SPICE circuit simulation program provides for modeling diodes in circuit simulations. The diode model is based on characterization of individual devices as described in a product data sheet and manufacturing process characteristics not listed. Some information has been extracted from a 1N4004 data sheet in Figure below.
Data sheet 1N4004 excerpt, after [DI4].
The diode statement begins with a diode element name which must begin with “d” plus optional characters. Example diode element names include: d1, d2, dtest, da, db, d101. Two node numbers specify the connection of the anode and cathode, respectively, to other components. The node numbers are followed by a model name, referring to a subsequent “.model” statement.
The model statement line begins with “.model,” followed by the model name matching one or more diode statements. Next, a “d” indicates a diode is being modeled. The remainder of the model statement is a list of optional diode parameters of the form ParameterName=ParameterValue. None are used in Example below. Example2 has some parameters defined. For a list of diode parameters, see Table below.
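A minimal sketch of the two forms just described might look like the following. The element names and node numbers are arbitrary, and the parameter values in the second model are simply those derived for the 1N4004 later in this section.

* Example: diode element with a model using all default parameters
d1 1 2 diode1
.model diode1 d
* Example2: a model with several parameters defined
d2 1 2 diode2
.model diode2 d (is=18.8n rs=0 n=2 bv=400 ibv=5u cjo=30p m=0.333)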
The easiest approach to take for a SPICE model is the same as for a data sheet: consult the manufacturer’s web site. Table below lists the model parameters for some selected diodes. A fallback strategy is to build a SPICE model from those parameters listed on the data sheet. A third strategy, not considered here, is to take measurements of an actual device. Then, calculate, compare and adjust the SPICE parameters to the measurements.
If diode parameters are not specified as in the “Example” model above, the parameters take on the default values listed in Table above and Table below. These defaults model integrated circuit diodes. These are certainly adequate for preliminary work with discrete devices. For more critical work, use SPICE models supplied by the manufacturer [DIn], SPICE vendors, and other sources. [smi]
Otherwise, derive some of the parameters from the data sheet. First select a value for SPICE parameter N between 1 and 2. It is required for the diode equation (n). Massobrio [PAGM] p. 9, recommends “... n, the emission coefficient is usually about 2.” In Table above, we see that power rectifiers 1N3891 (12 A) and 10A04 (10 A) both use about 2. The first four in the table are not relevant because they are schottky, schottky, germanium, and silicon small signal, respectively. The saturation current, IS, is derived from the diode equation, a value of (VD, ID) on the graph in Figure above, and N=2 (n in the diode equation).
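For reference, the diode equation referred to above, and its rearrangement to solve for IS from a single (VD, ID) point, are (VT is the thermal voltage, roughly 26 mV at room temperature):

ID = IS(e^(VD/(N·VT)) - 1)
IS = ID / (e^(VD/(N·VT)) - 1)

Using the graph point of roughly 1 A at VD = 0.925 V with N=2 and VT = 26 mV gives IS ≈ 1 A / e^(0.925/0.052) ≈ 18.8 nA, the value entered below.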
The numerical values of IS=18.8n and N=2 are entered in the last line of Table above for comparison to the manufacturer’s model for the 1N4004, which is considerably different. RS defaults to 0 for now; it will be estimated later. The important DC static parameters are N, IS, and RS. Rashid [MHR] suggests that TT, τD, the transit time, be approximated from the reverse recovery stored charge QRR, a data sheet parameter (not available on our data sheet), and IF, the forward current.
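In essence, the stored charge and the forward current that deposited it set the transit time (see the cited reference for the exact form): TT = τD ≈ QRR / IF.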
We take the TT=0 default for lack of QRR, though it would be reasonable to take TT for a similar rectifier like the 10A04 at 4.32u. The 1N3891 TT is not a valid choice because it is a fast recovery rectifier. CJO, the zero bias junction capacitance, is estimated from the VR vs CJ graph in Figure above. The capacitance at the voltage nearest zero on the graph is 30 pF at 1 V. If simulating high speed transient response, as in switching regulator power supplies, the TT and CJO parameters must be provided.
The junction grading coefficient M is related to the doping profile of the junction. This is not a data sheet item. The default is 0.5 for an abrupt junction. We opt for M=0.333 corresponding to a linearly graded junction. The power rectifiers in Table above use lower values for M than 0.5.
We take the default values for VJ and EG. Many more diodes use VJ=0.6 than shown in Table above. However, the 10A04 rectifier uses the default, which we use for our 1N4004 model (Da1N4004 in Table above). Use the default EG=1.11 for silicon diodes and rectifiers. Table above lists values for schottky and germanium diodes. Take XTI=3, the default IS temperature coefficient for silicon devices. See Table above for XTI for schottky diodes.
The abbreviated data sheet, Figure above, lists IR = 5 µA @ VR = 400 V, corresponding to IBV=5u and BV=400 respectively. The 1N4004 SPICE parameters derived from the data sheet are listed in the last line of Table above for comparison to the manufacturer’s model listed above it. BV is only necessary if the simulation exceeds the reverse breakdown voltage of the diode, as is the case for zener diodes. IBV, reverse breakdown current, is frequently omitted, but may be entered if provided with BV.
Figure below shows a circuit to compare the manufacturers model, the model derived from the datasheet, and the default model using default parameters. The three dummy 0 V sources are necessary for diode current measurement. The 1 V source is swept from 0 to 1.4 V in 0.2 mV steps. See .DC statement in the netlist in Table below. DI1N4004 is the manufacturer’s diode model, Da1N4004 is our derived diode model.
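A netlist with the general structure described above might look roughly like the following sketch. Node numbers are arbitrary, the Da1N4004 line uses the parameters derived above, and the DI1N4004 model line is a placeholder to be replaced with the manufacturer’s full parameter list.

* compare three diode models driven from one swept source
v1 1 0 dc 0
* dummy 0 V sources for diode current measurement
vd1 1 2 dc 0
d1 2 0 DI1N4004
vd2 1 3 dc 0
d2 3 0 Da1N4004
vd3 1 4 dc 0
d3 4 0 Default
* placeholder: substitute the manufacturer's .model line here
.model DI1N4004 d
.model Da1N4004 d (is=18.8n rs=0 n=2 bv=400 ibv=5u cjo=30p m=0.333)
.model Default d
.dc v1 0 1.4 0.2m
.end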
SPICE circuit for comparison of manufacturer model (D1), calculated datasheet model (D2), and default model (D3).
We compare the three models in Figure below and to the datasheet graph data in Table below. VD is the diode voltage versus the diode currents for the manufacturer’s model, our calculated datasheet model, and the default diode model. The last column, “1N4004 graph,” is from the datasheet voltage versus current curve in Figure above, which we attempt to match. Comparison of the currents for the three models to the last column shows that the default model is good at low currents, the manufacturer’s model is good at high currents, and our calculated datasheet model is best of all up to 1 A. Agreement is almost perfect at 1 A because the IS calculation is based on the diode voltage at 1 A. Our model grossly overstates current above 1 A.
First trial of manufacturer model, calculated datasheet model, and default model.
The solution is to increase RS from the default RS=0. Changing RS from 0 to 8m in the datasheet model causes the curve to intersect 10 A (not shown) at the same voltage as the manufacturer’s model. Increasing RS to 28.6m shifts the curve further to the right as shown in Figure below. This has the effect of more closely matching our datasheet model to the datasheet graph (Figure above). Table below shows that the current 1.224470e+01 A at 1.4 V matches the graph at 12 A. However, the current at 0.925 V has degraded from 1.096870e+00 above to 7.318536e-01.
Second trial to improve calculated datasheet model compared with manufacturer model and default model.
Suggested reader exercise: decrease N so that the current at VD=0.925 V is restored to 1 A. This may increase the current (12.2 A) at VD=1.4 V requiring an increase of RS to decrease current to 12 A.
Zener diode: There are two approaches to modeling a zener diode: set the BV parameter to the zener voltage in the model statement, or model the zener with a subcircuit containing a diode clamper set to the zener voltage. An example of the first approach sets the breakdown voltage BV to 15 for the 1N4469 15 V zener diode model (IBV is optional).
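A minimal sketch of such a statement, with an arbitrary element name and node numbers, is:

d1 1 0 d1n4469
.model d1n4469 d (bv=15)

An IBV value taken from the data sheet may be added to the parameter list alongside BV.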
The second approach models the zener with a subcircuit. Clamper D1 and VZ in Figure below model the 15 V reverse breakdown voltage of a 1N4477A zener diode. Diode DR accounts for the forward conduction of the zener in the subcircuit.
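A sketch of such a subcircuit, assuming the arrangement described, is shown below. The 14.3 V value for VZ is an assumption, chosen so that VZ plus the forward drop of D1 totals roughly the 15 V breakdown; the terminal and model names are arbitrary.

* zener modeled as a subcircuit; terminals: anode (a), cathode (k)
.subckt zener15 a k
* DR: forward conduction path of the zener
dr a k dmod
* D1 and VZ: reverse path, conducts when k rises about 15 V above a
d1 k 1 dmod
vz 1 a dc 14.3
.model dmod d
.ends zener15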
Tunnel diode: A tunnel diode may be modeled by a pair of field effect transistors (JFET) in a SPICE subcircuit. [KHM] An oscillator circuit is also shown in this reference.
Gunn diode: A Gunn diode may also be modeled by a pair of JFET’s. [ISG] This reference shows a microwave relaxation oscillator.
Review
• Diodes are described in SPICE by a diode component statement referring to a .model statement. The .model statement contains parameters describing the diode. If parameters are not provided, the model takes on default values.
• Static DC parameters include N, IS, and RS. Reverse breakdown parameters: BV, IBV.
• Accurate dynamic timing requires the TT and CJO parameters.
• Models provided by the manufacturer are highly recommended.
This revolution made possible the design and manufacture of lightweight, inexpensive electronic devices that we now take for granted. Understanding how transistors function is of paramount importance to anyone interested in understanding modern electronics.
The Function and Applications for Bipolar Junction Transistors
My intent here is to focus as exclusively as possible on the practical function and application of bipolar transistors, rather than to explore the quantum world of semiconductor theory. Discussions of holes and electrons are better left to another chapter in my opinion. Here I want to explore how to use these components, not analyze their intimate internal details. I don’t mean to downplay the importance of understanding semiconductor physics, but sometimes an intense focus on solid-state physics detracts from understanding these devices’ functions on a component level. In taking this approach, however, I assume that the reader possesses a certain minimum knowledge of semiconductors: the difference between “P” and “N” doped semiconductors, the functional characteristics of a PN (diode) junction, and the meanings of the terms “reverse biased” and “forward biased.” If these concepts are unclear to you, it is best to refer to earlier chapters in this book before proceeding with this one.
BJT Layers
A bipolar transistor consists of a three-layer “sandwich” of doped (extrinsic) semiconductor materials, either P-N-P in the Figure below (b) or N-P-N at (d). Each layer forming the transistor has a specific name, and each layer is provided with a wire contact for connection to a circuit. The schematic symbols are shown in the Figure below (a) and (c).
BJT transistor: (a) PNP schematic symbol, (b) physical layout, (c) NPN symbol, (d) layout.
The functional difference between a PNP transistor and an NPN transistor is the proper biasing (polarity) of the junctions when operating. For any given state of operation, the current directions and voltage polarities for each kind of transistor are exactly opposite each other.
Bipolar transistors work as current-controlled current regulators. In other words, transistors restrict the amount of current passed according to a smaller, controlling current. The main current that is controlled goes from collector to emitter, or from emitter to collector, depending on the type of transistor it is (PNP or NPN, respectively). The small current that controls the main current goes from base to emitter, or from emitter to base, once again depending on the kind of transistor it is (PNP or NPN, respectively). According to the standards of semiconductor symbology, the arrow always points against the direction of electron flow. (Figure below)
Small Base-Emitter current controls large Collector-Emitter current flowing against emitter arrow.
Bipolar Transistors Contain Two Types of Semiconductor Material
Bipolar transistors are called bipolar because the main flow of electrons through them takes place in two types of semiconductor material: P and N, as the main current goes from emitter to collector (or vice versa). In other words, two types of charge carriers—electrons and holes—comprise this main current through the transistor.
As you can see, the controlling current and the controlled current always mesh together through the emitter wire, and their electrons always flow against the direction of the transistor’s arrow. This is the first and foremost rule in the use of transistors: all currents must be going in the proper directions for the device to work as a current regulator. The small, controlling current is usually referred to simply as the base current because it is the only current that goes through the base wire of the transistor. Conversely, the large, controlled current is referred to as the collector current because it is the only current that goes through the collector wire. The emitter current is the sum of the base and collector currents, in compliance with Kirchhoff’s Current Law.
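Stated as a simple equation: IE = IB + IC.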
No current through the base of the transistor shuts the transistor off like an open switch and prevents current through the collector. A base current turns the transistor on like a closed switch and allows a proportional amount of current through the collector. Collector current is primarily limited by the base current, regardless of the amount of voltage available to push it. The next section will explore in more detail the use of bipolar transistors as switching elements.
Review
• Bipolar transistors are so named because the controlled current must go through two types of semiconductor material: P and N. The current consists of both electron and hole flow, in different parts of the transistor.
• Bipolar transistors consist of either a P-N-P or an N-P-N semiconductor “sandwich” structure.
• The three leads of a bipolar transistor are called the Emitter, Base, and Collector.
• Transistors function as current regulators by allowing a small current to control a larger current. The amount of current allowed between collector and emitter is primarily determined by the amount of current moving between base and emitter.
• In order for a transistor to properly function as a current regulator, the controlling (base) current and the controlled (collector) currents must be going in the proper directions: meshing additively at the emitter and going against the emitter arrow symbol.
Because a transistor’s collector current is proportionally limited by its base current, it can be used as a sort of current-controlled switch. A relatively small flow of electrons sent through the base of the transistor has the ability to exert control over a much larger flow of electrons through the collector.
Using a BJT as a Switch: An Example
Suppose we had a lamp that we wanted to turn on and off with a switch. Such a circuit would be extremely simple, as in the figure below (a).
For the sake of illustration, let’s insert a transistor in place of the switch to show how it can control the flow of electrons through the lamp. Remember that the controlled current through a transistor must go between collector and emitter.
Since it is the current through the lamp that we want to control, we must position the collector and emitter of our transistor where the two contacts of the switch were. We must also make sure that the lamp’s current will move against the direction of the emitter arrow symbol to ensure that the transistor’s junction bias will be correct as in the figure below (b).
(a) mechanical switch, (b) NPN transistor switch, (c) PNP transistor switch.
A PNP transistor could also have been chosen for the job. Its application is shown in the figure above (c).
The choice between NPN and PNP is really arbitrary. All that matters is that the proper current directions are maintained for the sake of correct junction biasing (electron flow going against the transistor symbol’s arrow).
Going back to the NPN transistor in our example circuit, we are faced with the need to add something more so that we can have base current. Without a connection to the base wire of the transistor, base current will be zero, and the transistor cannot turn on, resulting in a lamp that is always off. Remember that for an NPN transistor, base current must consist of electrons flowing from emitter to base (against the emitter arrow symbol, just like the lamp current).
Perhaps the simplest thing to do would be to connect a switch between the base and collector wires of the transistor as in the figure below (a).
Transistor: (a) cutoff, lamp off; (b) saturated, lamp on.
Cutoff vs Saturated Transistors
If the switch is open as in the figure above (a), the base wire of the transistor will be left “floating” (not connected to anything) and there will be no current through it. In this state, the transistor is said to be cutoff.
If the switch is closed as in the figure above (b), electrons will be able to flow from the emitter through to the base of the transistor, through the switch, up to the left side of the lamp, back to the positive side of the battery. This base current will enable a much larger flow of electrons from the emitter through to the collector, thus lighting up the lamp. In this state of maximum circuit current, the transistor is said to be saturated.
Of course, it may seem pointless to use a transistor in this capacity to control the lamp. After all, we’re still using a switch in the circuit, aren’t we? If we’re still using a switch to control the lamp—if only indirectly—then what’s the point of having a transistor to control the current? Why not just go back to our original circuit and use the switch directly to control the lamp current?
Why Use a Transistor to Control Current?
Two points can be made here, actually. First is the fact that when used in this manner, the switch contacts need only handle what little base current is necessary to turn the transistor on; the transistor itself handles most of the lamp’s current. This may be an important advantage if the switch has a low current rating: a small switch may be used to control a relatively high-current load.
More importantly, the current-controlling behavior of the transistor enables us to use something completely different to turn the lamp on or off. Consider the figure below, where a pair of solar cells provides 1 V to overcome the 0.7 VBE of the transistor to cause base current flow, which in turn controls the lamp.
Solar cell serves as light sensor.
Or, we could use a thermocouple (many connected in series) to provide the necessary base current to turn the transistor on in the figure below.
A single thermocouple provides less than 40 mV. Many in series could produce in excess of the 0.7 V transistor VBE to cause base current flow and consequent collector current to the lamp.
Even a microphone (see the figure below) with enough voltage and current (from an amplifier) output could turn the transistor on, provided its output is rectified from AC to DC so that the emitter-base PN junction within the transistor will always be forward-biased:
Amplified microphone signal is rectified to DC to bias the base of the transistor providing a larger collector current.
The point should be quite apparent by now: Any sufficient source of DC current may be used to turn the transistor on, and that source of current only need be a fraction of the current needed to energize the lamp.
Here we see the transistor functioning not only as a switch, but as a true amplifier: using a relatively low-power signal to control a relatively large amount of power. Please note that the actual power for lighting up the lamp comes from the battery to the right of the schematic. It is not as though the small signal current from the solar cell, thermocouple, or microphone is being magically transformed into a greater amount of power. Rather, those small power sources are simply controlling the battery’s power to light up the lamp.
The BJT as Switch REVIEW:
• Transistors may be used as switching elements to control DC power to a load. The switched (controlled) current goes between emitter and collector; the controlling current goes between emitter and base.
• When a transistor has zero current through it, it is said to be in a state of cutoff (fully nonconducting).
• When a transistor has maximum current through it, it is said to be in a state of saturation (fully conducting).
Bipolar transistors are constructed of a three-layer semiconductor “sandwich,” either PNP or NPN. As such, transistors register as two diodes connected back-to-back when tested with a multimeter’s “resistance” or “diode check” function as illustrated in Figure below. Low resistance readings with the black negative (-) lead on the base correspond to the N-type material in the base of a PNP transistor. On the symbol, the N-type material is “pointed” to by the arrow of the base-emitter junction, which is the base for this example. The P-type emitter corresponds to the other end of the arrow of the base-emitter junction. The collector is very similar to the emitter, and is also P-type material, forming the other PN junction.
PNP transistor meter check: (a) forward B-E, B-C, resistance is low; (b) reverse B-E, B-C, resistance is ∞.
Here I’m assuming the use of a multimeter with only a single continuity range (resistance) function to check the PN junctions. Some multimeters are equipped with two separate continuity check functions: resistance and “diode check,” each with its own purpose. If your meter has a designated “diode check” function, use that rather than the “resistance” range, and the meter will display the actual forward voltage of the PN junction and not just whether or not it conducts current.
Meter readings will be exactly opposite, of course, for an NPN transistor, with both PN junctions facing the other way. Low resistance readings with the red (+) lead on the base is the “opposite” condition for the NPN transistor.
If a multimeter with a “diode check” function is used in this test, it will be found that the emitter-base junction possesses a slightly greater forward voltage drop than the collector-base junction. This forward voltage difference is due to the disparity in doping concentration between the emitter and collector regions of the transistor: the emitter is a much more heavily doped piece of semiconductor material than the collector, causing its junction with the base to produce a higher forward voltage drop.
Knowing this, it becomes possible to determine which wire is which on an unmarked transistor. This is important because transistor packaging, unfortunately, is not standardized. All bipolar transistors have three wires, of course, but the positions of the three wires on the actual physical package are not arranged in any universal, standardized order.
Suppose a technician finds a bipolar transistor and proceeds to measure continuity with a multimeter set in the “diode check” mode. Measuring between pairs of wires and recording the values displayed by the meter, the technician obtains the data in Figure below.
Unknown bipolar transistor. Which terminals are emitter, base, and collector? Ω-meter readings between terminals.
The only combinations of test points giving conducting meter readings are wires 1 and 3 (red test lead on 1 and black test lead on 3), and wires 2 and 3 (red test lead on 2 and black test lead on 3). These two readings must indicate forward biasing of the emitter-to-base junction (0.655 volts) and the collector-to-base junction (0.621 volts).
Now we look for the one wire common to both sets of conductive readings. It must be the base connection of the transistor, because the base is the only layer of the three-layer device common to both sets of PN junctions (emitter-base and collector-base). In this example, that wire is number 3, being common to both the 1-3 and the 2-3 test point combinations. In both those sets of meter readings, the black (-) meter test lead was touching wire 3, which tells us that the base of this transistor is made of N-type semiconductor material (black = negative). Thus, the transistor is a PNP with base on wire 3, emitter on wire 1 and collector on wire 2 as described in Figure below.
BJT terminals identified by Ω-meter.
Please note that the base wire in this example is not the middle lead of the transistor, as one might expect from the three-layer “sandwich” model of a bipolar transistor. This is quite often the case, and tends to confuse new students of electronics. The only way to be sure which lead is which is by a meter check, or by referencing the manufacturer’s “data sheet” documentation on that particular part number of transistor.
Knowing that a bipolar transistor behaves as two back-to-back diodes when tested with a conductivity meter is helpful for identifying an unknown transistor purely by meter readings. It is also helpful for a quick functional check of the transistor. If the technician were to measure continuity in any more than two or any less than two of the six test lead combinations, he or she would immediately know that the transistor was defective (or else that it wasn’t a bipolar transistor but rather something else—a distinct possibility if no part numbers can be referenced for sure identification!). However, the “two diode” model of the transistor fails to explain how or why it acts as an amplifying device.
To better illustrate this paradox, let’s examine one of the transistor switch circuits using the physical diagram in Figure below rather than the schematic symbol to represent the transistor. This way the two PN junctions will be easier to see.
A small base current flowing in the forward biased base-emitter junction allows a large current flow through the reverse biased base-collector junction.
A grey-colored diagonal arrow shows the direction of electron flow through the emitter-base junction. This part makes sense, since the electrons are flowing from the N-type emitter to the P-type base: the junction is obviously forward-biased. However, the base-collector junction is another matter entirely. Notice how the grey-colored thick arrow is pointing in the direction of electron flow (upwards) from base to collector. With the base made of P-type material and the collector of N-type material, this direction of electron flow is clearly backwards to the direction normally associated with a PN junction! A normal PN junction wouldn’t permit this “backward” direction of flow, at least not without offering significant opposition. However, a saturated transistor shows very little opposition to electrons, all the way from emitter to collector, as evidenced by the lamp’s illumination!
Clearly then, something is going on here that defies the simple “two-diode” explanatory model of the bipolar transistor. When I was first learning about transistor operation, I tried to construct my own transistor from two back-to-back diodes, as in Figure below.
A pair of back-to-back diodes don’t act like a transistor!
My circuit didn’t work, and I was mystified. However useful the “two diode” description of a transistor might be for testing purposes, it doesn’t explain how a transistor behaves as a controlled switch.
What happens in a transistor is this: the reverse bias of the base-collector junction prevents collector current when the transistor is in cutoff mode (that is, when there is no base current). If the base-emitter junction is forward biased by the controlling signal, the normally-blocking action of the base-collector junction is overridden and current is permitted through the collector, despite the fact that electrons are going the “wrong way” through that PN junction. This action is dependent on the quantum physics of semiconductor junctions, and can only take place when the two junctions are properly spaced and the doping concentrations of the three layers are properly proportioned. Two diodes wired in series fail to meet these criteria; the top diode can never “turn on” when it is reverse biased, no matter how much current goes through the bottom diode in the base wire loop. See Bipolar junction transistors, Ch 2 for more details.
That doping concentrations play a crucial part in the special abilities of the transistor is further evidenced by the fact that collector and emitter are not interchangeable. If the transistor is merely viewed as two back-to-back PN junctions, or merely as a plain N-P-N or P-N-P sandwich of materials, it may seem as though either end of the transistor could serve as collector or emitter. This, however, is not true. If connected “backwards” in a circuit, a base-collector current will fail to control current between collector and emitter. Despite the fact that both the emitter and collector layers of a bipolar transistor are of the same doping type (either N or P), collector and emitter are definitely not identical!
Current through the emitter-base junction allows current through the reverse-biased base-collector junction. The action of base current can be thought of as “opening a gate” for current through the collector. More specifically, any given amount of emitter-to-base current permits a limited amount of base-to-collector current. For every electron that passes through the emitter-base junction and on through the base wire, a certain number of electrons pass through the base-collector junction and no more.
In the next section, this current-limiting of the transistor will be investigated in more detail.
Review
• Tested with a multimeter in the “resistance” or “diode check” modes, a transistor behaves like two back-to-back PN (diode) junctions.
• The emitter-base PN junction has a slightly greater forward voltage drop than the collector-base PN junction, because of heavier doping of the emitter semiconductor layer.
• The reverse-biased base-collector junction normally blocks any current from going through the transistor between emitter and collector. However, that junction begins to conduct if current is drawn through the base wire. Base current may be thought of as “opening a gate” for a certain, limited amount of current through the collector.
However, bipolar transistors don’t have to be restricted to these two extreme modes of operation. As we learned in the previous section, base current “opens a gate” for a limited amount of current through the collector. If this limit for the controlled current is greater than zero but less than the maximum allowed by the power supply and load circuit, the transistor will “throttle” the collector current in a mode somewhere between cutoff and saturation. This mode of operation is called the active mode.
An automotive analogy for transistor operation is as follows: cutoff is the condition of no motive force generated by the mechanical parts of the car to make it move. In cutoff mode, the brake is engaged (zero base current), preventing motion (collector current). Active mode is the automobile cruising at a constant, controlled speed (constant, controlled collector current) as dictated by the driver. Saturation is the automobile driving up a steep hill that prevents it from going as fast as the driver wishes. In other words, a “saturated” automobile is one with the accelerator pedal pushed all the way down (base current calling for more collector current than can be provided by the power supply/load circuit).
Let’s set up a circuit for SPICE simulation to demonstrate what happens when a transistor is in its active mode of operation. (Figure below)
“Q” is the standard letter designation for a transistor in a schematic diagram, just as “R” is for resistor and “C” is for capacitor. In this circuit, we have an NPN transistor powered by a battery (V1) and controlled by current through a current source (I1). A current source is a device that outputs a specific amount of current, generating as much or as little voltage across its terminals to ensure that exact amount of current through it. Current sources are notoriously difficult to find in nature (unlike voltage sources, which by contrast attempt to maintain a constant voltage, outputting as much or as little current in the fulfillment of that task), but can be simulated with a small collection of electronic components. As we are about to see, transistors themselves tend to mimic the constant-current behavior of a current source in their ability to regulate current at a fixed value.
In the SPICE simulation, we’ll set the current source at a constant value of 20 µA, then vary the voltage source (V1) over a range of 0 to 2 volts and monitor how much current goes through it. The “dummy” battery (Vammeter) in Figure above with its output of 0 volts serves merely to provide SPICE with a circuit element for current measurement.
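A netlist matching this description might look like the following sketch. The node numbers, model name, and sweep step are assumptions; bf=100 reflects the β of 100 used throughout these analyses.

* NPN transistor in active mode: fixed 20 uA base current, swept supply
i1 0 1 dc 20u
q1 2 1 0 mod1
* dummy 0 V battery for collector current measurement
vammeter 3 2 dc 0
* collector supply, swept by the .dc statement
v1 3 0 dc 0
.model mod1 npn (bf=100)
.dc v1 0 2 0.05
.end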
Sweeping collector voltage 0 to 2 V with base current constant at 20 µA yields constant 2 mA collector current in the saturation region.
The constant base current of 20 µA sets a collector current limit of 2 mA, exactly 100 times as much. Notice how flat the curve is in (Figure above) for collector current over the range of battery voltage from 0 to 2 volts. The only exception to this featureless plot is at the very beginning, where the battery increases from 0 volts to 0.25 volts. There, the collector current increases rapidly from 0 amps to its limit of 2 mA.
Let’s see what happens if we vary the battery voltage over a wider range, this time from 0 to 50 volts. We’ll keep the base current steady at 20 µA. (Figure below)
Sweeping collector voltage 0 to 50 V with base current constant at 20 µA yields constant 2 mA collector current.
Same result! The collector current in Figure above holds absolutely steady at 2 mA, although the battery (v1) voltage varies all the way from 0 to 50 volts. It would appear from our simulation that collector-to-emitter voltage has little effect over collector current, except at very low levels (just above 0 volts). The transistor is acting as a current regulator, allowing exactly 2 mA through the collector and no more.
Now let’s see what happens if we increase the controlling (I1) current from 20 µA to 75 µA, once again sweeping the battery (V1) voltage from 0 to 50 volts and graphing the collector current in Figure below
Sweeping collector voltage 0 to 50 V (.dc v1 0 50 2) with base current constant at 75 µA yields constant 7.5 mA collector current. Other curves are generated by current sweep (i1 15u 75u 15u) in the DC analysis statement (.dc v1 0 50 2 i1 15u 75u 15u).
Not surprisingly, SPICE gives us a similar plot: a flat line, holding steady this time at 7.5 mA—exactly 100 times the base current—over the range of battery voltages from just above 0 volts to 50 volts. It appears that the base current is the deciding factor for collector current, the V1 battery voltage being irrelevant as long as it is above a certain minimum level.
This voltage/current relationship is entirely different from what we’re used to seeing across a resistor. With a resistor, current increases linearly as the voltage across it increases. Here, with a transistor, current from emitter to collector stays limited at a fixed, maximum value no matter how high the voltage across emitter and collector increases.
Often it is useful to superimpose several collector current/voltage graphs for different base currents on the same graph as in Figure below. A collection of curves like this—one curve plotted for each distinct level of base current—for a particular transistor is called the transistor’s characteristic curves:
Voltage collector to emitter vs collector current for various base currents.
Each curve on the graph reflects the collector current of the transistor, plotted over a range of collector-to-emitter voltages, for a given amount of base current. Since a transistor tends to act as a current regulator, limiting collector current to a proportion set by the base current, it is useful to express this proportion as a standard transistor performance measure. Specifically, the ratio of collector current to base current is known as the Beta ratio (symbolized by the Greek letter β):
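β = IC / IB (equivalently, IC = β × IB for a transistor operating in its active mode)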
Sometimes the β ratio is designated as “hfe,” a label used in a branch of mathematical semiconductor analysis known as “hybrid parameters” which strives to achieve precise predictions of transistor performance with detailed equations. Hybrid parameter variables are many, but each is labeled with the general letter “h” and a specific subscript. The variable “hfe” is just another (standardized) way of expressing the ratio of collector current to base current, and is interchangeable with “β.” The β ratio is unitless.
β for any transistor is determined by its design: it cannot be altered after manufacture. It is rare for two transistors of the same design to match exactly because of the physical variables affecting β. If a circuit design relies on equal β ratios between multiple transistors, “matched sets” of transistors may be purchased at extra cost. However, it is generally considered bad design practice to engineer circuits with such dependencies.
The β of a transistor does not remain stable for all operating conditions. For an actual transistor, the β ratio may vary by a factor of over 3 within its operating current limits. For example, a transistor with advertised β of 50 may actually test with Ic/Ib ratios as low as 30 and as high as 100, depending on the amount of collector current, the transistor’s temperature, and frequency of amplified signal, among other factors. For tutorial purposes it is adequate to assume a constant β for any given transistor; realize that real life is not that simple!
Sometimes it is helpful for comprehension to “model” complex electronic components with a collection of simpler, better-understood components. The model in Figure below is used in many introductory electronics texts.
Elementary diode resistor transistor model.
This model casts the transistor as a combination of diode and rheostat (variable resistor). Current through the base-emitter diode controls the resistance of the collector-emitter rheostat (as implied by the dashed line connecting the two components), thus controlling collector current. An NPN transistor is modeled in the figure shown, but a PNP transistor would be only slightly different (only the base-emitter diode would be reversed). This model succeeds in illustrating the basic concept of transistor amplification: how the base current signal can exert control over the collector current. However, I don’t like this model because it miscommunicates the notion of a set amount of collector-emitter resistance for a given amount of base current. If this were true, the transistor wouldn’t regulate collector current at all like the characteristic curves show. Instead of the collector current curves flattening out after their brief rise as the collector-emitter voltage increases, the collector current would be directly proportional to collector-emitter voltage, rising steadily in a straight line on the graph.
A better transistor model, often seen in more advanced textbooks, is shown in Figure below.
Current source model of transistor.
It casts the transistor as a combination of diode and current source, the output of the current source being set at a multiple (β ratio) of the base current. This model is far more accurate in depicting the true input/output characteristics of a transistor: base current establishes a certain amount of collector current, rather than a certain amount of collector-emitter resistance as the first model implies. Also, this model is favored when performing network analysis on transistor circuits, the current source being a well-understood theoretical component. Unfortunately, using a current source to model the transistor’s current-controlling behavior can be misleading: in no way will the transistor ever act as a source of electrical energy. The current source does not model the fact that its source of energy is an external power supply, similar to an amplifier.
Review
• A transistor is said to be in its active mode if it is operating somewhere between fully on (saturated) and fully off (cutoff).
• Base current regulates collector current. By regulate, we mean that no more collector current can exist than what is allowed by the base current.
• The ratio between collector current and base current is called “Beta” (β) or “hfe”.
• β ratios are different for every transistor, and
• β changes for different operating conditions.
Transistor as a Simple Switch
One of the simpler transistor amplifier circuits studied previously illustrated the transistor’s switching ability. (Figure below)
NPN transistor as a simple switch.
It is called the common-emitter configuration because (ignoring the power supply battery) both the signal source and the load share the emitter lead as a common connection point shown in Figure below. This is not the only way in which a transistor may be used as an amplifier, as we will see in later sections of this chapter.
Common-emitter amplifier: The input and output signals both share a connection to the emitter.
Before, a small solar cell current saturated a transistor, illuminating a lamp. Knowing now that transistors are able to “throttle” their collector currents according to the amount of base current supplied by an input signal source, we should see that the brightness of the lamp in this circuit is controllable by the solar cell’s light exposure. When there is just a little light shone on the solar cell, the lamp will glow dimly. The lamp’s brightness will steadily increase as more light falls on the solar cell.
Suppose that we were interested in using the solar cell as a light intensity instrument. We want to measure the intensity of incident light with the solar cell by using its output current to drive a meter movement. It is possible to directly connect a meter movement to a solar cell (Figure below) for this purpose. In fact, the simplest light-exposure meters for photography work are designed like this.
High intensity light directly drives light meter.
Although this approach might work for moderate light intensity measurements, it would not work as well for low light intensity measurements. Because the solar cell has to supply the meter movement’s power needs, the system is necessarily limited in its sensitivity. Supposing that our need here is to measure very low-level light intensities, we are pressed to find another solution.
Transistor as an Amplifier
Perhaps the most direct solution to this measurement problem is to use a transistor (Figure below) to amplify the solar cell’s current so that more meter deflection may be obtained for less incident light.
Cell current must be amplified for low intensity light.
Current through the meter movement in this circuit will be β times the solar cell current. With a transistor β of 100, this represents a substantial increase in measurement sensitivity. It is prudent to point out that the additional power to move the meter needle comes from the battery on the far right of the circuit, not the solar cell itself. All the solar cell’s current does is control battery current to the meter to provide a greater meter reading than the solar cell could provide unaided.
Because the transistor is a current-regulating device, and because meter movement indications are based on the current through the movement coil, meter indication in this circuit should depend only on the current from the solar cell, not on the amount of voltage provided by the battery. This means the accuracy of the circuit will be independent of battery condition, a significant feature! All that is required of the battery is a certain minimum voltage and current output ability to drive the meter full-scale.
Voltage Output due to Current Through a Load Resistor
Another way in which the common-emitter configuration may be used is to produce an output voltage derived from the input signal, rather than a specific output current. Let’s replace the meter movement with a plain resistor and measure voltage between collector and emitter in Figure below.
Common emitter amplifier develops voltage output due to current through load resistor.
With the solar cell darkened (no current), the transistor will be in cutoff mode and behave as an open switch between collector and emitter. This will produce maximum voltage drop between collector and emitter for maximum Voutput, equal to the full voltage of the battery.
At full power (maximum light exposure), the solar cell will drive the transistor into saturation mode, making it behave like a closed switch between collector and emitter. The result will be minimum voltage drop between collector and emitter, or almost zero output voltage. In actuality, a saturated transistor can never achieve zero voltage drop between collector and emitter because of the two PN junctions through which collector current must travel. However, this “collector-emitter saturation voltage” will be fairly low, around several tenths of a volt, depending on the specific transistor used.
For light exposure levels somewhere between zero and maximum solar cell output, the transistor will be in its active mode, and the output voltage will be somewhere between zero and full battery voltage. An important quality to note here about the common-emitter configuration is that the output voltage is inverted with respect to the input signal. That is, the output voltage decreases as the input signal increases. For this reason, the common-emitter amplifier configuration is referred to as an inverting amplifier.
A quick SPICE simulation (Figure below) of the circuit in Figure below will verify our qualitative conclusions about this amplifier circuit.
Common emitter schematic with node numbers and corresponding SPICE netlist.
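A netlist consistent with the circuit and the results described below might look like this sketch. The 5 kΩ collector load is implied by saturation at roughly 3 mA from the 15 V supply; the node numbers and sweep step are assumptions.

* common-emitter amplifier, solar cell modeled as a swept current source
i1 0 1 dc 0
q1 2 1 0 mod1
* collector load resistor; output taken between node 2 and ground
r1 3 2 5k
v1 3 0 dc 15
.model mod1 npn (bf=100)
.dc i1 0 50u 2u
.end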
Common emitter: collector voltage output vs base current input.
At the beginning of the simulation in Figure above where the current source (solar cell) is outputting zero current, the transistor is in cutoff mode and the full 15 volts from the battery is shown at the amplifier output (between nodes 2 and 0). As the solar cell’s current begins to increase, the output voltage proportionally decreases, until the transistor reaches saturation at 30 µA of base current (3 mA of collector current). Notice how the output voltage trace on the graph is perfectly linear (1 volt steps from 15 volts to 1 volt) until the point of saturation, where it never quite reaches zero. This is the effect mentioned earlier, where a saturated transistor can never achieve exactly zero voltage drop between collector and emitter due to internal junction effects. What we do see is a sharp output voltage decrease from 1 volt to 0.2261 volts as the input current increases from 28 µA to 30 µA, and then a continuing decrease in output voltage from then on (albeit in progressively smaller steps). The lowest the output voltage ever gets in this simulation is 0.1299 volts, asymptotically approaching zero.
Transistor as an AC Amplifier
So far, we’ve seen the transistor used as an amplifier for DC signals. In the solar cell light meter example, we were interested in amplifying the DC output of the solar cell to drive a DC meter movement, or to produce a DC output voltage. However, this is not the only way in which a transistor may be employed as an amplifier. Often an AC amplifier for amplifying alternating current and voltage signals is desired. One common application of this is in audio electronics (radios, televisions, and public-address systems). Earlier, we saw an example of the audio output of a tuning fork activating a transistor switch. (Figure below) Let’s see if we can modify that circuit to send power to a speaker rather than to a lamp in Figure below.
Transistor switch activated by audio.
In the original circuit, a full-wave bridge rectifier was used to convert the microphone’s AC output signal into a DC voltage to drive the input of the transistor. All we cared about here was turning the lamp on with a sound signal from the microphone, and this arrangement sufficed for that purpose. But now we want to actually reproduce the AC signal and drive a speaker. This means we cannot rectify the microphone’s output anymore, because we need undistorted AC signal to drive the transistor! Let’s remove the bridge rectifier and replace the lamp with a speaker:
Common emitter amplifier drives speaker with audio frequency signal.
Since the microphone may produce voltages exceeding the forward voltage drop of the base-emitter PN (diode) junction, I’ve placed a resistor in series with the microphone. Let’s simulate the circuit in Figure below with SPICE. The netlist is included in (Figure below)
SPICE version of common emitter audio amplifier.
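A netlist consistent with the values given in the text (a 1.5 volt peak, 2000 Hz source, a series base resistor, an 8 Ω speaker, and a 15 volt battery) might look like this sketch; the node numbers, model name, and transient analysis step are assumptions.

* common-emitter audio amplifier with no DC base bias (output clips)
vin 1 0 sin(0 1.5 2k)
* series resistor between microphone and base
r1 1 2 1k
q1 3 2 0 mod1
* speaker modeled as an 8 ohm resistor
rspkr 4 3 8
v1 4 0 dc 15
.model mod1 npn (bf=100)
.tran 10u 1m
.end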
Signal clipped at collector due to lack of DC base bias.
The simulation plots (Figure above) both the input voltage (an AC signal of 1.5 volt peak amplitude and 2000 Hz frequency) and the current through the 15 volt battery, which is the same as the current through the speaker. What we see here is a full AC sine wave alternating in both positive and negative directions, and a half-wave output current waveform that only pulses in one direction. If we were actually driving a speaker with this waveform, the sound produced would be horribly distorted.
What’s wrong with the circuit? Why won’t it faithfully reproduce the entire AC waveform from the microphone? The answer to this question is found by close inspection of the transistor diode current source model in Figure below.
The model shows that base current flows in one direction.
Collector current is controlled, or regulated, through the constant-current mechanism according to the pace set by the current through the base-emitter diode. Note that both current paths through the transistor are monodirectional: one way only! Despite our intent to use the transistor to amplify an AC signal, it is essentially a DC device, capable of handling currents in a single direction. We may apply an AC voltage input signal between the base and emitter, but electrons cannot flow in that circuit during the part of the cycle that reverse-biases the base-emitter diode junction. Therefore, the transistor will remain in cutoff mode throughout that portion of the cycle. It will “turn on” in its active mode only when the input voltage is of the correct polarity to forward-bias the base-emitter diode, and only when that voltage is sufficiently high to overcome the diode’s forward voltage drop. Remember that bipolar transistors are current-controlled devices: they regulate collector current based on the existence of base-to-emitter current, not base-to-emitter voltage.
The only way we can get the transistor to reproduce the entire waveform as current through the speaker is to keep the transistor in its active mode the entire time. This means we must maintain current through the base during the entire input waveform cycle. Consequently, the base-emitter diode junction must be kept forward-biased at all times. Fortunately, this can be accomplished with a DC bias voltage added to the input signal. By connecting a sufficient DC voltage in series with the AC signal source, forward-bias can be maintained at all points throughout the wave cycle. (Figure below)
Vbias keeps transistor in the active region.
Undistorted output current I(v1) due to Vbias
With the bias voltage source of 2.3 volts in place, the transistor remains in its active mode throughout the entire cycle of the wave, faithfully reproducing the waveform at the speaker. (Figure above) Notice that the input voltage (measured between nodes 1 and 0) fluctuates between about 0.8 volts and 3.8 volts, a peak-to-peak voltage of 3 volts just as expected (source voltage = 1.5 volts peak). The output (speaker) current varies between zero and almost 300 mA, 180° out of phase with the input (microphone) signal.
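As a quick numeric check, here is a minimal Python sketch using the 2.3 volt bias, the 1.5 volt signal peak, and the nominal 0.7 volt diode drop quoted above (an approximation, not a substitute for the SPICE run):

```python
# Verify that a 2.3 V series bias keeps a 1.5 V peak signal forward-biased all cycle.
V_BIAS = 2.3   # DC bias voltage, volts
V_PEAK = 1.5   # AC signal peak amplitude, volts
V_BE   = 0.7   # approximate silicon base-emitter forward drop, volts

v_min = V_BIAS - V_PEAK   # most negative excursion of the biased input (0.8 V)
v_max = V_BIAS + V_PEAK   # most positive excursion of the biased input (3.8 V)

print(f"input swings from {v_min:.1f} V to {v_max:.1f} V")
print("transistor stays in active mode" if v_min > V_BE else "transistor cuts off")
```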
The illustration in Figure below is another view of the same circuit, this time with a few oscilloscopes (“scopemeters”) connected at crucial points to display all the pertinent signals.
Input is biased upward at base. Output is inverted.
Biasing
The need for biasing a transistor amplifier circuit to obtain full waveform reproduction is an important consideration. A separate section of this chapter will be devoted entirely to the subject of biasing and biasing techniques. For now, it is enough to understand that biasing may be necessary for proper voltage and current output from the amplifier.
Now that we have a functioning amplifier circuit, we can investigate its voltage, current, and power gains. The generic transistor used in these SPICE analyses has a β of 100, as indicated by the short transistor statistics printout included in the text output in Table below (these statistics were cut from the last two analyses for brevity’s sake).
β is listed under the abbreviation “bf,” which actually stands for “beta, forward”. If we wanted to insert our own β ratio for an analysis, we could have done so on the .model line of the SPICE netlist.
Since β is the ratio of collector current to base current, and we have our load connected in series with the collector terminal of the transistor and our source connected in series with the base, the ratio of output current to input current is equal to beta. Thus, our current gain for this example amplifier is 100, or 40 dB.
Voltage gain is a little more complicated to figure than current gain for this circuit. As always, voltage gain is defined as the ratio of output voltage divided by input voltage. In order to experimentally determine this, we modify our last SPICE analysis to plot output voltage rather than output current so we have two voltage plots to compare in Figure below.
Plotted on the same scale (from 0 to 4 volts), we see that the output waveform in Figure above has a smaller peak-to-peak amplitude than the input waveform, in addition to being at a lower bias voltage, not elevated up from 0 volts like the input. Since voltage gain for an AC amplifier is defined by the ratio of AC amplitudes, we can ignore any DC bias separating the two waveforms. Even so, the input waveform is still larger than the output, which tells us that the voltage gain is less than 1 (a negative dB figure).
To be honest, this low voltage gain is not characteristic of all common-emitter amplifiers. It is a consequence of the great disparity between the input and load resistances. Our input resistance (R1) here is 1000 Ω, while the load (speaker) is only 8 Ω. Because the current gain of this amplifier is determined solely by the β of the transistor, and because that β figure is fixed, the current gain for this amplifier won’t change with variations in either of these resistances. However, voltage gain is dependent on these resistances. If we alter the load resistance, making it a larger value, it will drop a proportionately greater voltage for its range of load currents, resulting in a larger output waveform. Let’s try another simulation, only this time with a 30 Ω load (Figure below) instead of an 8 Ω load.
This time the output voltage waveform in Figure above is significantly greater in amplitude than the input waveform. Looking closely, we can see that the output waveform crests between 0 and about 9 volts: approximately 3 times the amplitude of the input voltage.
We can do another computer analysis of this circuit, this time instructing SPICE to analyze it from an AC point of view, giving us peak voltage figures for input and output instead of a time-based plot of the waveforms. (Table below)
Peak voltage measurements of input and output show an input of 1.5 volts and an output of 4.418 volts. This gives us a voltage gain ratio of 2.9453 (4.418 V / 1.5 V), or 9.3827 dB.
Because the current gain of the common-emitter amplifier is fixed by β, and since the input and output voltages will be equal to the input and output currents multiplied by their respective resistors, we can derive an equation for approximate voltage gain:
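Following that reasoning, the approximate gain is the current gain multiplied by the ratio of the two resistances. A short Python check, using the β of 100, the 1 kΩ input resistor, and the 30 Ω load from this example, compares the approximation with the SPICE figures:

```python
import math

beta  = 100      # current gain of the generic SPICE transistor
r_in  = 1000.0   # resistance in series with the base (R1), ohms
r_out = 30.0     # load resistance in series with the collector, ohms

av_predicted = beta * r_out / r_in    # approximate voltage gain
av_simulated = 4.418 / 1.5            # from the SPICE peak voltage readings

print(f"predicted gain = {av_predicted:.2f} ({20*math.log10(av_predicted):.2f} dB)")
print(f"simulated gain = {av_simulated:.4f} ({20*math.log10(av_simulated):.4f} dB)")
```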
As you can see, the predicted results for voltage gain are quite close to the simulated results. With perfectly linear transistor behavior, the two sets of figures would exactly match. SPICE does a reasonable job of accounting for the many “quirks” of bipolar transistor function in its analysis, hence the slight mismatch in voltage gain based on SPICE’s output.
These voltage gains remain the same regardless of where we measure output voltage in the circuit: across collector and emitter, or across the series load resistor as we did in the last analysis. The amount of output voltage change for any given amount of input voltage will remain the same. Consider the two following SPICE analyses as proof of this. The first simulation in Figure below is time-based, to provide a plot of input and output voltages. You will notice that the two signals are 180° out of phase with each other. The second simulation in Table below is an AC analysis, to provide simple, peak voltage readings for input and output.
We still have a peak output voltage of 4.418 volts with a peak input voltage of 1.5 volts. The only difference from the last set of simulations is the phase of the output voltage.
So far, the example circuits shown in this section have all used NPN transistors. PNP transistors are just as valid to use as NPN in any amplifier configuration, as long as the proper polarity and current directions are maintained, and the common-emitter amplifier is no exception. The output inversion and gain of a PNP transistor amplifier are the same as those of its NPN counterpart; only the battery polarities are different. (Figure below)
PNP version of common emitter amplifier.
Review
• Common-emitter transistor amplifiers are so-called because the input and output voltage points share the emitter lead of the transistor in common with each other, not considering any power supplies.
• Transistors are essentially DC devices: they cannot directly handle voltages or currents that reverse direction. To make them work for amplifying AC signals, the input signal must be offset with a DC voltage to keep the transistor in its active mode throughout the entire cycle of the wave. This is called biasing.
• If the output voltage is measured between emitter and collector on a common-emitter amplifier, it will be 180° out of phase with the input voltage waveform. Thus, the common-emitter amplifier is called an inverting amplifier circuit.
• The current gain of a common-emitter transistor amplifier with the load connected in series with the collector is equal to β. The voltage gain of a common-emitter transistor amplifier is approximately given here:
• Where “Rout” is the resistor connected in series with the collector and “Rin” is the resistor connected in series with the base.
Our next transistor configuration to study is a bit simpler for gain calculations. Called the common-collector configuration, its schematic diagram is shown in Figure below.
Common collector amplifier has collector common to both input and output.
It is called the common-collector configuration because (ignoring the power supply battery) both the signal source and the load share the collector lead as a common connection point as in Figure below.
Common collector: Input is applied to base and collector. Output is from emitter-collector circuit.
It should be apparent that the load resistor in the common-collector amplifier circuit receives both the base and collector currents, being placed in series with the emitter. Since the emitter lead of a transistor is the one handling the most current (the sum of base and collector currents, since base and collector currents always mesh together to form the emitter current), it would be reasonable to presume that this amplifier will have a very large current gain. This presumption is indeed correct: the current gain for a common-collector amplifier is quite large, larger than any other transistor amplifier configuration. However, this is not necessarily what sets it apart from other amplifier designs.
Let’s proceed immediately to a SPICE analysis of this amplifier circuit, and you will be able to immediately see what is unique about this amplifier. The circuit is in Figure below. The netlist is in Figure below.
Common collector amplifier for SPICE.
Unlike the common-emitter amplifier from the previous section, the common-collector produces an output voltage in direct rather than inverse proportion to the rising input voltage. See Figure above. As the input voltage increases, so does the output voltage. Moreover, a close examination reveals that the output voltage is nearly identical to the input voltage, sitting only about 0.7 volts below it.
This is the unique quality of the common-collector amplifier: an output voltage that is nearly equal to the input voltage. Examined from the perspective of output voltage change for a given amount of input voltage change, this amplifier has a voltage gain of almost exactly unity (1), or 0 dB. This holds true for transistors of any β value, and for load resistors of any resistance value.
It is simple to understand why the output voltage of a common-collector amplifier is always nearly equal to the input voltage. Referring to the diode current source transistor model in Figure below, we see that the base current must go through the base-emitter PN junction, which is equivalent to a normal rectifying diode. If this junction is forward-biased (the transistor conducting current in either its active or saturated modes), it will have a voltage drop of approximately 0.7 volts, assuming silicon construction. This 0.7 volt drop is largely independent of the actual magnitude of base current; thus, we can regard it as being constant:
Emitter follower: Emitter voltage follows base voltage (less a 0.7 V VBE drop)
Given the voltage polarities across the base-emitter PN junction and the load resistor, we see that these must add together to equal the input voltage, in accordance with Kirchhoff’s Voltage Law. In other words, the load voltage will always be about 0.7 volts less than the input voltage for all conditions where the transistor is conducting. Cutoff occurs at input voltages below 0.7 volts, and saturation at input voltages in excess of battery (supply) voltage plus 0.7 volts.
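That input-output relationship can be summarized in a few lines of Python. This is an idealized sketch: it assumes a constant 0.7 volt silicon drop and a hypothetical 15 volt supply, and it ignores the transistor’s small saturation voltage:

```python
def follower_output(v_in, v_supply, v_be=0.7):
    """Idealized common-collector (emitter-follower) load voltage."""
    if v_in < v_be:                # cutoff: junction not forward-biased
        return 0.0
    if v_in > v_supply + v_be:     # saturation: output pinned near the supply rail
        return v_supply
    return v_in - v_be             # active: output follows input, less one diode drop

for v in (0.5, 1.0, 5.0, 15.0, 16.5):
    print(f"Vin = {v:4.1f} V  ->  Vload = {follower_output(v, 15.0):4.1f} V")
```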
Because of this behavior, the common-collector amplifier circuit is also known as the voltage-follower or emitter-follower amplifier, because the emitter load voltages follow the input so closely.
Applying the common-collector circuit to the amplification of AC signals requires the same input “biasing” used in the common-emitter circuit: a DC voltage must be added to the AC input signal to keep the transistor in its active mode during the entire cycle. When this is done, the result is the non-inverting amplifier in Figure below.
The results of the SPICE simulation in Figure below show that the output follows the input. The output has the same peak-to-peak amplitude as the input, though the DC level is shifted downward by one VBE diode drop.
Common collector (emitter-follower): Output V3 follows input V1 less a 0.7 V VBE drop.
Here’s another view of the circuit (Figure below) with oscilloscopes connected to several points of interest.
Common collector non-inverting voltage gain is 1.
Since this amplifier configuration doesn’t provide any voltage gain (in fact, in practice it actually has a voltage gain of slightly less than 1), its only amplifying factor is current. The common-emitter amplifier configuration examined in the previous section had a current gain equal to the β of the transistor, being that the input current went through the base and the output (load) current went through the collector, and β by definition is the ratio between the collector and base currents. In the common-collector configuration, though, the load is situated in series with the emitter, and thus its current is equal to the emitter current. With the emitter carrying collector current and base current, the load in this type of amplifier has all the current of the collector running through it plus the input current of the base. This yields a current gain of β plus 1:
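In equation form, the load (emitter) current is IE = IC + IB = βIB + IB = (β + 1)IB, so the current gain Iload/Iin works out to β + 1. For a transistor with β = 100, that is a current gain of 101.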
Once again, PNP transistors are just as valid to use in the common-collector configuration as NPN transistors. The gain calculations are all the same, as is the non-inverting of the amplified signal. The only difference is in voltage polarities and current directions shown in Figure below.
PNP version of the common-collector amplifier.
A popular application of the common-collector amplifier is for regulated DC power supplies, where an unregulated (varying) source of DC voltage is clipped at a specified level to supply regulated (steady) voltage to a load. Of course, zener diodes already provide this function of voltage regulation shown in Figure below.
Zener diode voltage regulator.
However, when used in this direct fashion, the amount of current that may be supplied to the load is usually quite limited. In essence, this circuit regulates voltage across the load by keeping current through the series resistor at a high enough level to drop all the excess power source voltage across it, the zener diode drawing more or less current as necessary to keep the voltage across itself steady. For high-current loads, a plain zener diode voltage regulator would have to shunt a heavy current through the diode to be effective at regulating load voltage in the event of large load resistance or voltage source changes.
One popular way to increase the current-handling ability of a regulator circuit like this is to use a common-collector transistor to amplify current to the load, so that the zener diode circuit only has to handle the amount of current necessary to drive the base of the transistor. (Figure below)
Common collector application: voltage regulator.
There’s really only one caveat to this approach: the load voltage will be approximately 0.7 volts less than the zener diode voltage, due to the transistor’s 0.7 volt base-emitter drop. Since this 0.7 volt difference is fairly constant over a wide range of load currents, a zener diode with a 0.7 volt higher rating can be chosen for the application.
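As a rough sizing sketch in Python (the 12 volt output target and 1 amp load current are hypothetical figures chosen only to illustrate the arithmetic, not values from the figure):

```python
v_load_target = 12.0   # desired regulated output, volts (hypothetical)
i_load        = 1.0    # load current, amps (hypothetical)
beta          = 100    # transistor current gain
v_be          = 0.7    # base-emitter drop, volts

v_zener = v_load_target + v_be   # zener must sit one diode drop above the load voltage
i_base  = i_load / (beta + 1)    # all the zener circuit must supply is the base current

print(f"choose a {v_zener:.1f} V zener; the zener circuit supplies only {i_base*1000:.1f} mA")
```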
Sometimes the high current gain of a single-transistor, common-collector configuration isn’t enough for a particular application. If this is the case, multiple transistors may be staged together in a popular configuration known as a Darlington pair, just an extension of the common-collector concept shown in Figure below.
An NPN Darlington pair.
Darlington pairs essentially place one transistor as the common-collector load for another transistor, thus multiplying their individual current gains. Base current through the upper-left transistor is amplified through that transistor’s emitter, which is directly connected to the base of the lower-right transistor, where the current is again amplified. The overall current gain is as follows:
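A small Python sketch shows the multiplication at work, assuming two hypothetical transistors each with β = 100:

```python
beta1 = 100   # upper-left transistor (hypothetical beta)
beta2 = 100   # lower-right transistor (hypothetical beta)

gain_stage1 = beta1 + 1              # common-collector current gain of the first transistor
gain_stage2 = beta2 + 1              # common-collector current gain of the second transistor
gain_total  = gain_stage1 * gain_stage2

print(f"overall current gain = {gain_total}")        # 101 * 101 = 10201
print(f"input-to-output offset = {2 * 0.7:.1f} V")   # two VBE drops in series
```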
Voltage gain is still nearly equal to 1 if the entire assembly is connected to a load in common-collector fashion, although the load voltage will be a full 1.4 volts less than the input voltage shown in Figure below.
Darlington pair based common-collector amplifier loses two VBE diode drops.
Darlington pairs may be purchased as discrete units (two transistors in the same package), or may be built up from a pair of individual transistors. Of course, if even more current gain is desired than what may be obtained with a pair, Darlington triplet or quadruplet assemblies may be constructed.
Review
• Common-collector transistor amplifiers are so-called because the input and output voltage points share the collector lead of the transistor in common with each other, not considering any power supplies.
• The common-collector amplifier is also known as an emitter-follower.
• The output voltage on a common-collector amplifier will be in phase with the input voltage, making the common-collector a non-inverting amplifier circuit.
• The current gain of a common-collector amplifier is equal to β plus 1. The voltage gain is approximately equal to 1 (in practice, just a little bit less).
• A Darlington pair is a pair of transistors “piggybacked” on one another so that the emitter of one feeds current to the base of the other in common-collector form. The result is an overall current gain equal to the product (multiplication) of their individual common-collector current gains (β plus 1).
The final transistor amplifier configuration (Figure below) we need to study is the common-base amplifier. This configuration is more complex than the other two, and is less common due to its strange operating characteristics.
Common-base amplifier
Why is it Called a Common-base Amplifier?
It is called the common-base configuration because (DC power source aside), the signal source and the load share the base of the transistor as a common connection point shown in Figure below.
Common-base amplifier: Input between emitter and base, output between collector and base.
Perhaps the most striking characteristic of this configuration is that the input signal source must carry the full emitter current of the transistor, as indicated by the heavy arrows in the first illustration. As we know, the emitter current is greater than any other current in the transistor, being the sum of base and collector currents. In the last two amplifier configurations, the signal source was connected to the base lead of the transistor, thus handling the least current possible.
Attenuation of Current in Common-base Amplifiers
Because the input current exceeds all other currents in the circuit, including the output current, the current gain of this amplifier is actually less than 1 (notice how Rload is connected to the collector, thus carrying slightly less current than the signal source). In other words, it attenuates current rather than amplifying it. With common-emitter and common-collector amplifier configurations, the transistor parameter most closely associated with gain was β. In the common-base circuit, we follow another basic transistor parameter: the ratio between collector current and emitter current, which is a fraction always less than 1. This fractional value for any transistor is called the alpha ratio, or α ratio.
Boosting Signal Voltage in Common-base Amplifiers
Since it obviously can’t boost signal current, it only seems reasonable to expect it to boost signal voltage. A SPICE simulation of the circuit in Figure below will vindicate that assumption.
Common-base circuit for DC SPICE analysis.
Notice in Figure above that the output voltage goes from practically nothing (cutoff) to 15.75 volts (saturation) with the input voltage being swept over a range of 0.6 volts to 1.2 volts. In fact, the output voltage plot doesn’t show a rise until about 0.7 volts at the input, and cuts off (flattens) at about 1.12 volts input. This represents a rather large voltage gain with an output voltage span of 15.75 volts and an input voltage span of only 0.42 volts: a gain ratio of 37.5, or 31.48 dB. Notice also how the output voltage (measured across Rload) actually exceeds the power supply (15 volts) at saturation, due to the series-aiding effect of the input voltage source.
A second set of SPICE analyses (circuit in Figure below) with an AC signal source (and DC bias voltage) tells the same story: a high voltage gain.
Common-base circuit for SPICE AC analysis.
As you can see, the input and output waveforms in Figure below are in phase with each other. This tells us that the common-base amplifier is non-inverting.
The AC SPICE analysis in Table below at a single frequency of 2 kHz provides input and output voltages for gain calculation.
Voltage figures from the second analysis (Table above) show a voltage gain of 42.74 (4.274 V / 0.1 V), or 32.617 dB.
Here’s another view of the circuit in Figure below, summarizing the phase relations and DC offsets of various signals in the circuit just simulated.
Phase relationships and offsets for NPN common base amplifier.
. . . and for a PNP transistor: Figure below.
Phase relationships and offsets for PNP common base amplifier.
Predicting Voltage Gain
Predicting voltage gain for the common-base amplifier configuration is quite difficult, and involves approximations of transistor behavior that are difficult to measure directly. Unlike the other amplifier configurations, where voltage gain was either set by the ratio of two resistors (common-emitter), or fixed at an unchangeable value (common-collector), the voltage gain of the common-base amplifier depends largely on the amount of DC bias on the input signal. As it turns out, the internal transistor resistance between emitter and base plays a major role in determining voltage gain, and this resistance changes with different levels of current through the emitter.
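Though the exact figures depend on the full circuit, the trend is easy to sketch in Python using the rule-of-thumb internal resistance rE ≈ 26mV/IE (developed further in the cascode section). The 5 kΩ load below is purely hypothetical, chosen to show the direction of the effect rather than to reproduce the simulated numbers:

```python
# Internal emitter resistance falls as emitter current rises, so the
# approximate common-base voltage gain (on the order of Rload / rE) rises with bias.
R_LOAD = 5000.0   # hypothetical collector load, ohms, for illustration only

for i_e_ma in (0.5, 1.0, 2.0, 4.0):
    r_e  = 26.0 / i_e_ma          # ohms, from rE ~ 26 mV / IE (IE in mA)
    gain = R_LOAD / r_e           # very rough voltage gain estimate
    print(f"IE = {i_e_ma:3.1f} mA   rE = {r_e:5.1f} ohm   rough gain = {gain:6.0f}")
```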
While this phenomenon is difficult to explain, it is rather easy to demonstrate through the use of computer simulation. What I’m going to do here is run several SPICE simulations on a common-base amplifier circuit (Figure previous), changing the DC bias voltage slightly (vbias in Figure below) while keeping the AC signal amplitude and all other circuit parameters constant. As the voltage gain changes from one simulation to another, different output voltage amplitudes will be noted.
Although these analyses will all be conducted in the “transfer function” mode, each was first “proofed” in the transient analysis mode (voltage plotted over time) to ensure that the entire wave was being faithfully reproduced and not “clipped” due to improper biasing. See “*.tran 0.02m 0.78m” in Figure below, the “commented out” transient analysis statement. Gain calculations cannot be based on waveforms that are distorted. SPICE can calculate the small signal DC gain for us with the “.tf v(4) vin” statement. The output is v(4) and the input is vin.
SPICE net list: Common-base, transfer function (voltage gain) for various DC bias voltages; note the .tf v(4) vin statement. SPICE net list: Common-base amp current gain; transfer function for DC current gain I(vin)/Iin; note the .tf I(vin) Iin statement.
At the command line, spice -b filename.cir produces a printed output due to the .tf statement: transfer_function, output_impedance, and input_impedance. The abbreviated output listing is from runs with vbias at 0.85, 0.90, 0.95, 1.00 V as recorded in Table below.
A trend should be evident in Table above. With increases in DC bias voltage, voltage gain (transfer_function) increases as well. We can see that the voltage gain is increasing because each subsequent simulation (vbias= 0.85, 0.8753, 0.90, 0.95, 1.00 V) produces greater gain (transfer_function= 37.6, 39.4, 40.8, 42.7, 44.0), respectively. The changes are largely due to minuscule variations in bias voltage.
The last three lines of Table above (right) show the I(v1)/Iin current gain of 0.99. (The last two lines look invalid.) This makes sense for β=100: α = β/(β+1) = 100/(100+1) = 0.99. The combination of low current gain (always less than 1) and somewhat unpredictable voltage gain conspire against the common-base design, relegating it to few practical applications.
Those few applications include radio frequency amplifiers. The grounded base helps shield the input at the emitter from the collector output, preventing instability in RF amplifiers. The common base configuration is usable at higher frequencies than common emitter or common collector. See “Class C common-base 750 mW RF power amplifier” Ch 9. For a more elaborate circuit see “Class A common-base small-signal high gain amplifier” Ch 9.
Review
• Common-base transistor amplifiers are so-called because the input and output voltage points share the base lead of the transistor in common with each other, not considering any power supplies.
• The current gain of a common-base amplifier is always less than 1. The voltage gain is a function of input and output resistances, and also the internal resistance of the emitter-base junction, which is subject to change with variations in DC bias voltage. Suffice to say that the voltage gain of a common-base amplifier can be very high.
• The ratio of a transistor’s collector current to emitter current is called α. The α value for any transistor is always less than unity, or in other words, less than 1.
While the C-B (common-base) amplifier is known for wider bandwidth than the C-E (common-emitter) configuration, the low input impedance (10s of Ω) of C-B is a limitation for many applications. The solution is to precede the C-B stage by a low gain C-E stage which has moderately high input impedance (kΩs). See Figure below. The stages are in a cascode configuration, stacked in series, as opposed to cascaded for a standard amplifier chain. See “Capacitor coupled three stage common-emitter amplifier” (Capacitor coupled) for a cascade example. The cascode amplifier configuration has both wide bandwidth and a moderately high input impedance.
The cascode amplifier is combined common-emitter and common-base. This is an AC circuit equivalent with batteries and capacitors replaced by short circuits.
Bandwidth Capacitance and the Miller Effect
The key to understanding the wide bandwidth of the cascode configuration is the Miller effect. The Miller effect is the multiplication of the bandwidth-robbing collector-base capacitance by voltage gain Av. This C-B capacitance is smaller than the E-B capacitance. Thus, one would think that the C-B capacitance would have little effect. However, in the C-E configuration, the collector output signal is out of phase with the input at the base. The collector signal capacitively coupled back opposes the base signal. Moreover, the signal across the collector-base capacitance is (1-Av) times larger than the base signal. Keep in mind that Av is a negative number for the inverting C-E amplifier. Thus, the small C-B capacitance appears (1+|Av|) times larger than its actual value. This capacitive gain-reducing feedback increases with frequency, reducing the high frequency response of a C-E amplifier.
The approximate voltage gain of the C-E amplifier in Figure below is -RL/rEE. The emitter current is set to 1.0 mA by biasing. rEE = 26mV/IE = 26mV/1.0mA = 26 Ω. Thus, Av = -RL/rEE = -4700/26 = -181. The pn2222 datasheet lists Ccbo = 8 pF.[FAR] The Miller capacitance is Ccbo(1-Av). With gain Av = -181 (negative since it is inverting gain), Cmiller = Ccbo(1-Av) = 8pF(1-(-181)) = 1456pF.
A common-base configuration is not subject to the Miller effect because the grounded base shields the collector signal from being fed back to the emitter input. Thus, a C-B amplifier has better high frequency response. To have a moderately high input impedance, the C-E stage is still desirable. The key is to reduce the gain (to about 1) of the C-E stage, which reduces the Miller effect C-B feedback to 1·CCBO. The total C-B feedback is the feedback capacitance 1·CCBO plus the actual capacitance CCBO for a total of 2·CCBO. The Miller capacitance for a C-E stage with a gain of -1 is Cmiller = Ccbo(1-Av) = Ccbo(1-(-1)) = 2·Ccbo.
The way to reduce the common-emitter gain is to reduce the load resistance. The gain of a C-E amplifier is approximately RC/rEE. The internal emitter resistance rEE at 1mA emitter current is 26Ω. For details on the 26Ω, see “Derivation of REE”. The collector load RC is the resistance of the emitter of the C-B stage loading the C-E stage, 26Ω again. The C-E stage gain is thus approximately Av = RC/rEE = 26/26 = 1, and its Miller capacitance is Cmiller = Ccbo(1-Av) = 8pF(1-(-1)) = 16pF. We now have a moderately high input impedance C-E stage without suffering the Miller effect, but no C-E dB voltage gain. The C-B stage provides a high voltage gain, AV = -181. Current gain of the cascode is β for the C-E stage times approximately 1 for the C-B stage, or β overall. Thus, the cascode has the moderately high input impedance of the C-E, good gain, and the good bandwidth of the C-B.
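The arithmetic above is easy to check in a short Python sketch, using the 8 pF datasheet figure and the 26 Ω value of rEE at 1 mA (the printed values differ slightly from the rounded figures in the text):

```python
C_CBO = 8e-12    # pn2222 collector-base capacitance from the datasheet, farads
R_L   = 4700.0   # common-emitter collector load, ohms
R_EE  = 26.0     # internal emitter resistance at 1 mA (26 mV / 1 mA), ohms

def miller_capacitance(c_cb, av):
    """Effective input capacitance contributed by C-B feedback: Ccb * (1 - Av)."""
    return c_cb * (1 - av)

av_ce      = -R_L / R_EE    # plain common-emitter stage, about -181
av_cascode = -R_EE / R_EE   # C-E stage loaded by the C-B emitter, about -1

print(f"plain C-E:    Av = {av_ce:7.1f}   Cmiller = {miller_capacitance(C_CBO, av_ce)*1e12:6.0f} pF")
print(f"cascode C-E:  Av = {av_cascode:7.1f}   Cmiller = {miller_capacitance(C_CBO, av_cascode)*1e12:6.0f} pF")
```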
SPICE: Cascode and common-emitter for comparison.
Cascode Vs. Common-Emitter Amplifier Comparison
The SPICE version of both a cascode amplifier, and for comparison, a common-emitter amplifier is shown in Figure above. The netlist is in Table below. The AC source V3 drives both amplifiers via node 4. The bias resistors for this circuit are calculated in an example problem cascode.
SPICE waveforms. Note that Input is multiplied by 10 for visibility.
The waveforms in Figure above show the operation of the cascode stage. The input signal is displayed multiplied by 10 so that it may be shown with the outputs. Note that the Cascode, Common-emitter, and Va (intermediate point) outputs are all inverted from the input. Both the Cascode and Common-emitter have large amplitude outputs. The Va point has a DC level of about 10V, about half way between 20V and ground. The signal is larger than can be accounted for by a C-E gain of 1; it is three times larger than expected.
Cascode vs. common-emitter bandwidth.
Figure above shows the frequency response of both the cascode and common-emitter amplifiers. The SPICE statements responsible for the AC analysis are part of the netlist listing in Table above.
Note that the “ac 1” is necessary at the end of the V3 statement. The cascode has marginally better mid-band gain. However, we are primarily looking for the bandwidth measured at the -3dB points, down from the midband gain for each amplifier. This is shown by the vertical solid lines in Figure above. It is also possible to print the data of interest from nutmeg, the SPICE graphical viewer, to the screen (command, first line):
Index 22 gives the midband dB gain for Cascode vm(3)=47.5dB and Common-emitter vm(13)=45.4dB. Out of many printed lines, Index 33 was the closest to being 3dB down from 45.4dB at 42.0dB for the Common-emitter circuit. The corresponding Index 33 frequency is approximately 2 MHz, the common-emitter bandwidth. Index 37 vm(3)=44.6dB is approximately 3dB down from 47.5dB. The corresponding Index 37 frequency is 5 MHz, the cascode bandwidth. Thus, the cascode amplifier has a wider bandwidth. We are not concerned with the low frequency degradation of gain. It is due to the capacitors, which could be remedied with larger ones. The 5 MHz bandwidth of our cascode example, while better than the common-emitter example, is not exemplary for an RF (radio frequency) amplifier. A pair of RF or microwave transistors with lower interelectrode capacitances should be used for higher bandwidth. Before the invention of the RF dual gate MOSFET, the BJT cascode amplifier could have been found in UHF (ultra high frequency) TV tuners.
REVIEW
• A cascode amplifier consists of a common-emitter stage loaded by the emitter of a common-base stage.
• The heavily loaded C-E stage has a low gain of 1, overcoming the Miller effect.
• A cascode amplifier has a high gain, moderately high input impedance, a high output impedance, and a high bandwidth.
In the common-emitter section of this chapter, we saw a SPICE analysis where the output waveform resembled a half-wave rectified shape: only half of the input waveform was reproduced, with the other half being completely cut off. Since our purpose at that time was to reproduce the entire waveshape, this constituted a problem. The solution to this problem was to add a small bias voltage to the amplifier input so that the transistor stayed in active mode throughout the entire wave cycle. This addition was called a bias voltage.
A half-wave output is not problematic for some applications. In fact, some applications may necessitate this very kind of amplification. Because it is possible to operate an amplifier in modes other than full-wave reproduction and specific applications require different ranges of reproduction, it is useful to describe the degree to which an amplifier reproduces the input waveform by designating it according to class. Amplifier class operation is categorized with alphabetical letters: A, B, C, and AB.
For Class A operation, the entire input waveform is faithfully reproduced. Although I didn’t introduce this concept back in the common-emitter section, this is what we were hoping to attain in our simulations. Class A operation can only be obtained when the transistor spends its entire time in the active mode, never reaching either cutoff or saturation. To achieve this, sufficient DC bias voltage is usually set at the level necessary to drive the transistor exactly halfway between cutoff and saturation. This way, the AC input signal will be perfectly “centered” between the amplifier’s high and low signal limit levels.
Class A: The amplifier output is a faithful reproduction of the input.
Class B operation is what we had the first time an AC signal was applied to the common-emitter amplifier with no DC bias voltage. The transistor spent half its time in active mode and the other half in cutoff with the input voltage too low (or even of the wrong polarity!) to forward-bias its base-emitter junction.
Class B: Bias is such that half (180°) of the waveform is reproduced.
By itself, an amplifier operating in class B mode is not very useful. In most circumstances, the severe distortion introduced into the waveshape by eliminating half of it would be unacceptable. However, class B operation is a useful mode of biasing if two amplifiers are operated as a push-pull pair, each amplifier handling only half of the waveform at a time:
Class B push pull amplifier: Each transistor reproduces half of the waveform. Combining the halves produces a faithful reproduction of the whole wave.
Transistor Q1 “pushes” (drives the output voltage in a positive direction with respect to ground), while transistor Q2 “pulls” the output voltage (in a negative direction, toward 0 volts with respect to ground). Individually, each of these transistors is operating in class B mode, active only for one-half of the input waveform cycle. Together, however, both function as a team to produce an output waveform identical in shape to the input waveform.
A decided advantage of the class B (push-pull) amplifier design over the class A design is greater output power capability. With a class A design, the transistor dissipates considerable energy in the form of heat because it never stops conducting current. At all points in the wave cycle it is in the active (conducting) mode, conducting substantial current and dropping substantial voltage. There is substantial power dissipated by the transistor throughout the cycle. In a class B design, each transistor spends half the time in cutoff mode, where it dissipates zero power (zero current = zero power dissipation). This gives each transistor a time to “rest” and cool while the other transistor carries the burden of the load. Class A amplifiers are simpler in design, but tend to be limited to low-power signal applications for the simple reason of transistor heat dissipation.
Another class of amplifier operation known as class AB, is somewhere between class A and class B: the transistor spends more than 50% but less than 100% of the time conducting current.
If the input signal bias for an amplifier is slightly negative (opposite of the bias polarity for class A operation), the output waveform will be further “clipped” than it was with class B biasing, resulting in an operation where the transistor spends most of the time in cutoff mode:
Class C: Conduction is for less than a half cycle (< 180°).
At first, this scheme may seem utterly pointless. After all, how useful could an amplifier be if it clips the waveform as badly as this? If the output is used directly with no conditioning of any kind, it would indeed be of questionable utility. However, with the application of a tank circuit (parallel resonant inductor-capacitor combination) to the output, the occasional output surge produced by the amplifier can set in motion a higher-frequency oscillation maintained by the tank circuit. This may be likened to a machine where a heavy flywheel is given an occasional “kick” to keep it spinning:
Class C amplifier driving a resonant circuit.
Called class C operation, this scheme also enjoys high power efficiency due to the fact that the transistor(s) spend the vast majority of time in the cutoff mode, where they dissipate zero power. The rate of output waveform decay (decreasing oscillation amplitude between “kicks” from the amplifier) is exaggerated here for the benefit of illustration. Because of the tuned tank circuit on the output, this circuit is usable only for amplifying signals of definite, fixed amplitude. A class C amplifier may be used in an FM (frequency modulation) radio transmitter. However, the class C amplifier may not directly amplify an AM (amplitude modulated) signal due to distortion.
Another kind of amplifier operation, significantly different from Class A, B, AB, or C, is called Class D. It is not obtained by applying a specific measure of bias voltage as are the other classes of operation, but requires a radical re-design of the amplifier circuit itself. It is a little too early in this chapter to investigate exactly how a class D amplifier is built, but not too early to discuss its basic principle of operation.
A class D amplifier reproduces the profile of the input voltage waveform by generating a rapidly-pulsing square wave output. The duty cycle of this output waveform (time “on” versus total cycle time) varies with the instantaneous amplitude of the input signal. The plots in Figure below demonstrate this principle.
Class D amplifier: Input signal and unfiltered output.
The greater the instantaneous voltage of the input signal, the greater the duty cycle of the output squarewave pulse. If there can be any goal stated of the class D design, it is to avoid active-mode transistor operation. Since the output transistor of a class D amplifier is never in the active mode, only cutoff or saturated, there will be little heat energy dissipated by it. This results in very high power efficiency for the amplifier. Of course, the disadvantage of this strategy is the overwhelming presence of harmonics on the output. Fortunately, since these harmonic frequencies are typically much greater than the frequency of the input signal, these can be filtered out by a low-pass filter with relative ease, resulting in an output more closely resembling the original input signal waveform. Class D technology is typically seen where extremely high power levels and relatively low frequencies are encountered, such as in industrial inverters (devices converting DC into AC power to run motors and other large devices) and high-performance audio amplifiers.
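To make the duty-cycle idea concrete, here is a minimal Python sketch of the comparator-style modulation a class D stage performs. The 1 kHz signal and 20 kHz carrier are arbitrary illustrative choices, not values from any circuit in this chapter:

```python
import math

F_SIGNAL  = 1000.0    # input signal frequency, Hz (arbitrary)
F_CARRIER = 20000.0   # switching (carrier) frequency, Hz (arbitrary)

def pwm_output(t):
    """Compare the input to a triangle carrier; the output is either fully on or fully off."""
    signal   = math.sin(2 * math.pi * F_SIGNAL * t)   # -1 .. +1
    phase    = (t * F_CARRIER) % 1.0
    triangle = 4 * abs(phase - 0.5) - 1               # -1 .. +1 triangle carrier
    return 1 if signal > triangle else 0              # saturated or cut off, never active

# Average the square wave over one carrier period to estimate the duty cycle.
for t0 in (0.0, 0.00025, 0.0005, 0.00075):
    samples = [pwm_output(t0 + k / (100 * F_CARRIER)) for k in range(100)]
    print(f"t = {t0*1000:4.2f} ms   duty cycle ~ {sum(samples)/len(samples):.2f}")
```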
A term you will likely come across in your studies of electronics is something called quiescent, which is a modifier designating the zero input condition of a circuit. Quiescent current, for example, is the amount of current in a circuit with zero input signal voltage applied. Bias voltage in a transistor circuit forces the transistor to operate at a different level of collector current with zero input signal voltage than it would without that bias voltage. Therefore, the amount of bias in an amplifier circuit determines its quiescent values.
In a class A amplifier, the quiescent current should be exactly half of its saturation value (halfway between saturation and cutoff, cutoff by definition being zero). Class B and class C amplifiers have quiescent current values of zero, since these are supposed to be cutoff with no signal applied. Class AB amplifiers have very low quiescent current values, just above cutoff. To illustrate this graphically, a “load line” is sometimes plotted over a transistor’s characteristic curves to illustrate its range of operation while connected to a load resistance of specific value shown in Figure below.
Example load line drawn over transistor characteristic curves from Vsupply to saturation current.
A load line is a plot of collector-to-emitter voltage over a range of collector currents. At the lower-right corner of the load line, voltage is at maximum and current is at zero, representing a condition of cutoff. At the upper-left corner of the line, voltage is at zero while current is at a maximum, representing a condition of saturation. Dots marking where the load line intersects the various transistor curves represent realistic operating conditions for those base currents given.
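A short Python sketch makes the two endpoints and the class A midpoint explicit; the 15 volt supply and 1 kΩ load used here are hypothetical values for illustration only:

```python
# Load line for a resistive collector load: Vce = Vsupply - Ic * Rload.
V_SUPPLY = 15.0     # collector supply, volts (hypothetical)
R_LOAD   = 1000.0   # collector load resistance, ohms (hypothetical)

i_sat   = V_SUPPLY / R_LOAD    # upper-left end of the load line (saturation)
v_cut   = V_SUPPLY             # lower-right end of the load line (cutoff)
i_quiet = i_sat / 2            # class A quiescent point: middle of the load line
v_quiet = V_SUPPLY - i_quiet * R_LOAD

print(f"cutoff point:     Vce = {v_cut:.1f} V, Ic = 0 mA")
print(f"saturation point: Vce = 0 V,    Ic = {i_sat*1000:.1f} mA")
print(f"class A Q-point:  Vce = {v_quiet:.1f} V, Ic = {i_quiet*1000:.1f} mA")
```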
Quiescent operating conditions may be shown on this graph in the form of a single dot along the load line. For a class A amplifier, the quiescent point will be in the middle of the load line as in (Figure below).
Quiescent point (dot) for class A.
In this illustration, the quiescent point happens to fall on the curve representing a base current of 40 µA. If we were to change the load resistance in this circuit to a greater value, it would affect the slope of the load line, since a greater load resistance would limit the maximum collector current at saturation, but would not change the collector-emitter voltage at cutoff. Graphically, the result is a load line with a different upper-left point and the same lower-right point as in (Figure below)
Load line resulting from increased load resistance.
Note how the new load line doesn’t intercept the 75 µA curve along its flat portion as before. This is very important to realize because the non-horizontal portion of a characteristic curve represents a condition of saturation. Having the load line intercept the 75 µA curve outside of the curve’s horizontal range means that the amplifier will be saturated at that amount of base current. Increasing the load resistor value is what caused the load line to intercept the 75 µA curve at this new point, and it indicates that saturation will occur at a lesser value of base current than before.
With the old, lower-value load resistor in the circuit, a base current of 75 µA would yield a proportional collector current (base current multiplied by β). In the first load line graph, a base current of 75 µA gave a collector current almost twice what was obtained at 40 µA, as the β ratio would predict. With the new load line, however, collector current increases only marginally between base currents of 40 µA and 75 µA, because the transistor begins to lose sufficient collector-emitter voltage to continue to regulate collector current.
To maintain linear (no-distortion) operation, transistor amplifiers shouldn’t be operated at points where the transistor will saturate; that is, at points where the load line intersects a collector current curve outside of its horizontal (active) portion. We’d have to add a few more curves to the graph in Figure below before we could tell just how far we could “push” this transistor with increased base currents before it saturates.
More base current curves shows saturation detail.
It appears in this graph that the highest-current point on the load line falling on the straight portion of a curve is the point on the 50 µA curve. This new point should be considered the maximum allowable input signal level for class A operation. Also for class A operation, the bias should be set so that the quiescent point is halfway between this new maximum point and cutoff shown in Figure below.
New quiescent point avoids saturation region
Now that we know a little more about the consequences of different DC bias voltage levels, it is time to investigate practical biasing techniques. So far, I’ve shown a small DC voltage source (battery) connected in series with the AC input signal to bias the amplifier for whatever desired class of operation. In real life, the connection of a precisely-calibrated battery to the input of an amplifier is simply not practical. Even if it were possible to customize a battery to produce just the right amount of voltage for any given bias requirement, that battery would not remain at its manufactured voltage indefinitely. Once it started to discharge and its output voltage drooped, the amplifier would begin to drift toward class B operation.
Take this circuit, illustrated in the common-emitter section for a SPICE simulation, for instance, in Figure below.
Impractical base battery bias.
That 2.3 volt “Vbias” battery would not be practical to include in a real amplifier circuit. A far more practical method of obtaining bias voltage for this amplifier would be to develop the necessary 2.3 volts using a voltage divider network connected across the 15 volt battery. After all, the 15 volt battery is already there by necessity, and voltage divider circuits are easy to design and build. Let’s see how this might look in Figure below.
Voltage divider bias.
If we choose a pair of resistor values for R2 and R3 that will produce 2.3 volts across R3 from a total of 15 volts (such as 8466 Ω for R2 and 1533 Ω for R3), we should have our desired value of 2.3 volts between base and emitter for biasing with no signal input. The only problem is, this circuit configuration places the AC input signal source directly in parallel with R3 of our voltage divider. This is not acceptable, as the AC source will tend to overpower any DC voltage dropped across R3. Parallel components must have the same voltage, so if an AC voltage source is directly connected across one resistor of a DC voltage divider, the AC source will “win” and there will be no DC bias voltage added to the signal.
One way to make this scheme work, although it may not be obvious why it will work, is to place a coupling capacitor between the AC voltage source and the voltage divider as in Figure below.
Coupling capacitor prevents voltage divider bias from flowing into signal generator.
The capacitor forms a high-pass filter between the AC source and the DC voltage divider, passing almost all of the AC signal voltage on to the transistor while blocking all DC voltage from being shorted through the AC signal source. This makes much more sense if you understand the superposition theorem and how it works. According to superposition, any linear, bilateral circuit can be analyzed in a piecemeal fashion by only considering one power source at a time, then algebraically adding the effects of all power sources to find the final result. If we were to separate the capacitor and R2—R3 voltage divider circuit from the rest of the amplifier, it might be easier to understand how this superposition of AC and DC would work.
With only the AC signal source in effect, and a capacitor with an arbitrarily low impedance at signal frequency, almost all the AC voltage appears across R3:
Due to the coupling capacitor’s very low impedance at the signal frequency, it behaves much like a piece of wire, thus can be omitted for this step in superposition analysis.
With only the DC source in effect, the capacitor appears to be an open circuit, and thus neither it nor the shorted AC signal source will have any effect on the operation of the R2—R3 voltage divider in Figure below.
The capacitor appears to be an open circuit as far at the DC analysis is concerned
Combining these two separate analyses in Figure below, we get a superposition of (almost) 1.5 volts AC and 2.3 volts DC, ready to be connected to the base of the transistor.
Combined AC and DC circuit.
Enough talk—it’s about time for a SPICE simulation of the whole amplifier circuit in Figure below. We will use a capacitor value of 100 µF to obtain an arbitrarily low (0.796 Ω) impedance at 2000 Hz:
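Both the 2.3 volt divider output and the 0.796 Ω reactance figure are easy to verify with a few lines of Python:

```python
import math

V_CC   = 15.0               # supply voltage, volts
R2, R3 = 8466.0, 1533.0     # divider resistors, ohms
C      = 100e-6             # coupling capacitor, farads
F      = 2000.0             # signal frequency, hertz

v_divider = V_CC * R3 / (R2 + R3)   # unloaded divider output
x_c = 1 / (2 * math.pi * F * C)     # capacitive reactance at the signal frequency

print(f"unloaded divider output = {v_divider:.2f} V")    # about 2.30 V
print(f"coupling capacitor reactance = {x_c:.3f} ohm")   # about 0.796 ohm
```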
Note the substantial distortion in the output waveform in Figure above. The sine wave is being clipped during most of the input signal’s negative half-cycle. This tells us the transistor is entering into cutoff mode when it shouldn’t (I’m assuming a goal of class A operation as before). Why is this? This new biasing technique should give us exactly the same amount of DC bias voltage as before, right?
With the capacitor and R2—R3 resistor network unloaded, it will provide exactly 2.3 volts worth of DC bias. However, once we connect this network to the transistor, it is no longer unloaded. Current drawn through the base of the transistor will load the voltage divider, thus reducing the DC bias voltage available for the transistor. Using the diode current source transistor model in Figure below to illustrate, the bias problem becomes evident.
Diode transistor model shows loading of voltage divider.
A voltage divider’s output depends not only on the size of its constituent resistors, but also on how much current is being divided away from it through a load. The base-emitter PN junction of the transistor is a load that decreases the DC voltage dropped across R3, due to the fact that the bias current joins with R3‘s current to go through R2, upsetting the divider ratio formerly set by the resistance values of R2 and R3. To obtain a DC bias voltage of 2.3 volts, the values of R2 and/or R3 must be adjusted to compensate for the effect of base current loading. To increase the DC voltage dropped across R3, lower the value of R2, raise the value of R3, or both.
The new resistor values of 6 kΩ and 4 kΩ (R2 and R3, respectively) in Figure above result in class A waveform reproduction, just the way we wanted.
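The difference between the two divider designs is easiest to see by reducing each to its Thevenin equivalent, as in this short Python sketch (the resistor pairs are the ones discussed above; any base current drawn from the divider costs that current times the Thevenin resistance in lost bias voltage):

```python
V_CC = 15.0   # supply voltage, volts

def thevenin(r2, r3):
    """Thevenin equivalent of the bias divider as seen from the transistor's base."""
    v_th = V_CC * r3 / (r2 + r3)   # open-circuit (unloaded) bias voltage
    r_th = r2 * r3 / (r2 + r3)     # source resistance the base current must flow through
    return v_th, r_th

for r2, r3 in ((8466.0, 1533.0), (6000.0, 4000.0)):
    v_th, r_th = thevenin(r2, r3)
    print(f"R2 = {r2:6.0f}  R3 = {r3:6.0f}:  unloaded {v_th:.2f} V, "
          f"each mA of base current costs {r_th/1000:.2f} V of bias")
```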
Review
• Class A operation is an amplifier biased to be in the active mode throughout the entire waveform cycle, thus faithfully reproducing the whole waveform.
• Class B operation is an amplifier biased so that only half of the input waveform gets reproduced: either the positive half or the negative half. The transistor spends half its time in the active mode and half its time cutoff. Complementary pairs of transistors running in class B operation are often used to deliver high power amplification in audio signal systems, each transistor of the pair handling a separate half of the waveform cycle. Class B operation delivers better power efficiency than a class A amplifier of similar output power.
• Class AB operation is an amplifier biased at a point somewhere between class A and class B.
• Class C is an amplifier biased to amplify only a small portion of the waveform. Most of the transistor’s time is spent in cutoff mode. In order for there to be a complete waveform at the output, a resonant tank circuit is often used as a “flywheel” to maintain oscillations for a few cycles after each “kick” from the amplifier. Because the transistor is not conducting most of the time, power efficiencies are high for a class C amplifier.
• Class D operation requires an advanced circuit design, and functions on the principle of representing instantaneous input signal amplitude by the duty cycle of a high-frequency squarewave. The output transistor(s) never operate in active mode, only cutoff and saturation. Little heat energy dissipated makes energy efficiency high.
• DC bias voltage on the input signal, necessary for certain classes of operation (especially class A and class C), may be obtained through the use of a voltage divider and coupling capacitor rather than a battery connected in series with the AC signal source.
Although transistor switching circuits operate without bias, it is unusual for analog circuits to operate without bias. One of the few examples is “TR One, one transistor radio” TR One, Ch 9 with an amplified AM (amplitude modulation) detector. Note the lack of a bias resistor at the base in that circuit. In this section we look at a few basic bias circuits which can set a selected emitter current IE. Given a desired emitter current IE, what values of bias resistors are required, RB, RE, etc?
Base Bias Resistor
The simplest biasing applies a base-bias resistor between the base and a base battery VBB. It is convenient to use the existing VCC supply instead of a new bias supply. An example of an audio amplifier stage using base-biasing is “Crystal radio with one transistor . . . ” crystal radio, Ch 9. Note the resistor from the base to the battery terminal. A similar circuit is shown in Figure below.
Write a KVL (Kirchhoff’s voltage law) equation about the loop containing the battery, RB, and the VBE diode drop on the transistor in Figure below. Note that we use VBB for the base supply, even though it is actually VCC. If β is large we can make the approximation that IC =IE. For silicon transistors VBE≅0.7V.
Base-bias
Silicon small signal transistors typically have a β in the range of 100-300. Assuming that we have a β=100 transistor, what value of base-bias resistor is required to yield an emitter current of 1mA?
Solving the IE base-bias equation for RB and substituting β, VBB, VBE, and IE yields 930kΩ. The closest standard value is 910kΩ.
What is the emitter current with a 910kΩ resistor? What is the emitter current if we randomly get a β=300 transistor?
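The same arithmetic can be run in Python, using the base-bias relation implied by the KVL loop above, IE ≈ β(VBB - VBE)/RB:

```python
BETA_NOMINAL = 100
V_BB, V_BE   = 10.0, 0.7   # base supply (the VCC supply here) and silicon B-E drop, volts
I_E_TARGET   = 1e-3        # desired emitter current, amps

# IE ~ beta * (VBB - VBE) / RB, so solve for RB at the nominal beta.
r_b = BETA_NOMINAL * (V_BB - V_BE) / I_E_TARGET
print(f"calculated RB = {r_b/1000:.0f} k-ohm")            # 930 k

R_B_STD = 910e3                                           # nearest standard value
for beta in (100, 300):
    i_e = beta * (V_BB - V_BE) / R_B_STD
    print(f"beta = {beta:3d}:  IE = {i_e*1000:.2f} mA")   # 1.02 mA and 3.07 mA
```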
The emitter current is little changed in using the standard value 910kΩ resistor. However, with a change in β from 100 to 300, the emitter current has tripled. This is not acceptable in a power amplifier if we expect the collector voltage to swing from near VCC to near ground. However, for low level signals from micro-volts to about a volt, the bias point can be centered for a β of the square root of (100·300)=173. The bias point will still drift by a considerable amount. However, low level signals will not be clipped.
Base-bias by itself is not suitable for high emitter currents, as used in power amplifiers. The base-biased emitter current is not temperature stable. Thermal runaway is the result of high emitter current causing a temperature increase which causes an increase in emitter current, which further increases temperature.
Collector-Feedback Bias
Variations in bias due to temperature and beta may be reduced by moving the VBB end of the base-bias resistor to the collector as in Figure below. If the emitter current were to increase, the voltage drop across RC increases, decreasing VC, decreasing IB fed back to the base. This, in turn, decreases the emitter current, correcting the original increase.
Write a KVL equation about the loop containing the battery, RC , RB , and the VBE drop. Substitute IC≅IE and IB≅IE/β. Solving for IE yields the IE CFB-bias equation. Solving for IB yields the IB CFB-bias equation.
Collector-feedback bias.
Find the required collector feedback bias resistor for an emitter current of 1 mA, a 4.7K collector load resistor, and a transistor with β=100. Find the collector voltage VC. It should be approximately midway between VCC and ground.
The closest standard value to the 460k collector feedback bias resistor is 470k. Find the emitter current IE with the 470 K resistor. Recalculate the emitter current for a transistor with β=100 and β=300.
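The same calculation can be sketched in Python, carrying over the 10 volt supply and 0.7 volt VBE from the base-bias example:

```python
V_CC, V_BE = 10.0, 0.7
BETA       = 100
R_C        = 4700.0
I_E_TARGET = 1e-3

# KVL around Vcc, Rc, Rb and the B-E junction:
#   Vcc - VBE = IE*Rc + (IE/beta)*Rb   ->   Rb = beta*(Vcc - VBE - IE*Rc)/IE
r_b = BETA * (V_CC - V_BE - I_E_TARGET * R_C) / I_E_TARGET
print(f"calculated RB = {r_b/1000:.0f} k-ohm")             # 460 k

R_B_STD = 470e3                                            # nearest standard value
for beta in (100, 300):
    i_e = (V_CC - V_BE) / (R_C + R_B_STD / beta)
    print(f"beta = {beta:3d}:  IE = {i_e*1000:.3f} mA")    # 0.989 mA and 1.484 mA

v_c = V_CC - ((V_CC - V_BE) / (R_C + R_B_STD / BETA)) * R_C
print(f"collector voltage at beta=100 ~ {v_c:.2f} V")      # roughly mid-supply
```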
We see that as beta changes from 100 to 300, the emitter current increases from 0.989mA to 1.48mA. This is an improvement over the previous base-bias circuit which had an increase from 1.02mA to 3.07mA. Collector feedback bias is twice as stable as base-bias with respect to beta variation.
Emitter-Bias
Inserting a resistor RE in the emitter circuit as in Figure below causes degeneration, also known as negative feedback. This opposes a change in emitter current IE due to temperature changes, resistor tolerances, beta variation, or power supply tolerance. Typical tolerances are as follows: resistor— 5%, beta— 100-300, power supply— 5%. Why might the emitter resistor stabilize a change in current? The polarity of the voltage drop across RE is due to the collector battery VCC. The end of the resistor closest to the (-) battery terminal is (-), the end closest to the (+) terminal is (+). Note that the (-) end of RE is connected via VBB battery and RB to the base. Any increase in current flow through RE will increase the magnitude of negative voltage applied to the base circuit, decreasing the base current, decreasing the emitter current. This decreasing emitter current partially compensates the original increase.
Emitter-bias
Note that base-bias battery VBB is used instead of VCC to bias the base in Figure above. Later we will show that the emitter-bias is more effective with a lower base bias battery. Meanwhile, we write the KVL equation for the loop through the base-emitter circuit, paying attention to the polarities on the components. We substitute IB≅IE/β and solve for emitter current IE. This equation can be solved for RB , equation: RB emitter-bias, Figure above.
Before applying the equations: RB emitter-bias and IE emitter-bias, Figure above, we need to choose values for RC and RE. RC is related to the collector supply VCC and the desired collector current IC, which we assume is approximately the emitter current IE. Normally the bias point for VC is set to half of VCC, though it could be set higher to compensate for the voltage drop across the emitter resistor RE. The collector current is whatever we require or choose. It could range from micro-Amps to Amps depending on the application and transistor rating. We choose IC = 1mA, typical of a small-signal transistor circuit. We calculate a value for RC and choose a close standard value. An emitter resistor which is 10-50% of the collector load resistor usually works well.
Our first example sets the base-bias supply too high, at VBB = VCC = 10V, to show why a lower voltage is desirable. Determine the required value of base-bias resistor RB. Choose a standard value resistor. Calculate the emitter current for β=100 and β=300. Compare the stabilization of the current to prior bias circuits.
An 883k resistor was calculated for RB, and an 870k resistor was chosen. At β=100, IE is 1.01mA.
For β=300 the emitter currents are shown in Table below.
Table above shows that for VBB = 10V, emitter-bias does not do a very good job of stabilizing the emitter current. The emitter-bias example is better than the previous base-bias example, but not by much. The key to effective emitter bias is lowering the base supply VBB nearer to the amount of emitter bias.
How much emitter bias do we have? Rounding, that is emitter current times emitter resistor: IERE = (1mA)(470Ω) = 0.47V. In addition, we need to overcome the VBE = 0.7V. Thus, we need VBB > (0.47 + 0.7)V, or > 1.17V. If the emitter current deviates, this number will change compared with the fixed base supply VBB, causing a correction to base current IB and emitter current IE. A good value for VBB > 1.17V is 2V.
The calculated base resistor of 83k is much lower than the previous 883k. We choose 82k from the list of standard values. The emitter currents with the 82k RB for β=100 and β=300 are:
Comparing the emitter currents for emitter-bias with VBB = 2V at β=100 and β=300 to the previous bias circuit examples in Table below, we see considerable improvement at 1.75mA, though, not as good as the 1.48mA of collector feedback.
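The emitter-bias equations lend themselves to a small helper, sketched below, which assumes a constant VBE of 0.7 V. The same two functions reproduce the 910 Ω and 1.5 V variations worked in the following paragraphs.

```python
# Emitter-bias helpers:
#   RB = beta*((VBB - VBE)/IE - RE)     and     IE = (VBB - VBE)/(RE + RB/beta)
# A constant VBE of 0.7 V is assumed throughout.
VBE = 0.7

def rb_emitter_bias(VBB, RE, IE, beta):
    return beta * ((VBB - VBE) / IE - RE)

def ie_emitter_bias(VBB, RE, RB, beta):
    return (VBB - VBE) / (RE + RB / beta)

# VBB = 2 V example: RB calculates to 83 k; 82 k is the standard value chosen.
print(rb_emitter_bias(VBB=2.0, RE=470, IE=1e-3, beta=100))      # 83000
for beta in (100, 300):
    IE = ie_emitter_bias(VBB=2.0, RE=470, RB=82e3, beta=beta)
    print(f"beta = {beta}: IE = {IE*1e3:.2f} mA")               # 1.01 mA, 1.75 mA
```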
How can we improve the performance of emitter-bias? Either increase the emitter resistor RE or decrease the base-bias supply VBB or both. As an example, we double the emitter resistor to the nearest standard value of 910Ω.
The calculated RB = 39k is a standard value resistor. No need to recalculate IE for β = 100. For β = 300, it is:
The performance of the emitter-bias circuit with a 910 Ω emitter resistor is much improved. See Table below.
As an exercise, rework the emitter-bias example with the emitter resistor reverted back to 470Ω, and the base-bias supply reduced to 1.5V.
The 33k base resistor is a standard value, emitter current at β = 100 is OK. The emitter current at β = 300 is:
Table below compares the exercise results 1mA and 1.38mA to the previous examples.
The emitter-bias equations have been repeated in Figure below with the internal emitter resistance included for better accuracy. The internal emitter resistance is the resistance in the emitter circuit contained within the transistor package. This internal resistance rEE is significant when the (external) emitter resistor RE is small, or even zero. The value of internal resistance rEE is a function of emitter current IE, Table below.
For reference the 26mV approximation is listed as equation rEE in Figure below.
Emitter-bias equations with internal emitter resistance rEE included.
The more accurate emitter-bias equations in Figure above may be derived by writing a KVL equation. Alternatively, start with equations IE emitter-bias and RB emitter-bias in Figure previous, substituting RE with rEE+RE. The result is equations IE EB and RB EB, respectively in Figure above.
Redo the RB calculation in the previous example emitter-bias with the inclusion of rEE and compare the results.
The inclusion of rEE in the calculation results in a lower value of the base resistor RB, as shown in Table below. It falls below the standard value 82k resistor instead of above it.
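A short recalculation shows the effect of including rEE. It assumes the VBB = 2 V, RE = 470 Ω example values and the 26 mV approximation for rEE.

```python
# Emitter-bias with the internal emitter resistance rEE = 26 mV / IE included.
# Uses the VBB = 2 V, RE = 470 ohm example values; VBE = 0.7 V assumed.
VBB, VBE, RE, IE, beta = 2.0, 0.7, 470, 1e-3, 100

rEE = 26e-3 / IE                                        # 26 ohms at 1 mA
RB_without = beta * ((VBB - VBE) / IE - RE)             # 83.0 k without rEE
RB_with    = beta * ((VBB - VBE) / IE - (RE + rEE))     # 80.4 k with rEE
print(RB_without, RB_with)
```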
Bypass Capacitor for RE
One problem with emitter bias is that a considerable part of the output signal is dropped across the emitter resistor RE (Figure below). This voltage drop across the emitter resistor is in series with the base and of opposite polarity compared with the input signal. (This is similar to a common collector configuration having <1 gain.) This degeneration severely reduces the gain from base to collector. The solution for AC signal amplifiers is to bypass the emitter resistor with a capacitor. This restores the AC gain since the capacitor is a short for AC signals. The DC emitter current still experiences degeneration in the emitter resistor, thus, stabilizing the DC current.
Cbypass is required to prevent AC gain reduction.
What value should the bypass capacitor be? That depends on the lowest frequency to be amplified. For radio frequencies Cbypass would be small. For an audio amplifier extending down to 20Hz it will be large. A “rule of thumb” for the bypass capacitor is that the reactance should be 1/10 of the emitter resistance or less. The capacitor should be designed to accommodate the lowest frequency being amplified. The capacitor for an audio amplifier covering 20Hz to 20kHz would be:
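Working through that rule of thumb, assuming the 470 Ω emitter resistor from the earlier examples, gives a capacitor on the order of 170 µF; the sketch below shows the arithmetic.

```python
# Bypass capacitor rule of thumb: XC <= RE/10 at the lowest frequency amplified.
# Assumes RE = 470 ohms (from the earlier examples) and a 20 Hz lower limit.
import math

RE, f_low = 470, 20
XC = RE / 10                                 # 47 ohms
C = 1 / (2 * math.pi * f_low * XC)
print(f"Cbypass >= {C*1e6:.0f} uF")          # roughly 169 uF
```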
Note that the internal emitter resistance rEE is not bypassed by the bypass capacitor.
Voltage Divider Bias
Stable emitter bias requires a low voltage base bias supply, Figure below. The alternative to a base supply VBB is a voltage divider based on the collector supply VCC.
Voltage Divider bias replaces base battery with voltage divider.
The design technique is to first work out an emitter-bias design, then convert it to the voltage divider bias configuration by using Thevenin’s Theorem. The steps are shown graphically in Figure below. Draw the voltage divider without assigning values. Break the divider loose from the base. (The base of the transistor is the load.) Apply Thevenin’s Theorem to yield a single Thevenin equivalent resistance Rth and voltage source Vth.
Thevenin’s Theorem converts voltage divider to single supply Vth and resistance Rth.
The Thevenin equivalent resistance is the resistance from the load point (arrow) with the battery (VCC) reduced to 0 (ground). In other words, R1||R2. The Thevenin equivalent voltage is the open circuit voltage (load removed). This calculation is by the voltage divider ratio method. R1 is obtained by eliminating R2 from the pair of equations for Rth and Vth. The equation for R1 is in terms of the known quantities Rth, Vth, and VCC. Note that Rth is RB, the bias resistor from the emitter-bias design. The equation for R2 is in terms of R1 and Rth.
Convert this previous emitter-bias example to voltage divider bias.
Emitter-bias example converted to voltage divider bias.
These values were previously selected or calculated for an emitter-bias example
Substituting VCC , VBB , RB yields R1 and R2 for the voltage divider bias configuration.
R1 is a standard value of 220k. The closest standard value for R2 corresponding to 38.8k is 39k. This small change does not alter IE enough to justify recalculating it.
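The Thevenin conversion reduces to two lines of arithmetic, sketched below using the RB = 33 kΩ, VBB = 1.5 V values from the exercise above, which reproduce the 220k and 38.8k figures quoted here.

```python
# Thevenin conversion from emitter-bias (Vth = VBB, Rth = RB) to voltage divider bias.
# R1 = Rth*VCC/Vth and R2 = R1*Rth/(R1 - Rth); values from the 1.5 V / 33 k exercise.
VCC, Vth, Rth = 10.0, 1.5, 33e3

R1 = Rth * VCC / Vth
R2 = R1 * Rth / (R1 - Rth)
print(f"R1 = {R1/1e3:.0f} k, R2 = {R2/1e3:.1f} k")   # 220 k and 38.8 k
```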
Problem: Calculate the bias resistors for the cascode amplifier in Figure below. VB2 is the bias voltage for the common emitter stage. VB1 is a fairly high voltage at 11.5V because we want the common-base stage to hold the emitter at 11.5-0.7=10.8V, about 11V. (It will be 10V after accounting for the voltage drop across RB1.) That is, the common-base stage is the load, substituting for a resistor, for the common-emitter stage’s collector. We desire a 1mA emitter current.
Bias for a cascode amplifier. Problem: Convert the base bias resistors for the cascode amplifier to voltage divider bias resistors driven by the VCC of 20V.
The final circuit diagram is shown in the “Practical Analog Circuits” chapter, “Class A cascode amplifier . . . ” cascode, Ch 9 .
Review
• See Figure below.
• Select bias circuit configuration
• Select RC and IE for the intended application. The values for RC and IE should normally set collector voltage VC to 1/2 of VCC.
• Calculate base resistor RB to achieve desired emitter current.
• Recalculate emitter current IE for standard value resistors if necessary.
• For voltage divider bias, perform emitter-bias calculations first, then determine R1 and R2.
• For AC amplifiers, a bypass capacitor in parallel with RE improves AC gain. Set XC≤0.10RE for lowest frequency.
Biasing equations summary.
To overcome the challenge of creating the necessary DC bias voltage for an amplifier’s input signal without resorting to the insertion of a battery in series with the AC signal source, we used a voltage divider connected across the DC power source. To make this work in conjunction with an AC input signal, we “coupled” the signal source to the divider through a capacitor, which acted as a high-pass filter. With that filtering in place, the low impedance of the AC signal source couldn’t “short out” the DC voltage dropped across the bottom resistor of the voltage divider. A simple solution, but not without disadvantages.
Most obvious is the fact that using a high-pass filter capacitor to couple the signal source to the amplifier means that the amplifier can only amplify AC signals. A steady, DC voltage applied to the input would be blocked by the coupling capacitor just as much as the voltage divider bias voltage is blocked from the input source. Furthermore, since capacitive reactance is frequency-dependent, lower-frequency AC signals will not be amplified as much as higher-frequency signals. Non-sinusoidal signals will tend to be distorted, as the capacitor responds differently to each of the signal’s constituent harmonics. An extreme example of this would be a low-frequency square-wave signal in Figure below.
Capacitively coupled low frequency square-wave shows distortion.
Incidentally, this same problem occurs when oscilloscope inputs are set to the “AC coupling” mode as in Figure below. In this mode, a coupling capacitor is inserted in series with the measured voltage signal to eliminate any vertical offset of the displayed waveform due to DC voltage combined with the signal. This works fine when the AC component of the measured signal is of a fairly high frequency, and the capacitor offers little impedance to the signal. However, if the signal is of a low frequency, or contains considerable levels of harmonics over a wide frequency range, the oscilloscope’s display of the waveform will not be accurate. (Figure below) Low frequency signals may be viewed by setting the oscilloscope to “DC coupling” in Figure below.
With DC coupling, the oscilloscope properly indicates the shape of the square wave coming from the signal generator.
Low frequency: With AC coupling, the high-pass filtering of the coupling capacitor distorts the square wave’s shape so that what is seen is not an accurate representation of the real signal.
In applications where the limitations of capacitive coupling (Figure above) would be intolerable, another solution may be used: direct coupling. Direct coupling avoids the use of capacitors or any other frequency-dependent coupling component in favor of resistors. A direct-coupled amplifier circuit is shown in Figure below.
Direct coupled amplifier: direct coupling to speaker.
With no capacitor to filter the input signal, this form of coupling exhibits no frequency dependence. DC and AC signals alike will be amplified by the transistor with the same gain (the transistor itself may tend to amplify some frequencies better than others, but that is another subject entirely!).
If direct coupling works for DC as well as for AC signals, then why use capacitive coupling for any application? One reason might be to avoid any unwanted DC bias voltage naturally present in the signal to be amplified. Some AC signals may be superimposed on an uncontrolled DC voltage right from the source, and an uncontrolled DC voltage would make reliable transistor biasing impossible. The high-pass filtering offered by a coupling capacitor would work well here to avoid biasing problems.
Another reason to use capacitive coupling rather than direct is its relative lack of signal attenuation. Direct coupling through a resistor has the disadvantage of diminishing or attenuating, the input signal so that only a fraction of it reaches the base of the transistor. In many applications, some attenuation is necessary anyway to prevent signal levels from “overdriving” the transistor into cutoff and saturation, so any attenuation inherent to the coupling network is useful anyway. However, some applications require that there be no signal loss from the input connection to the transistor’s base for maximum voltage gain, and a direct coupling scheme with a voltage divider for bias simply won’t suffice.
So far, we’ve discussed a couple of methods for coupling an input signal to an amplifier, but haven’t addressed the issue of coupling an amplifier’s output to a load. The example circuit used to illustrate input coupling will serve well to illustrate the issues involved with output coupling.
In our example circuit, the load is a speaker. Most speakers are electromagnetic in design: that is, they use the force generated by a lightweight electromagnet coil suspended within a strong permanent-magnet field to move a thin paper or plastic cone, producing vibrations in the air which our ears interpret as sound. An applied voltage of one polarity moves the cone outward, while a voltage of the opposite polarity will move the cone inward. To exploit the cone’s full freedom of motion, the speaker must receive true (unbiased) AC voltage. DC bias applied to the speaker coil offsets the cone from its natural center position, and this limits the back-and-forth motion it can sustain from the applied AC voltage without overtraveling. However, our example circuit (Figure above) applies a varying voltage of only one polarity across the speaker, because the speaker is connected in series with the transistor which can only conduct current one way. This would be unacceptable for any high-power audio amplifier.
Somehow we need to isolate the speaker from the DC bias of the collector current so that it only receives AC voltage. One way to achieve this goal is to couple the transistor collector circuit to the speaker through a transformer (Figure below).
Transformer coupling isolates DC from the load (speaker).
The voltage induced in the secondary (speaker-side) of the transformer will be strictly due to variations in collector current because the mutual inductance of a transformer only works on changes in winding current. In other words, only the AC portion of the collector current signal will be coupled to the secondary side for powering the speaker. The speaker will “see” true alternating current at its terminals, without any DC bias.
Transformer output coupling works and has the added benefit of being able to provide impedance matching between the transistor circuit and the speaker coil with custom winding ratios. However, transformers tend to be large and heavy, especially for high-power applications. Also, it is difficult to engineer a transformer to handle signals over a wide range of frequencies, which is almost always required for audio applications. To make matters worse, DC current through the primary winding adds to the magnetization of the core in one polarity only, which tends to make the transformer core saturate more easily in one AC polarity cycle than the other. This problem is reminiscent of having the speaker directly connected in series with the transistor: a DC bias current tends to limit how much output signal amplitude the system can handle without distortion. Generally, though, a transformer can be designed to handle a lot more DC bias current than a speaker without running into trouble, so transformer coupling is still a viable solution in most cases. See the coupling transformer between Q4 and the speaker, Regency TR1, Ch 9 as an example of transformer coupling.
Another method to isolate the speaker from DC bias in the output signal is to alter the circuit a bit and use a coupling capacitor in a manner similar to coupling the input signal (Figure below) to the amplifier.
Capacitor coupling isolates DC from the load.
This circuit in Figure above resembles the more conventional form of common-emitter amplifier, with the transistor collector connected to the battery through a resistor. The capacitor acts as a high-pass filter, passing most of the AC voltage to the speaker while blocking all DC voltage. Again, the value of this coupling capacitor is chosen so that its impedance at the expected signal frequency will be arbitrarily low.
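The size such a capacitor must reach for a low-impedance load is worth noting. The sketch below estimates it for a hypothetical 8 Ω speaker and a 20 Hz lower limit, borrowing the same tenth-of-the-load reactance rule of thumb used earlier for bypass capacitors; the point is simply that low load impedances demand very large coupling capacitors.

```python
# Output coupling capacitor estimate: keep XC small compared with the load
# at the lowest frequency of interest.  The 8 ohm speaker, 20 Hz limit, and
# tenth-of-the-load rule are illustrative assumptions, not values from the schematic.
import math

Z_load, f_low = 8.0, 20.0
XC_max = Z_load / 10                         # 0.8 ohm
C = 1 / (2 * math.pi * f_low * XC_max)
print(f"C >= {C*1e6:.0f} uF")                # roughly 10,000 uF
```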
The blocking of DC voltage from an amplifier’s output, be it via a transformer or a capacitor, is useful not only in coupling an amplifier to a load, but also in coupling one amplifier to another amplifier. “Staged” amplifiers are often used to achieve higher power gains than what would be possible using a single transistor as in Figure below.
Capacitor coupled three stage common-emitter amplifier.
While it is possible to directly couple each stage to the next (via a resistor rather than a capacitor), this makes the whole amplifier very sensitive to variations in the DC bias voltage of the first stage, since that DC voltage will be amplified along with the AC signal until the last stage. In other words, the biasing of the first stage will affect the biasing of the second stage, and so on. However, if the stages are capacitively coupled shown in the above illustration, the biasing of one stage has no effect on the biasing of the next, because DC voltage is blocked from passing on to the next stage.
Transformer coupling between amplifier stages is also a possibility, but less often seen due to some of the problems inherent to transformers mentioned previously. One notable exception to this rule is in radio-frequency amplifiers (Figure below) with small coupling transformers, having air cores (making them immune to saturation effects), that are part of a resonant circuit to block unwanted harmonic frequencies from passing on to subsequent stages. The use of resonant circuits assumes that the signal frequency remains constant, which is typical of radio circuitry. Also, the “flywheel” effect of LC tank circuits allows for class C operation for high efficiency.
Three stage tuned RF amplifier illustrates transformer coupling.
Note the transformer coupling between transistors Q1, Q2, Q3, and Q4, Regency TR1, Ch 9 . The three intermediate frequency (IF) transformers within the dashed boxes couple the IF signal from collector to base of the following transistor IF amplifiers. The intermediate frequency amplifiers are RF amplifiers, though at a different frequency than the antenna RF input.
Having said all this, it must be mentioned that it is possible to use direct coupling within a multi-stage transistor amplifier circuit. In cases where the amplifier is expected to handle DC signals, this is the only alternative.
The trend of electronics to more widespread use of integrated circuits has encouraged the use of direct coupling over transformer or capacitor coupling. The only easily manufactured integrated circuit component is the transistor. Moderate quality resistors can also be produced. Though, transistors are favored. Integrated capacitors of only a few tens of pF are possible. Large capacitors are not integrable. If necessary, these can be external components. The same is true of transformers. Since integrated transistors are inexpensive, as many transistors as possible are substituted for the offending capacitors and transformers. As much direct coupled gain as possible is designed into ICs between the external coupling components. While external capacitors and transformers are used, these are even being designed out if possible. The result is that a modern IC radio (See “IC radio”, Ch 9 ) looks nothing like the original 4-transistor radio Regency TR1, Ch 9 .
Even discrete transistors are inexpensive compared with transformers. Bulky audio transformers can be replaced by transistors. For example, a common-collector (emitter follower) configuration can impedance match a low output impedance like a speaker. It is also possible to replace large coupling capacitors with transistor circuitry.
We still like to illustrate texts with transformer coupled audio amplifiers. The circuits are simple. The component count is low. And, these are good introductory circuits— easy to understand.
The circuit in Figure below (a) is a simplified transformer coupled push-pull audio amplifier. In push-pull, a pair of transistors alternately amplifies the positive and negative portions of the input signal. Neither transistor conducts for no signal input. A positive input signal will be positive at the top of the transformer secondary causing the top transistor to conduct. A negative input will yield a positive signal at the bottom of the secondary, driving the bottom transistor into conduction. Thus the transistors amplify alternate halves of a signal. As drawn, neither transistor in Figure below (a) will conduct for an input below 0.7 Vpeak. A practical circuit connects the secondary center tap to a 0.7 V (or greater) resistor divider instead of ground to bias both transistors for true class B operation.
(a) Transformer coupled push-pull amplifier. (b) Direct coupled complementary-pair amplifier replaces transformers with transistors.
The circuit in Figure above (b) is the modern version which replaces the transformer functions with transistors. Transistors Q1 and Q2 are common emitter amplifiers, inverting the signal with gain from base to collector. Transistors Q3 and Q4 are known as a complementary pair because these NPN and PNP transistors amplify alternate halves (positive and negative, respectively) of the waveform. The parallel connection of the bases allows phase splitting without an input transformer as at (a). The speaker is the emitter load for Q3 and Q4. Parallel connection of the emitters of the NPN and PNP transistors eliminates the center-tapped output transformer at (a). The low output impedance of the emitter follower serves to match the low 8 Ω impedance of the speaker to the preceding common emitter stage. Thus, inexpensive transistors replace transformers. For the complete circuit see “Direct coupled complementary symmetry 3 W audio amplifier,” Ch 9 .
Review
• Capacitive coupling acts like a high-pass filter on the input of an amplifier. This tends to make the amplifier’s voltage gain decrease at lower signal frequencies. Capacitive-coupled amplifiers are all but unresponsive to DC input signals.
• Direct coupling with a series resistor instead of a series capacitor avoids the problem of frequency-dependent gain, but has the disadvantage of reducing amplifier gain for all signal frequencies by attenuating the input signal.
• Transformers and capacitors may be used to couple the output of an amplifier to a load, to eliminate DC voltage from getting to the load.
• Multi-stage amplifiers often make use of capacitive coupling between stages to eliminate problems with the bias from one stage affecting the bias of another.
If some percentage of an amplifier’s output signal is connected to the input, so that the amplifier amplifies part of its own output signal, we have what is known as feedback. Feedback comes in two varieties: positive(also called regenerative), and negative (also called degenerative). Positive feedback reinforces the direction of an amplifier’s output voltage change, while negative feedback does just the opposite.
A familiar example of feedback happens in public-address (“PA”) systems where someone holds the microphone too close to a speaker: a high-pitched “whine” or “howl” ensues, because the audio amplifier system is detecting and amplifying its own noise. Specifically, this is an example of positive or regenerative feedback, as any sound detected by the microphone is amplified and turned into a louder sound by the speaker, which is then detected by the microphone again, and so on . . . the result being a noise of steadily increasing volume until the system becomes “saturated” and cannot produce any more volume.
One might wonder what possible benefit feedback is to an amplifier circuit, given such an annoying example as PA system “howl.” If we introduce positive, or regenerative, feedback into an amplifier circuit, it has the tendency of creating and sustaining oscillations, the frequency of which is determined by the values of components handling the feedback signal from output to input. This is one way to make an oscillator circuit to produce AC from a DC power supply. Oscillators are very useful circuits, and so feedback has a definite, practical application for us. See “Phase shift oscillator” , Ch 9 for a practical application of positive feedback.
Negative feedback, on the other hand, has a “dampening” effect on an amplifier: if the output signal happens to increase in magnitude, the feedback signal introduces a decreasing influence into the input of the amplifier, thus opposing the change in output signal. While positive feedback drives an amplifier circuit toward a point of instability (oscillations), negative feedback drives it the opposite direction: toward a point of stability.
An amplifier circuit equipped with some amount of negative feedback is not only more stable, but it distorts the input waveform less and is generally capable of amplifying a wider range of frequencies. The tradeoff for these advantages (there just has to be a disadvantage to negative feedback, right?) is decreased gain. If a portion of an amplifier’s output signal is “fed back” to the input to oppose any changes in the output, it will require a greater input signal amplitude to drive the amplifier’s output to the same amplitude as before. This constitutes a decreased gain. However, the advantages of stability, lower distortion, and greater bandwidth are worth the tradeoff in reduced gain for many applications.
Let’s examine a simple amplifier circuit and see how we might introduce negative feedback into it, starting with Figure below.
Common-emitter amplifier without feedback.
The amplifier configuration shown here is a common-emitter, with a resistor bias network formed by R1 and R2. The capacitor couples Vinput to the amplifier so that the signal source doesn’t have a DC voltage imposed on it by the R1/R2 divider network. Resistor R3 serves the purpose of controlling voltage gain. We could omit it for maximum voltage gain, but since base resistors like this are common in common-emitter amplifier circuits, we’ll keep it in this schematic.
Like all common-emitter amplifiers, this one inverts the input signal as it is amplified. In other words, a positive-going input voltage causes the output voltage to decrease, or move toward negative, and vice versa. The oscilloscope waveforms are shown in Figure below.
Common-emitter amplifier, no feedback, with reference waveforms for comparison.
Because the output is an inverted, or mirror-image, reproduction of the input signal, any connection between the output (collector) wire and the input (base) wire of the transistor in Figure below will result in negative feedback.
Negative feedback, collector feedback, decreases the output signal.
The resistances of R1, R2, R3, and Rfeedback function together as a signal-mixing network so that the voltage seen at the base of the transistor (with respect to ground) is a weighted average of the input voltage and the feedback voltage, resulting in a signal of reduced amplitude going into the transistor. So, the amplifier circuit in Figure above will have reduced voltage gain, but improved linearity (reduced distortion) and increased bandwidth.
A resistor connecting collector to base is not the only way to introduce negative feedback into this amplifier circuit, though. Another method, although more difficult to understand at first, involves the placement of a resistor between the transistor’s emitter terminal and circuit ground in Figure below.
Emitter feedback: A different method of introducing negative feedback into a circuit.
This new feedback resistor drops voltage proportional to the emitter current through the transistor, and it does so in such a way as to oppose the input signal’s influence on the base-emitter junction of the transistor. Let’s take a closer look at the emitter-base junction and see what difference this new resistor makes in Figure below.
With no feedback resistor connecting the emitter to ground in Figure below (a) , whatever level of input signal (Vinput) makes it through the coupling capacitor and R1/R2/R3 resistor network will be impressed directly across the base-emitter junction as the transistor’s input voltage (VB-E). In other words, with no feedback resistor, VB-E equals Vinput. Therefore, if Vinput increases by 100 mV, then VB-E increases by 100 mV: a change in one is the same as a change in the other, since the two voltages are equal to each other.
Now let’s consider the effects of inserting a resistor (Rfeedback) between the transistor’s emitter lead and ground in Figure below (b).
(a) No feedback vs (b) emitter feedback. A waveform at the collector is inverted with respect to the base. At (b) the emitter waveform is in-phase (emitter follower) with base, out of phase with collector. Therefore, the emitter signal subtracts from the collector output signal.
Note how the voltage dropped across Rfeedback adds with VB-E to equal Vinput. With Rfeedback in the Vinput—VB-E loop, VB-E will no longer be equal to Vinput. We know that Rfeedback will drop a voltage proportional to emitter current, which is in turn controlled by the base current, which is in turn controlled by the voltage dropped across the base-emitter junction of the transistor (VB-E). Thus, if Vinput were to increase in a positive direction, it would increase VB-E, causing more base current, causing more collector (load) current, causing more emitter current, and causing more feedback voltage to be dropped across Rfeedback. This increase of voltage drop across the feedback resistor, though, subtracts from Vinput to reduce the VB-E, so that the actual voltage increase for VB-E will be less than the voltage increase of Vinput. No longer will a 100 mV increase in Vinput result in a full 100 mV increase for VB-E, because the two voltages are not equal to each other.
Consequently, the input voltage has less control over the transistor than before, and the voltage gain for the amplifier is reduced: just what we expected from negative feedback.
In practical common-emitter circuits, negative feedback isn’t just a luxury; it’s a necessity for stable operation. In a perfect world, we could build and operate a common-emitter transistor amplifier with no negative feedback, and have the full amplitude of Vinput impressed across the transistor’s base-emitter junction. This would give us a large voltage gain. Unfortunately, though, the relationship between base-emitter voltage and base-emitter current changes with temperature, as predicted by the “diode equation.” As the transistor heats up, there will be less of a forward voltage drop across the base-emitter junction for any given current. This causes a problem for us, as the R1/R2 voltage divider network is designed to provide the correct quiescent current through the base of the transistor so that it will operate in whatever class of operation we desire (in this example, I’ve shown the amplifier working in class-A mode). If the transistor’s voltage/current relationship changes with temperature, the amount of DC bias voltage necessary for the desired class of operation will change. A hot transistor will draw more bias current for the same amount of bias voltage, making it heat up even more, drawing even more bias current. The result, if unchecked, is called thermal runaway.
Common-collector amplifiers, (Figure below) however, do not suffer from thermal runaway. Why is this? The answer has everything to do with negative feedback.
Common collector (emitter follower) amplifier.
Note that the common-collector amplifier (Figure above) has its load resistor placed in exactly the same spot as we had the Rfeedback resistor in the last circuit in Figure above (b): between emitter and ground. This means that the only voltage impressed across the transistor’s base-emitter junction is the difference between Vinput and Voutput, resulting in a very low voltage gain (usually close to 1 for a common-collector amplifier). Thermal runaway is impossible for this amplifier: if base current happens to increase due to transistor heating, emitter current will likewise increase, dropping more voltage across the load, which in turn subtracts from Vinput to reduce the amount of voltage dropped between base and emitter. In other words, the negative feedback afforded by placement of the load resistor makes the problem of thermal runaway self-correcting. In exchange for a greatly reduced voltage gain, we get superb stability and immunity from thermal runaway.
By adding a “feedback” resistor between emitter and ground in a common-emitter amplifier, we make the amplifier behave a little less like an “ideal” common-emitter and a little more like a common-collector. The feedback resistor value is typically quite a bit less than the load, minimizing the amount of negative feedback and keeping the voltage gain fairly high.
Another benefit of negative feedback, seen clearly in the common-collector circuit, is that it tends to make the voltage gain of the amplifier less dependent on the characteristics of the transistor. Note that in a common-collector amplifier, voltage gain is nearly equal to unity (1), regardless of the transistor’s β. This means, among other things, that we could replace the transistor in a common-collector amplifier with one having a different β and not see any significant changes in voltage gain. In a common-emitter circuit, the voltage gain is highly dependent on β. If we were to replace the transistor in a common-emitter circuit with another of differing β, the voltage gain for the amplifier would change significantly. In a common-emitter amplifier equipped with negative feedback, the voltage gain will still be dependent upon transistor β to some degree, but not as much as before, making the circuit more predictable despite variations in transistor β.
The fact that we have to introduce negative feedback into a common-emitter amplifier to avoid thermal runaway is an unsatisfying solution. Is it possible to avoid thermal runaway without having to suppress the amplifier’s inherently high voltage gain? A best-of-both-worlds solution to this dilemma is available to us if we closely examine the problem: the voltage gain that we have to minimize in order to avoid thermal runaway is the DC voltage gain, not the AC voltage gain. After all, it isn’t the AC input signal that fuels thermal runaway: it’s the DC bias voltage required for a certain class of operation: that quiescent DC signal that we use to “trick” the transistor (fundamentally a DC device) into amplifying an AC signal. We can suppress DC voltage gain in a common-emitter amplifier circuit without suppressing AC voltage gain if we figure out a way to make the negative feedback only function with DC. That is, if we only feed back an inverted DC signal from output to input, but not an inverted AC signal.
The Rfeedback emitter resistor provides negative feedback by dropping a voltage proportional to load current. In other words, negative feedback is accomplished by inserting an impedance into the emitter current path. If we want to feed back DC but not AC, we need an impedance that is high for DC but low for AC. What kind of circuit presents a high impedance to DC but a low impedance to AC? A high-pass filter, of course!
By connecting a capacitor in parallel with the feedback resistor in Figure below, we create the very situation we need: a path from emitter to ground that is easier for AC than it is for DC.
High AC voltage gain reestablished by adding Cbypass in parallel with Rfeedback
The new capacitor “bypasses” AC from the transistor’s emitter to ground, so that no appreciable AC voltage will be dropped from emitter to ground to “feed back” to the input and suppress voltage gain. Direct current, on the other hand, cannot go through the bypass capacitor, and so must travel through the feedback resistor, dropping a DC voltage between emitter and ground which lowers the DC voltage gain and stabilizes the amplifier’s DC response, preventing thermal runaway. Because we want the reactance of this capacitor (XC) to be as low as possible, Cbypass should be sized relatively large. Because the polarity across this capacitor will never change, it is safe to use a polarized (electrolytic) capacitor for the task.
Another approach to the problem of negative feedback reducing voltage gain is to use multi-stage amplifiers rather than single-transistor amplifiers. If the attenuated gain of a single transistor is insufficient for the task at hand, we can use more than one transistor to make up for the reduction caused by feedback. An example circuit showing negative feedback in a three-stage common-emitter amplifier is Figure below.
Feedback around an “odd” number of direct coupled stages produces negative feedback.
The feedback path from the final output to the input is through a single resistor, Rfeedback. Since each stage is a common-emitter amplifier (thus inverting), the odd number of stages from input to output will invert the output signal; the feedback will be negative (degenerative). Relatively large amounts of feedback may be used without sacrificing voltage gain, because the three amplifier stages provide much gain to begin with.
At first, this design philosophy may seem inelegant and perhaps even counter-productive. Isn’t this a rather crude way to overcome the loss in gain incurred through the use of negative feedback, to simply recover gain by adding stage after stage? What is the point of creating a huge voltage gain using three transistor stages if we’re just going to attenuate all that gain anyway with negative feedback? The point, though perhaps not apparent at first, is increased predictability and stability from the circuit as a whole. If the three transistor stages are designed to provide an arbitrarily high voltage gain (in the tens of thousands, or greater) with no feedback, it will be found that the addition of negative feedback causes the overall voltage gain to become less dependent of the individual stage gains, and approximately equal to the simple ratio Rfeedback/Rin. The more voltage gain the circuit has (without feedback), the more closely the voltage gain will approximate Rfeedback/Rin once feedback is established. In other words, voltage gain in this circuit is fixed by the values of two resistors, and nothing more.
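The convergence toward Rfeedback/Rin is easy to demonstrate numerically. The sketch below uses a simplified single-node model of an inverting amplifier with open-loop gain A, ignoring loading and transistor input impedance; the resistor values are arbitrary illustrations, not taken from the schematic.

```python
# Closed-loop gain of an inverting amplifier with feedback resistor Rf and input
# resistor Rin, from a simplified single-node model: |Acl| = A*Rf/(Rf + (A+1)*Rin).
# Resistor values here are arbitrary illustrations.
Rf, Rin = 100e3, 1e3            # Rf/Rin = 100

for A in (1e3, 1e4, 1e5, 1e6):
    Acl = A * Rf / (Rf + (A + 1) * Rin)
    print(f"open-loop gain {A:>9.0f}: closed-loop gain = {Acl:.2f}")
# The closed-loop gain converges on Rf/Rin = 100 as the open-loop gain grows.
```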
This is an advantage for mass-production of electronic circuitry: if amplifiers of predictable gain may be constructed using transistors of widely varied β values, it eases the selection and replacement of components. It also means the amplifier’s gain varies little with changes in temperature. This principle of stable gain control through a high-gain amplifier “tamed” by negative feedback is elevated almost to an art form in electronic circuits called operational amplifiers, or op-amps. You may read much more about these circuits in a later chapter of this book!
Review
• Feedback is the coupling of an amplifier’s output to its input.
• Positive, or regenerative feedback has the tendency of making an amplifier circuit unstable, so that it produces oscillations (AC). The frequency of these oscillations is largely determined by the components in the feedback network.
• Negative, or degenerative feedback has the tendency of making an amplifier circuit more stable, so that its output changes less for a given input signal than without feedback. This reduces the gain of the amplifier, but has the advantage of decreasing distortion and increasing bandwidth (the range of frequencies the amplifier can handle).
• Negative feedback may be introduced into a common-emitter circuit by coupling collector to base, or by inserting a resistor between emitter and ground.
• An emitter-to-ground “feedback” resistor is usually found in common-emitter circuits as a preventative measure against thermal runaway.
• Negative feedback also has the advantage of making amplifier voltage gain more dependent on resistor values and less dependent on the transistor’s characteristics.
• Common-collector amplifiers have much negative feedback, due to the placement of the load resistor between emitter and ground. This feedback accounts for the extremely stable voltage gain of the amplifier, as well as its immunity against thermal runaway.
• Voltage gain for a common-emitter circuit may be re-established without sacrificing immunity to thermal runaway, by connecting a bypass capacitor in parallel with the emitter “feedback resistor.”
• If the voltage gain of an amplifier is arbitrarily high (tens of thousands, or greater), and negative feedback is used to reduce the gain to reasonable levels, it will be found that the gain will approximately equal Rfeedback/Rin. Changes in transistor β or other internal component values will have little effect on voltage gain with feedback in operation, resulting in an amplifier that is stable and easy to design.
Input impedance varies considerably with the circuit configuration shown in Figure below. It also varies with biasing. Not considered here, the input impedance is complex and varies with frequency. For the common-emitter and common-collector it is the resistance in the emitter circuit multiplied by β. That emitter resistance can be both internal and external to the transistor. For the common-collector: Rin = β·RE
It is a bit more complicated for the common-emitter circuit. We need to know the internal emitter resistance rEE. This is given by the 26 mV approximation: rEE ≈ 26mV/IE.
Thus, for the common-emitter circuit, Rin is β·rEE.
As an example, the input resistance of a β = 100 CE configuration biased at 1 mA is Rin = β·rEE = 100 · (26mV/1mA) = 2600 Ω.
Moreover, a more accurate Rin for the common-collector should also have included rEE: Rin = β(RE + rEE).
This equation (above) is also applicable to a common-emitter configuration with an emitter resistor.
Input impedance for the common-base configuration is Rin = rEE.
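These approximations are summarized in the short sketch below. The β = 100, 1 mA operating point matches the example above; the 470 Ω external emitter resistor is an assumed illustrative value.

```python
# Input impedance estimates for the three configurations, using rEE = 26 mV / IE.
# beta = 100 and IE = 1 mA match the example above; RE = 470 ohms is an assumed value.
beta, IE, RE = 100, 1e-3, 470

rEE = 26e-3 / IE                  # 26 ohms
Rin_CE = beta * rEE               # common-emitter (no external RE): 2.6 k
Rin_CC = beta * (RE + rEE)        # common-collector (emitter follower): ~49.6 k
Rin_CB = rEE                      # common-base: 26 ohms
print(Rin_CE, Rin_CC, Rin_CB)
```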
The high input impedance of the common-collector configuration matches high impedance sources. A crystal or ceramic microphone is one such high impedance source. The common-base arrangement is sometimes used in RF (radio frequency) circuits to match a low impedance source, for example, a 50 Ω coaxial cable feed. For moderate impedance sources, the common-emitter is a good match. An example is a dynamic microphone.
The output impedances of the three basic configurations are listed in Figure below. The moderate output impedance of the common-emitter configuration helps make it a popular choice for general use. The low output impedance of the common-collector is put to good use in impedance matching, for example, transformerless matching to a 4 Ohm speaker. There do not appear to be any simple formulas for the output impedances. However, R. Victor Jones develops expressions for output resistance. [RVJ]
Amplifier characteristics, adapted from GE Transistor Manual, Figure 1.21.[GET]
Review
• See Figure above.
4.14: Current Mirror BJTs
Bipolar Junction Transistor or BJT Current Mirror
An often-used circuit applying the bipolar junction transistor is the so-called current mirror, which serves as a simple current regulator, supplying nearly constant current to a load over a wide range of load resistances.
We know that in a transistor operating in its active mode, collector current is equal to base current multiplied by the ratio β. We also know that the ratio between collector current and emitter current is called α. Because collector current is equal to base current multiplied by β, and emitter current is the sum of the base and collector currents, α should be mathematically derivable from β. If you do the algebra, you’ll find that α = β/(β+1) for any transistor.
We’ve seen already how maintaining a constant base current through an active transistor results in the regulation of collector current, according to the β ratio. Well, the α ratio works similarly: if emitter current is held constant, collector current will remain at a stable, regulated value so long as the transistor has enough collector-to-emitter voltage drop to maintain it in its active mode. Therefore, if we have a way of holding emitter current constant through a transistor, the transistor will work to regulate collector current at a constant value.
Remember that the base-emitter junction of a BJT is nothing more than a PN junction, just like a diode, and that the “diode equation” specifies how much current will go through a PN junction given forward voltage drop and junction temperature:
If both junction voltage and temperature are held constant, then the PN junction current will be constant. Following this rationale, if we were to hold the base-emitter voltage of a transistor constant, then its emitter current will be constant, given a constant temperature. (Figure below)
Constant VBE gives constant IB, constant IE, and constant IC.
This constant emitter current, multiplied by a constant α ratio, gives a constant collector current through Rload, if enough battery voltage is available to keep the transistor in its active mode for any change in Rload‘s resistance.
To maintain a constant voltage across the transistor’s base-emitter junction use a forward-biased diode to establish a constant voltage of approximately 0.7 volts, and connect it in parallel with the base-emitter junction as in Figure below.
Diode junction 0.7 V maintains constant base voltage, and constant base current.
The voltage dropped across the diode probably won’t be 0.7 volts exactly. The exact amount of forward voltage dropped across it depends on the current through the diode, and the diode’s temperature, all in accordance with the diode equation. If diode current is increased (say, by reducing the resistance of Rbias), its voltage drop will increase slightly, increasing the voltage drop across the transistor’s base-emitter junction, which will increase the emitter current by the same proportion, assuming the diode’s PN junction and the transistor’s base-emitter junction are well-matched to each other. In other words, transistor emitter current will closely equal diode current at any given time. If you change the diode current by changing the resistance value of Rbias, then the transistor’s emitter current will follow suit, because the emitter current is described by the same equation as the diode’s, and both PN junctions experience the same voltage drop.
Remember, the transistor’s collector current is almost equal to its emitter current, as the α ratio of a typical transistor is almost unity (1). If we have control over the transistor’s emitter current by setting diode current with a simple resistor adjustment, then we likewise have control over the transistor’s collector current. In other words, collector current mimics, or mirrors, diode current.
Current through resistor Rload is therefore a function of current set by the bias resistor, the two being nearly equal. This is the function of the current mirror circuit: to regulate current through the load resistor by conveniently adjusting the value of Rbias. Current through the diode is described by a simple equation: power supply voltage minus diode voltage (almost a constant value), divided by the resistance of Rbias.
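That relationship is simple enough to sketch. The supply voltage and Rbias value below are arbitrary example numbers chosen to set roughly 1 mA; only the form of the calculation matters.

```python
# Current mirror estimate: the load current mirrors the reference (diode) current,
# which is set by Rbias.  Supply and resistor values are arbitrary examples.
V_supply = 15.0       # volts
V_diode  = 0.7        # forward drop of the diode (or diode-connected transistor)
Rbias    = 14.3e3     # ohms

I_ref  = (V_supply - V_diode) / Rbias
I_load = I_ref        # alpha is nearly 1, so collector current ~ reference current
print(f"regulated load current = {I_load*1e3:.2f} mA")   # about 1 mA
```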
To better match the characteristics of the two PN junctions (the diode junction and the transistor base-emitter junction), a transistor may be used in place of a regular diode, as in Figure below (a).
Current mirror circuits.
Because temperature is a factor in the “diode equation,” and we want the two PN junctions to behave identically under all operating conditions, we should maintain the two transistors at exactly the same temperature. This is easily done using discrete components by gluing the two transistor cases back-to-back. If the transistors are manufactured together on a single chip of silicon (as a so-called integrated circuit, or IC), the designers should locate the two transistors close to one another to facilitate heat transfer between them.
The current mirror circuit shown with two NPN transistors in Figure above (a) is sometimes called a current-sinking type, because the regulating transistor conducts current to the load from ground (“sinking” current), rather than from the positive side of the battery (“sourcing” current). If we wish to have a grounded load, and a current sourcing mirror circuit, we may use PNP transistors like Figure above (b).
While resistors can be manufactured in ICs, it is easier to fabricate transistors. IC designers avoid some resistors by replacing load resistors with current sources. A circuit like an operational amplifier built from discrete components will have a few transistors and many resistors. An integrated circuit version will have many transistors and a few resistors. In Figure below, one voltage reference, Q1, drives multiple current sources: Q2, Q3, and Q4. If Q2 and Q3 are equal area transistors the load currents Iload will be equal. If we need a 2·Iload, parallel Q2 and Q3. Better yet, fabricate one transistor, say Q3, with twice the area of Q2. Current I3 will then be twice I2. In other words, load current scales with transistor area.
Multiple current mirrors may be slaved from a single (Q1 - Rbias) voltage source.
Note that it is customary to draw the base voltage line right through the transistor symbols for multiple current mirrors! Or in the case of Q4 in Figure above, two current sources are associated with a single transistor symbol. The load resistors are drawn almost invisible to emphasize the fact that these do not exist in most cases. The load is often another (multiple) transistor circuit, say a pair of emitters of a differential amplifier, for example Q3 and Q4 in “A simple operational amplifier”, Ch 8 . Often, the collector load of a transistor is not a resistor but a current mirror. For example, the collector load of Q4, Ch 8 , is a current mirror (Q2).
For an example of a current mirror with multiple collector outputs see Q13 in the model 741 op-amp , Ch 8 . The Q13 current mirror outputs substitute for resistors as collector loads for Q15 and Q17. We see from these examples that current mirrors are preferred as loads over resistors in integrated circuitry.
Review
• A current mirror is a transistor circuit that regulates current through a load resistance, the regulation point being set by a simple resistor adjustment.
• Transistors in a current mirror circuit must be maintained at the same temperature for precise operation. When using discrete transistors, you may glue their cases together to do this.
• Current mirror circuits may be found in two basic varieties: the current sinking configuration, where the regulating transistor connects the load to ground; and the current sourcing configuration, where the regulating transistor connects the load to the positive terminal of the DC power supply.
Power dissipation: When a transistor conducts current between collector and emitter, it also drops voltage between those two points. At any given time, the power dissipated by a transistor is equal to the product (multiplication) of collector current and collector-emitter voltage. Just like resistors, transistors are rated for how many watts each can safely dissipate without sustaining damage. High temperature is the mortal enemy of all semiconductor devices, and bipolar transistors tend to be more susceptible to thermal damage than most. Power ratings are always referenced to the temperature of ambient (surrounding) air. When transistors are to be used in hotter environments (>25°C), their power ratings must be derated to avoid a shortened service life.
Reverse voltages: As with diodes, bipolar transistors are rated for maximum allowable reverse-bias voltage across their PN junctions. This includes voltage ratings for the emitter-base junction VEB , collector-base junction VCB , and also from collector to emitter VCE .
VEB , the maximum reverse voltage from emitter to base is approximately 7 V for some small signal transistors. Some circuit designers use discrete BJTs as 7 V zener diodes with a series current limiting resistor. Transistor inputs to analog integrated circuits also have a VEB rating, which, if exceeded, will cause damage; no zenering of the inputs is allowed.
The rating for maximum collector-emitter voltage VCE can be thought of as the maximum voltage it can withstand while in full-cutoff mode (no base current). This rating is of particular importance when using a bipolar transistor as a switch. A typical value for a small signal transistor is 60 to 80 V. In power transistors, this could range to 1000 V, for example, a horizontal deflection transistor in a cathode ray tube display.
Collector current: A maximum value for collector current IC will be given by the manufacturer in amps. Typical values for small signal transistors are 10s to 100s of mA, 10s of A for power transistors. Understand that this maximum figure assumes a saturated state (minimum collector-emitter voltage drop). If the transistor is not saturated, and, in fact, is dropping substantial voltage between collector and emitter, the maximum power dissipation rating will probably be exceeded before the maximum collector current rating. Just something to keep in mind when designing a transistor circuit!
Saturation voltages: Ideally, a saturated transistor acts as a closed switch contact between collector and emitter, dropping zero voltage at full collector current. In reality, this is never true. Manufacturers will specify the maximum voltage drop of a transistor at saturation, both between the collector and emitter, and also between base and emitter (forward voltage drop of that PN junction). Collector-emitter voltage drop at saturation is generally expected to be 0.3 volts or less, but this figure is, of course, dependent on the specific type of transistor. Low voltage transistors, low VCE , show lower saturation voltages. The saturation voltage is also lower for higher base drive current.
Base-emitter forward voltage drop, VBE , is similar to that of an equivalent diode, ≅0.7 V, which should come as no surprise.
Beta: The ratio of collector current to base current, β is the fundamental parameter characterizing the amplifying ability of a bipolar transistor. β is usually assumed to be a constant figure in circuit calculations, but unfortunately, this is far from true in practice. As such, manufacturers provide a set of β (or “hfe”) figures for a given transistor over a wide range of operating conditions, usually in the form of maximum/minimum/typical ratings. It may surprise you to see just how widely β can be expected to vary within normal operating limits. One popular small-signal transistor, the 2N3903, is advertised as having a β ranging from 15 to 150 depending on the amount of collector current. Generally, β is highest for medium collector currents, decreasing for very low and very high collector currents. hfe is small signal AC gain; hFE is large AC signal gain or DC gain.
Alpha: the ratio of collector current to emitter current, α=IC/IE . α may be derived from β, being α=β/(β+1) .
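For example, a β of 100 gives α = 100/(100+1) = 0.990, and a β of 20 gives α = 20/21 = 0.952. α is always slightly less than 1, since the collector current is the emitter current minus the small base current.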
Bipolar transistors come in a wide variety of physical packages. Package type is primarily dependent upon the required power dissipation of the transistor, much like resistors: the greater the maximum power dissipation, the larger the device has to be to stay cool. Figure below shows several standardized package types for three-terminal semiconductor devices, any of which may be used to house a bipolar transistor. There are many other semiconductor devices other than bipolar transistors which have three connection points. Note that the pin-outs of plastic transistors can vary within a single package type, e.g. TO-92 in Figure below. It is impossible to positively identify a three-terminal semiconductor device without referencing the part number printed on it, or subjecting it to a set of electrical tests.
Transistor packages, dimensions in mm.
Small plastic transistor packages like the TO-92 can dissipate a few hundred milliwatts. The metal cans, TO-18 and TO-39 can dissipate more power, several hundred milliwatts. Plastic power transistor packages like the TO-220 and TO-247 dissipate well over 100 watts, approaching the dissipation of the all metal TO-3. The dissipation ratings listed in Figure above are the maximum ever encountered by the author for high powered devices. Most power transistors are rated at half or less than the listed wattage. Consult specific device datasheets for actual ratings. The semiconductor die in the TO-220 and TO-247 plastic packages is mounted to a heat conductive metal slug which transfers heat from the back of the package to a metal heatsink, not shown. A thin coating of thermally conductive grease is applied to the metal before mounting the transistor to the heatsink. Since the TO-220 and TO-247 slugs, and the TO-3 case are connected to the collector, it is sometimes necessary to electrically isolate these from a grounded heatsink by an interposed mica or polymer washer. The datasheet ratings for the power packages are only valid when mounted to a heatsink. Without a heatsink, a TO-220 dissipates approximately 1 watt safely in free air.
Datasheet maximum power dissipation ratings are difficult to achieve in practice. The maximum power dissipation is based on a heatsink maintaining the transistor case at no more than 25oC. This is difficult with an air cooled heatsink. The allowable power dissipation decreases with increasing temperature. This is known as derating. Many power device datasheets include a dissipation versus case temperature graph.
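As an illustration of derating (hypothetical figures, not from any specific datasheet): a power transistor rated 75 W at a 25oC case temperature, with a derating slope of 0.6 W per oC, would be limited to 75 W − (100oC − 25oC)(0.6 W/oC) = 30 W at a 100oC case temperature. The actual rating and slope must be taken from the device's own derating graph.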
Review
• Power dissipation: maximum allowable power dissipation on a sustained basis.
• Reverse voltages: maximum allowable VCE , VCB , VEB .
• Collector current: the maximum allowable collector current.
• Saturation voltage is the VCE voltage drop in a saturated (fully conducting) transistor.
• Beta: β=IC/IB
• Alpha: α=IC/IE α= β/(β+1)
• Transistor packages are a major factor in power dissipation. Larger packages dissipate more power.
An ideal transistor would show 0% distortion in amplifying a signal. Its gain would extend to all frequencies. It would control hundreds of amperes of current, at hundreds of degrees C. In practice, available devices show distortion. Amplification is limited at the high frequency end of the spectrum. Real parts only handle tens of amperes with precautions. Care must be taken when paralleling transistors for higher current. Operation at elevated temperatures can destroy transistors if precautions are not taken.
Nonlinearity
The class A common-emitter amplifier (similar to Figure previous) is driven almost to clipping in Figure below. Note that the positive peak is flatter than the negative peaks. This distortion is unacceptable in many applications like high-fidelity audio.
Distortion in large signal common-emitter amplifier.
Small signal amplifiers are relatively linear because they use a small linear section of the transistor characteristics. Large signal amplifiers are not 100% linear because transistor characteristics like β are not constant, but vary with collector current. β is high at low collector current, and low at very low current or high current. Though, we primarily encounter decreasing β with increasing collector current.
SPICE net list: for transient and fourier analyses. Fourier analysis shows 10% total harmonic distortion (THD).
The SPICE listing in Table above illustrates how to quantify the amount of distortion. The “.fourier 2000 v(2)” command tells SPICE to perform a Fourier analysis at 2000 Hz on the output v(2). At the command line “spice -b circuitname.cir” produces the Fourier analysis output in Table above. It shows THD (total harmonic distortion) of over 10%, and the contribution of the individual harmonics.
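For readers who want to repeat this kind of measurement, a minimal netlist of the same general form is sketched below. It is not the author's original listing: the bias point, signal amplitude, and transistor model parameters are assumptions chosen only to illustrate the syntax. The essential ingredients are a sine source at the base, a .tran analysis spanning several signal cycles, and a .fourier statement at the signal frequency naming the collector node.

common-emitter distortion demo (illustrative values only)
* a small 2 kHz sine riding on a DC bias drives the base directly
vsig 1 0 sin(0.73 0.04 2000)
q1 2 1 0 mod1
* collector load resistor and supply
rc 3 2 1k
vcc 3 0 dc 10
.model mod1 npn (bf=100)
.tran 10u 2m
.fourier 2000 v(2)
.end

Running this with “spice -b filename.cir”, or loading it into any SPICE-compatible simulator, produces a Fourier table of the kind described; the exact THD figure will depend on the bias and amplitude chosen.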
A partial solution to this distortion is to decrease the collector current or operate the amplifier over a smaller portion of the load line. The ultimate solution is to apply negative feedback. See Feedback.
Temperature drift
Temperature affects the AC and DC characteristics of transistors. The two aspects to this problem are environmental temperature variation and self-heating. Some applications, like military and automotive, require operation over an extended temperature range. Circuits in a benign environment are subject to self-heating, in particular high power circuits.
Leakage current ICO and β increase with temperature. The DC β hFE increases exponentially. The AC β hfe increases, but not as rapidly. It doubles over the range of -55o to 85o C. As temperature increases, the increase in hfe will yield a larger common-emitter output, which could be clipped in extreme cases. The increase in hFE shifts the bias point, possibly clipping one peak. The shift in bias point is amplified in multi-stage direct-coupled amplifiers. The solution is some form of negative feedback to stabilize the bias point. This also stabilizes AC gain.
Increasing temperature in Figure below (a) will decrease VBE from the nominal 0.7V for silicon transistors. Decreasing VBE increases collector current in a common-emitter amplifier, further shifting the bias point. The cure for shifting VBE is a pair of transistors configured as a differential amplifier. If both transistors in Figure below (b) are at the same temperature, the VBE will track with changing temperature and cancel.
(a) single ended CE amplifier vs (b) differential amplifier with VBE cancellation.
The maximum recommended junction temperature for silicon devices is frequently 125o C. Though, this should be derated for higher reliability. Transistor action ceases beyond 150o C. Silicon carbide and diamond transistors will operate considerably higher.
Thermal runaway
The problem with increasing temperature causing increasing collector current is that more current increases the power dissipated by the transistor which, in turn, increases its temperature. This self-reinforcing cycle is known as thermal runaway, which may destroy the transistor. Again, the solution is a bias scheme with some form of negative feedback to stabilize the bias point.
Junction capacitance
Capacitance exists between the terminals of a transistor. The collector-base capacitance CCB and emitter-base capacitance CEB decrease the gain of a common emitter circuit at higher frequencies.
In a common emitter amplifier, the capacitive feedback from collector to base effectively multiplies CCB by β. The amount of negative gain-reducing feedback is related to both current gain, and amount of collector-base capacitance. This is known as the Miller effect.
Noise
The ultimate sensitivity of small signal amplifiers is limited by noise due to random variations in current flow. The two major sources of noise in transistors are shot noise due to current flow of carriers in the base and thermal noise. The source of thermal noise is device resistance and increases with temperature:
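The conventional expression for the thermal (Johnson) noise voltage of a resistance R over a bandwidth Δf at absolute temperature T is Vnoise = √(4kTRΔf), where k is Boltzmann's constant, 1.38×10⁻²³ joules per kelvin. Both higher resistance and higher temperature raise the noise floor.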
Noise in a transistor amplifier is defined in terms of excess noise generated by the amplifier, not that noise amplified from input to output, but that generated within the amplifier. This is determined by measuring the signal to noise ratio (S/N) at the amplifier input and output. The AC voltage output of an amplifier with a small signal input corresponds to S+N, signal plus noise. The AC voltage with no signal in corresponds to noise N. The noise figure F is defined in terms of S/N of amplifier input and output:
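In conventional form, F = (S/N at the input) divided by (S/N at the output), and the decibel figure quoted on datasheets is FdB = 10·log10(F). A noiseless amplifier would have F = 1 (0 dB); any real amplifier degrades the signal-to-noise ratio, so F is always greater than 1.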
The noise figure F for RF (radio frequency) transistors is usually listed on transistor data sheets in decibels, FdB. A good VHF (very high frequency, 30 MHz to 300 MHz) noise figure is < 1 dB. The noise figure above VHF increases considerably, 20 dB per decade, as shown in Figure below.
Small signal transistor noise figure vs Frequency. After Thiele, Figure 11.147 [AGT]
Figure above also shows that noise at low frequencies increases at 10 dB per decade with decreasing frequency. This noise is known as 1/f noise.
Noise figure varies with the transistor type (part number). Small signal RF transistors used at the antenna input of a radio receiver are specifically designed for low noise figure. Noise figure varies with bias current and impedance matching. The best noise figure for a transistor is achieved at lower bias current, and possibly with an impedance mismatch.
Thermal mismatch (problem with paralleling transistors)
If two identical power transistors were paralleled for higher current, one would expect them to share current equally. Because of differences in characteristics, transistors do not share current equally.
Transistors paralleled for increased power require emitter ballast resistors
It is not practical to select identical transistors. The β for small signal transistors typically has a range of 100-300, power transistors: 20-50. If each one could be matched, one still might run hotter than the other due to environmental conditions. The hotter transistor draws more current resulting in thermal runaway. The solution when paralleling bipolar transistors is to insert emitter resistors known as ballast resistors of less than an ohm. If the hotter transistor draws more current, the voltage drop across the ballast resistor increases— negative feedback. This decreases the current. Mounting all transistors on the same heatsink helps equalize current too.
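A rough illustration with assumed values: with 0.5 Ω ballast resistors, a transistor that tries to hog an extra ampere develops an extra (1 A)(0.5 Ω) = 0.5 V across its ballast resistor. That voltage subtracts directly from its base-emitter voltage, and since a VBE change of only a few tens of millivolts corresponds to a large change in collector current, the imbalance is strongly opposed.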
High frequency effects
The performance of a transistor amplifier is relatively constant, up to a point, as shown by the small signal common-emitter current gain with increasing frequency in Figure below. Beyond that point the performance of a transistor degrades as frequency increases.
Beta cutoff frequency, fT is the frequency at which common-emitter small signal current gain (hfe) falls to unity. (Figure below) A practical amplifier must have a gain >1. Thus, a transistor cannot be used in a practical amplifier at fT. A more usable limit for a transistor is 0.1·fT.
Common-emitter small signal current gain (hfe) vs frequency.
Some RF silicon bipolar transistors are usable as amplifiers up to a few GHz. Silicon-germanium devices extend the upper range to 10 GHz.
Alpha cutoff frequency, falpha , is the frequency at which α falls to 0.707 of the low frequency value α0 , that is, α = 0.707·α0 . Alpha cutoff and beta cutoff are nearly equal: falpha ≅ fT . Beta cutoff fT is the preferred figure of merit of high frequency performance.
fmax is the highest frequency of oscillation possible under the most favorable conditions of bias and impedance matching. It is the frequency at which the power gain is unity. All of the output is fed back to the input to sustain oscillations. fmax is an upper limit for frequency of operation of a transistor as an active device. Though, a practical amplifier would not be usable at fmax.
Miller effect: The high frequency limit for a transistor is related to the junction capacitances. For example a PN2222A has an output capacitance Cobo=9pF and an input capacitance Cibo=25pF, measured at the C-B and E-B junctions respectively. [FAR] Although the E-B capacitance of 25 pF seems large, it is less of a factor than the C-B (9pF) capacitance because of the Miller effect: the C-B capacitance has an effect on the base equivalent to beta times the capacitance in the common-emitter amplifier. Why might this be? A common-emitter amplifier inverts the signal from base to collector. The inverted collector signal fed back to the base opposes the input on the base. The collector signal is beta times larger than the input. For the PN2222A, β=50–300. Thus, the 9pF C-B capacitance looks like 9·50=450pF to 9·300=2700pF.
The solution to the junction capacitance problem is to select a high frequency transistor for wide bandwidth applications— RF (radio frequency) or microwave transistor. The bandwidth can be extended further by using the common-base instead of the common-emitter configuration. The grounded base shields the emitter input from capacitive collector feedback. A two-transistor cascode arrangement will yield the same bandwidth as the common-base, with the higher input impedance of the common-emitter.
Review
• Transistor amplifiers exhibit distortion because of β variation with collector current.
• Ic, VBE, β and junction capacitance vary with temperature.
• An increase in temperature can cause an increase in IC, causing an increase in temperature, a vicious cycle known as thermal runaway.
• Junction capacitance limits high frequency gain of a transistor. The Miller effect makes Ccb look β times larger at the base of a CE amplifier.
• Transistor noise limits the ability to amplify small signals. Noise figure is a figure of merit concerning transistor noise.
• When paralleling power transistors for increased current, insert ballast resistors in series with the emitters to equalize current.
• fT is the absolute upper frequency limit for a CE amplifier, the frequency at which small signal current gain falls to unity, hfe=1.
• fmax is the upper frequency limit for an oscillator under the most ideal conditions.
A transistor is a linear semiconductor device that controls current with the application of a lower-power electrical signal. Transistors may be roughly grouped into two major divisions: bipolar and field-effect. In the last chapter, we studied bipolar transistors, which utilize a small current to control a large current. In this chapter, we’ll introduce the general concept of the field-effect transistor—a device utilizing a small voltage to control current—and then focus on one particular type: the junction field-effect transistor. In the next chapter, we’ll explore another type of field-effect transistor, the insulated gate variety.
All field-effect transistors are unipolar rather than bipolar devices. That is, the main current through them is comprised either of electrons through an N-type semiconductor or holes through a P-type semiconductor. This becomes more evident when a physical diagram of the device is seen:
In a junction field-effect transistor or JFET, the controlled current passes from source to drain, or from drain to source as the case may be. The controlling voltage is applied between the gate and source. Note how the current does not have to cross through a PN junction on its way between source and drain: the path (called a channel) is an uninterrupted block of semiconductor material. In the image just shown, this channel is an N-type semiconductor. P-type channel JFETs are also manufactured:
Generally, N-channel JFETs are more commonly used than P-channel. The reasons for this have to do with obscure details of semiconductor theory, which I’d rather not discuss in this chapter. As with bipolar transistors, I believe the best way to introduce field-effect transistor usage is to avoid theory whenever possible and concentrate instead on operational characteristics. The only practical difference between N- and P-channel JFETs you need to concern yourself with now is biasing of the PN junction formed between the gate material and the channel.
With no voltage applied between gate and source, the channel is a wide-open path for electrons to flow. However, if a voltage is applied between gate and source of such polarity that it reverse-biases the PN junction, the flow between source and drain connections becomes limited or regulated, just as it was for bipolar transistors with a set amount of base current. Maximum gate-source voltage “pinches off” all current through source and drain, thus forcing the JFET into cutoff mode. This behavior is due to the depletion region of the PN junction expanding under the influence of a reverse-bias voltage, eventually occupying the entire width of the channel if the voltage is great enough. This action may be likened to reducing the flow of a liquid through a flexible hose by squeezing it: with enough force, the hose will be constricted enough to completely block the flow.
Note how this operational behavior is exactly opposite of the bipolar junction transistor. Bipolar transistors are normally-off devices: no current through the base, no current through the collector or the emitter. JFETs, on the other hand, are normally-on devices: no voltage applied to the gate allows maximum current through the source and drain. Also, take note that the amount of current allowed through a JFET is determined by a voltage signal rather than a current signal as with bipolar transistors. In fact, with the gate-source PN junction reverse-biased, there should be nearly zero current through the gate connection. For this reason, we classify the JFET as a voltage-controlled device and the bipolar transistor as a current-controlled device.
If the gate-source PN junction is forward-biased with a small voltage, the JFET channel will “open” a little more to allow greater currents through. However, the PN junction of a JFET is not built to handle any substantial current itself, and thus it is not recommended to forward-bias the junction under any circumstances.
This is a very condensed overview of JFET operation. In the next section, we’ll explore the use of the JFET as a switching device.
Like its bipolar cousin, the field-effect transistor may be used as an on/off switch controlling electrical power to a load. Let’s begin our investigation of the JFET as a switch with our familiar switch/lamp circuit:
Remembering that the controlled current in a JFET flows between source and drain, we substitute the source and drain connections of a JFET for the two ends of the switch in the above circuit:
If you haven’t noticed by now, the source and drain connections on a JFET look identical on the schematic symbol. Unlike the bipolar junction transistor where the emitter is clearly distinguished from the collector by the arrowhead, a JFET’s source and drain lines both run perpendicular into the bar representing the semiconductor channel. This is no accident, as the source and drain lines of a JFET are often interchangeable in practice! In other words, JFETs are usually able to handle channel current in either direction, from source to drain or from drain to source.
Now, all we need in the circuit is a way to control the JFET’s conduction. With zero applied voltage between gate and source, the JFET’s channel will be “open,” allowing full current to the lamp. In order to turn the lamp off, we will need to connect another source of DC voltage between the gate and source connections of the JFET like this:
Closing this switch will “pinch off” the JFET’s channel, thus forcing it into cutoff and turning the lamp off:
Note that there is no current going through the gate. As a reverse-biased PN junction, it firmly opposes the flow of any electrons through it. As a voltage-controlled device, the JFET requires negligible input current. This is an advantageous trait of the JFET over the bipolar transistor: there is virtually zero power required of the controlling signal.
Opening the control switch again should disconnect the reverse-biasing DC voltage from the gate, thus allowing the transistor to turn back on. Ideally, anyway, this is how it works. In practice this may not work at all:
Why is this? Why doesn’t the JFET’s channel open up again and allow lamp current through like it did before with no voltage applied between gate and source? The answer lies in the operation of the reverse-biased gate-source junction. The depletion region within that junction acts as an insulating barrier separating gate from source. As such, it possesses a certain amount of capacitance capable of storing an electric charge potential. After this junction has been forcibly reverse-biased by the application of an external voltage, it will tend to hold that reverse-biasing voltage as a stored charge even after the source of that voltage has been disconnected. What is needed to turn the JFET on again is to bleed off that stored charge between the gate and source through a resistor:
This resistor’s value is not very important. The capacitance of the JFET’s gate-source junction is very small, and so even a rather high-value bleed resistor creates a fast RC time constant, allowing the transistor to resume conduction with little delay once the switch is opened.
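For a sense of scale, with assumed rather than measured values: a gate-source capacitance on the order of 5 pF and a 1 MΩ bleed resistor give a time constant of τ = RC = (1 MΩ)(5 pF) = 5 µs, so the stored charge bleeds away within a few tens of microseconds, imperceptible in a switching circuit like this one.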
Like the bipolar transistor, it matters little where or what the controlling voltage comes from. We could use a solar cell, thermocouple, or any other sort of voltage-generating device to supply the voltage controlling the JFET’s conduction. All that is required of a voltage source for JFET switch operation is sufficient voltage to achieve pinch-off of the JFET channel. This level is usually in the realm of a few volts DC, and is termed the pinch-off or cutoff voltage. The exact pinch-off voltage for any given JFET is a function of its unique design, and is not a universal figure like 0.7 volts is for a silicon BJT’s base-emitter junction voltage.
Review
• Field-effect transistors control the current between source and drain connections by a voltage applied between the gate and source. In a junction field-effect transistor (JFET), there is a PN junction between the gate and source which is normally reverse-biased for control of source-drain current.
• JFETs are normally-on (normally-saturated) devices. The application of a reverse-biasing voltage between gate and source causes the depletion region of that junction to expand, thereby “pinching off” the channel between source and drain through which the controlled current travels.
• It may be necessary to attach a “bleed-off” resistor between gate and source to discharge the stored charge built up across the junction’s natural capacitance when the controlling voltage is removed. Otherwise, a charge may remain to keep the JFET in cutoff mode even after the voltage source has been disconnected.
5.03: Meter Check of a Transistor (JFET)
Testing a JFET with a multimeter might seem to be a relatively easy task, seeing as how it has only one PN junction to test: either measured between gate and source, or between gate and drain.
Testing continuity through the drain-source channel is another matter, though. Remember from the last section how a stored charge across the capacitance of the gate-channel PN junction could hold the JFET in a pinched-off state without any external voltage being applied across it? This can occur even when you’re holding the JFET in your hand to test it! Consequently, any meter reading of continuity through that channel will be unpredictable, since you don’t necessarily know if a charge is being stored by the gate-channel junction. Of course, if you know beforehand which terminals on the device are the gate, source, and drain, you may connect a jumper wire between gate and source to eliminate any stored charge and then proceed to test source-drain continuity with no problem. However, if you don’t know which terminals are which, the unpredictability of the source-drain connection may confuse your determination of terminal identity.
A good strategy to follow when testing a JFET is to insert the pins of the transistor into anti-static foam (the material used to ship and store static-sensitive electronic components) just prior to testing. The conductivity of the foam will make a resistive connection between all terminals of the transistor when it is inserted. This connection will ensure that all residual voltage built up across the gate-channel PN junction will be neutralized, thus “opening up” the channel for an accurate meter test of source-to-drain continuity.
Since the JFET channel is a single, uninterrupted piece of semiconductor material, there is usually no difference between the source and drain terminals. A resistance check from source to drain should yield the same value as a check from drain to source. This resistance should be relatively low (a few hundred ohms at most) when the gate-source PN junction voltage is zero. By applying a reverse-bias voltage between gate and source, pinch-off of the channel should be apparent by an increased resistance reading on the meter.
JFETs, like bipolar transistors, are able to “throttle” current in a mode between cutoff and saturation called the active mode. To better understand JFET operation, let’s set up a SPICE simulation similar to the one used to explore basic bipolar transistor function:
Note that the transistor labeled “Q1” in the schematic is represented in the SPICE netlist as j1. Although all transistor types are commonly referred to as “Q” devices in circuit schematics—just as resistors are referred to by “R” designations, and capacitors by “C”—SPICE needs to be told what type of transistor this is by means of a different letter designation: q for bipolar junction transistors, and j for junction field-effect transistors.
Here, the controlling signal is a steady voltage of 1 volt, applied with negative towards the JFET gate and positive toward the JFET source, to reverse-bias the PN junction. In the first BJT simulation of chapter 4, a constant-current source of 20 µA was used for the controlling signal, but remember that a JFET is a voltage-controlled device, not a current-controlled device like the bipolar junction transistor.
Like the BJT, the JFET tends to regulate the controlled current at a fixed level above a certain power supply voltage, no matter how high that voltage may climb. Of course, this current regulation has limits in real life—no transistor can withstand infinite voltage from a power source—and with enough drain-to-source voltage the transistor will “break down” and drain current will surge. But within normal operating limits the JFET keeps the drain current at a steady level independent of power supply voltage. To verify this, we’ll run another computer simulation, this time sweeping the power supply voltage (V1) all the way to 50 volts:
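A netlist of the following general form performs such a sweep. It is a sketch rather than a copy of the author's listing, and it leaves the JFET model at SPICE's default parameters; with those defaults the regulated current should work out near the 100 µA figure noted below, but the exact value is not the point, the flatness of the curve is.

jfet current regulation sketch (default model parameters)
vin 0 1 dc 1
j1 2 1 0 mod1
* zero-volt source used as an ammeter in the drain
vammeter 3 2 dc 0
v1 3 0 dc 0
.model mod1 njf
.dc v1 0 50 2
.plot dc i(vammeter)
.end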
Sure enough, the drain current remains steady at a value of 100 µA (1.000E-04 amps) no matter how high the power supply voltage is adjusted.
Because the input voltage has control over the constriction of the JFET’s channel, it makes sense that changing this voltage should be the only action capable of altering the current regulation point for the JFET, just like changing the base current on a BJT is the only action capable of altering collector current regulation. Let’s decrease the input voltage from 1 volt to 0.5 volts and see what happens:
As expected, the drain current is greater now than it was in the previous simulation. With less reverse-bias voltage impressed across the gate-source junction, the depletion region is not as wide as it was before, thus “opening” the channel for charge carriers and increasing the drain current figure.
Please note, however, the actual value of this new current figure: 225 µA (2.250E-04 amps). The last simulation showed a drain current of 100 µA, and that was with a gate-source voltage of 1 volt. Now that we’ve reduced the controlling voltage by a factor of 2 (from 1 volt down to 0.5 volts), the drain current increased, but not by the same 2:1 proportion! Let’s reduce our gate-source voltage once more by another factor of 2 (down to 0.25 volts) and see what happens:
With the gate-source voltage set to 0.25 volts, one-half what it was before, the drain current is 306.3 µA. Although this is still an increase over the 225 µA from the prior simulation, it isn’t proportional to the change of the controlling voltage.
To obtain a better understanding of what is going on here, we should run a different kind of simulation: one that keeps the power supply voltage constant and instead varies the controlling (voltage) signal. When this kind of simulation was run on a BJT, the result was a straight-line graph, showing how the input current / output current relationship of a BJT is linear. Let’s see what kind of relationship a JFET exhibits:
This simulation directly reveals an important characteristic of the junction field-effect transistor: the control effect of gate voltage over drain current is nonlinear. Notice how the drain current does not decrease linearly as the gate-source voltage is increased. With the bipolar junction transistor, collector current was directly proportional to base current: output signal proportionately followed input signal. Not so with the JFET! The controlling signal (gate-source voltage) has less and less effect over the drain current as it approaches cutoff. In this simulation, most of the controlling action (75 percent of drain current decrease—from 400 µA to 100 µA) takes place within the first volt of gate-source voltage (from 0 to 1 volt), while the remaining 25 percent of drain current reduction takes another whole volt worth of input signal. Cutoff occurs at 2 volts input.
Linearity is generally important for a transistor because it allows it to faithfully amplify a waveform without distorting it. If a transistor is nonlinear in its input/output amplification, the shape of the input waveform will become corrupted in some way, leading to the production of harmonics in the output signal. The only time linearity is not important in a transistor circuit is when it is being operated at the extreme limits of cutoff and saturation (off and on, respectively, like a switch).
A JFET’s characteristic curves display the same current-regulating behavior as for a BJT, and the nonlinearity between gate-to-source voltage and drain current is evident in the disproportionate vertical spacings between the curves:
To better comprehend the current-regulating behavior of the JFET, it might be helpful to draw a model made up of simpler, more common components, just as we did for the BJT:
In the case of the JFET, it is the voltage across the reverse-biased gate-source diode which sets the current regulation point for the pair of constant-current diodes. A pair of opposing constant-current diodes is included in the model to facilitate current in either direction between source and drain, a trait made possible by the unipolar nature of the channel. With no PN junctions for the source-drain current to traverse, there is no polarity sensitivity in the controlled current. For this reason, JFETs are often referred to as bilateral devices.
A contrast of the JFET’s characteristic curves against the curves for a bipolar transistor reveals a notable difference: the linear (straight) portion of each curve’s non-horizontal area is surprisingly long compared to the respective portions of a BJT’s characteristic curves:
A JFET transistor operated in the triode region tends to act very much like a plain resistor as measured from drain to source. Like all simple resistances, its current/voltage graph is a straight line. For this reason, the triode region (non-horizontal) portion of a JFET’s characteristic curve is sometimes referred to as the ohmic region. In this mode of operation where there isn’t enough drain-to-source voltage to bring drain current up to the regulated point, the drain current is directly proportional to the drain-to-source voltage. In a carefully designed circuit, this phenomenon can be used to an advantage. Operated in this region of the curve, the JFET acts like a voltage-controlled resistance rather than a voltage-controlled current regulator, and the appropriate model for the transistor is different:
Here and here alone the rheostat (variable resistor) model of a transistor is accurate. It must be remembered, however, that this model of the transistor holds true only for a narrow range of its operation: when it is extremely saturated (far less voltage applied between drain and source than what is needed to achieve full regulated current through the drain). The amount of resistance (measured in ohms) between drain and source in this mode is controlled by how much reverse-bias voltage is applied between gate and source. The less gate-to-source voltage, the less resistance (steeper line on graph).
Because JFETs are voltage-controlled current regulators (at least when they’re allowed to operate in their active mode), their inherent amplification factor cannot be expressed as a unitless ratio as with BJTs. In other words, there is no β ratio for a JFET. This is true for all voltage-controlled active devices, including other types of field-effect transistors and even electron tubes. There is, however, an expression of controlled (drain) current to controlling (gate-source) voltage, and it is called transconductance. Its unit is Siemens, the same unit for conductance (formerly known as the mho).
Why this choice of units? Because the equation takes on the general form of current (output signal) divided by voltage (input signal).
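Written out in the notation most datasheets use (supplied here since only the general form is described above), forward transconductance is the ratio of a small change in drain current to the change in gate-to-source voltage that caused it:

gfs = ΔID / ΔVGS     (in siemens, amps per volt)

For example, if a 0.1 volt change in VGS produces a 0.5 mA change in drain current, the transconductance at that operating point is 0.5 mA / 0.1 V = 5 millisiemens.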
Unfortunately, the transconductance value for any JFET is not a stable quantity: it varies significantly with the amount of gate-to-source control voltage applied to the transistor. As we saw in the SPICE simulations, the drain current does not change proportionally with changes in gate-source voltage. To calculate drain current for any given gate-source voltage, there is another equation that may be used. It is obviously nonlinear upon inspection (note the power of 2), reflecting the nonlinear behavior we’ve already experienced in simulation:
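The equation referred to is the standard square-law approximation for a JFET in its active (current-regulating) region, given here in its commonly published form; it is an idealization, and individual datasheets supply the parameters:

ID = IDSS · (1 − VGS/VP)²

where IDSS is the drain current with the gate shorted to the source and VP is the pinch-off (cutoff) voltage discussed earlier. The squared term is the “power of 2” responsible for the curvature seen in the simulations.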
Review
• In their active modes, JFETs regulate drain current according to the amount of reverse-bias voltage applied between gate and source, much like a BJT regulates collector current according to base current. The mathematical ratio between drain current (output) and gate-to-source voltage (input) is called transconductance, and it is measured in units of Siemens.
• The relationship between gate-source (control) voltage and drain (controlled) current is nonlinear: as gate-source voltage is decreased, drain current increases in a square-law rather than proportional fashion. That is to say, the transconductance of a JFET is not constant over its range of operation.
• In their triode region, JFETs regulate drain-to-source resistance according to the amount of reverse-bias voltage applied between gate and source. In other words, they act like voltage-controlled resistances.
As was stated in the last chapter, there is more than one type of field-effect transistor. The junction field-effect transistor, or JFET, uses voltage applied across a reverse-biased PN junction to control the width of that junction’s depletion region, which then controls the conductivity of a semiconductor channel through which the controlled current moves. Another type of field-effect device—the insulated gate field-effect transistor, or IGFET—exploits a similar principle of a depletion region controlling conductivity through a semiconductor channel, but it differs primarily from the JFET in that there is no direct connection between the gate lead and the semiconductor material itself. Rather, the gate lead is insulated from the transistor body by a thin barrier, hence the term insulated gate. This insulating barrier acts like the dielectric layer of a capacitor and allows gate-to-source voltage to influence the depletion region electrostatically rather than by direct connection.
In addition to a choice of N-channel versus P-channel design, IGFETs come in two major types: enhancement and depletion. The depletion type is more closely related to the JFET, so we will begin our study of IGFETs with it.
6.02: Depletion-type IGFETs
Insulated gate field-effect transistors are unipolar devices just like JFETs: that is, the controlled current does not have to cross a PN junction. There is a PN junction inside the transistor, but its only purpose is to provide that nonconducting depletion region which is used to restrict current through the channel.
Here is a diagram of an N-channel IGFET of the “depletion” type:
Notice how the source and drain leads connect to either end of the N channel, and how the gate lead attaches to a metal plate separated from the channel by a thin insulating barrier. That barrier is sometimes made from silicon dioxide (the primary chemical compound found in sand), which is a very good insulator. Due to this Metal (gate) - Oxide (barrier) - Semiconductor (channel) construction, the IGFET is sometimes referred to as a MOSFET. There are other types of IGFET construction, though, and so “IGFET” is the better descriptor for this general class of transistors.
Notice also how there are four connections to the IGFET. In practice, the substrate lead is directly connected to the source lead to make the two electrically common. Usually, this connection is made internally to the IGFET, eliminating the separate substrate connection, resulting in a three-terminal device with a slightly different schematic symbol:
With source and substrate common to each other, the N and P layers of the IGFET end up being directly connected to each other through the outside wire. This connection prevents any voltage from being impressed across the PN junction. As a result, a depletion region exists between the two materials, but it can never be expanded or collapsed. JFET operation is based on the expansion of the PN junction’s depletion region, but here in the IGFET that cannot happen, so IGFET operation must be based on a different effect.
Indeed it is, for when a controlling voltage is applied between gate and source, the conductivity of the channel is changed as a result of the depletion region moving closer to or further away from the gate. In other words, the channel’s effective width changes just as with the JFET, but this change in channel width is due to depletion region displacement rather than depletion region expansion.
In an N-channel IGFET, a controlling voltage applied positive (+) to the gate and negative (-) to the source has the effect of repelling the PN junction’s depletion region, expanding the N-type channel and increasing conductivity:
Reversing the controlling voltage’s polarity has the opposite effect, attracting the depletion region and narrowing the channel, consequently reducing channel conductivity:
The insulated gate allows for controlling voltages of any polarity without danger of forward-biasing a junction, as was the concern with JFETs. This type of IGFET, although it is called a “depletion-type,” actually has the capability of having its channel either depleted (channel narrowed) or enhanced (channel expanded). Input voltage polarity determines which way the channel will be influenced.
Understanding which polarity has which effect is not as difficult as it may seem. The key is to consider the type of semiconductor doping used in the channel (N-channel or P-channel?), then relate that doping type to the side of the input voltage source connected to the channel by means of the source lead. If the IGFET is an N-channel and the input voltage is connected so that the positive (+) side is on the gate while the negative (-) side is on the source, the channel will be enhanced as extra electrons build up on the channel side of the dielectric barrier. Think, “negative (-) correlates with N-type, thus enhancing the channel with the right type of charge carrier (electrons) and making it more conductive.” Conversely, if the input voltage is connected to an N-channel IGFET the other way, so that negative (-) connects to the gate while positive (+) connects to the source, free electrons will be “robbed” from the channel as the gate-channel capacitor charges, thus depleting the channel of majority charge carriers and making it less conductive.
For P-channel IGFETs, the input voltage polarity and channel effects follow the same rule. That is to say, it takes just the opposite polarity as an N-channel IGFET to either deplete or enhance:
Illustrating the proper biasing polarities with standard IGFET symbols:
When there is zero voltage applied between gate and source, the IGFET will conduct current between source and drain, but not as much current as it would if it were enhanced by the proper gate voltage. This places the depletion-type, or simply D-type, IGFET in a category of its own in the transistor world. Bipolar junction transistors are normally-off devices: with no base current, they block any current from going through the collector. Junction field-effect transistors are normally-on devices: with zero applied gate-to-source voltage, they allow maximum drain current (actually, you can coax a JFET into greater drain currents by applying a very small forward-bias voltage between gate and source, but this should never be done in practice for risk of damaging its fragile PN junction). D-type IGFETs, however, are normally half-on devices: with no gate-to-source voltage, their conduction level is somewhere between cutoff and full saturation. Also, they will tolerate applied gate-source voltages of any polarity, the PN junction being immune from damage due to the insulating barrier and especially the direct connection between source and substrate preventing any voltage differential across the junction.
Ironically, the conduction behavior of a D-type IGFET is strikingly similar to that of an electron tube of the triode/tetrode/pentode variety. These devices were voltage-controlled current regulators that likewise allowed current through them with zero controlling voltage applied. A controlling voltage of one polarity (grid negative and cathode positive) would diminish conductivity through the tube while a voltage of the other polarity (grid positive and cathode negative) would enhance conductivity. I find it curious that one of the later transistor designs invented exhibits the same basic properties of the very first active (electronic) device.
A few SPICE analyses will demonstrate the current-regulating behavior of D-type IGFETs. First, a test with zero input voltage (gate shorted to source) and the power supply swept from 0 to 50 volts. The graph shows drain current:
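A sketch of the sort of netlist involved is shown below. The vto=-1 figure in the .model line is an assumption picked so that the zero-bias drain current lands near the value quoted in the text; the author's own listing may use different parameters. Changing the value of vin to -0.5 or +0.5 repeats the depleted and enhanced runs described next.

depletion-type igfet sketch (illustrative parameters)
vin 1 0 dc 0
* drain, gate, source, substrate -- substrate tied to source
m1 2 1 0 0 mod1
vammeter 3 2 dc 0
v1 3 0 dc 0
.model mod1 nmos (vto=-1)
.dc v1 0 50 2
.plot dc i(vammeter)
.end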
As expected for any transistor, the controlled current holds steady at a regulated value over a wide range of power supply voltages. In this case, that regulated point is 10 µA (1.000E-05). Now let’s see what happens when we apply a negative voltage to the gate (with reference to the source) and sweep the power supply over the same range of 0 to 50 volts:
Not surprisingly, the drain current is now regulated at a lower value of 2.5 µA (down from 10 µA with zero input voltage). Now let’s apply an input voltage of the other polarity, to enhance the IGFET:
With the transistor enhanced by the small controlling voltage, the drain current is now at an increased value of 22.5 µA (2.250E-05). It should be apparent from these three sets of voltage and current figures that the relationship of drain current to gate-source voltage is nonlinear just as it was with the JFET. With 1/2 volt of depleting voltage, the drain current is 2.5 µA; with 0 volts input the drain current goes up to 10 µA; and with 1/2 volt of enhancing voltage, the current is at 22.5 µA. To obtain a better understanding of this nonlinearity, we can use SPICE to plot the drain current over a range of input voltage values, sweeping from a negative (depleting) figure to a positive (enhancing) figure, maintaining the power supply voltage of V1 at a constant value:
Just as it was with JFETs, this inherent nonlinearity of the IGFET has the potential to cause distortion in an amplifier circuit, as the input signal will not be reproduced with 100 percent accuracy at the output. Also notice that a gate-source voltage of about 1 volt in the depleting direction is able to pinch off the channel so that there is virtually no drain current. D-type IGFETs, like JFETs, have a certain pinch-off voltage rating. This rating varies with the unique design of the particular transistor, and may not be the same as in our simulation here.
Plotting a set of characteristic curves for the IGFET, we see a pattern not unlike that of the JFET:
Because of their insulated gates, IGFETs of all types have extremely high current gain: there can be no sustained gate current if there is no continuous gate circuit in which electrons may continually flow. The only current we see through the gate terminal of an IGFET, then, is whatever transient (brief surge) may be required to charge the gate-channel capacitance and displace the depletion region as the transistor switches from an “on” state to an “off” state, or vice versa.
This high current gain would at first seem to place IGFET technology at a decided advantage over bipolar transistors for the control of very large currents. If a bipolar junction transistor is used to control a large collector current, there must be a substantial base current sourced or sunk by some control circuitry, in accordance with the β ratio. To give an example, in order for a power BJT with a β of 20 to conduct a collector current of 100 amps, there must be at least 5 amps of base current, a substantial amount of current in itself for miniature discrete or integrated control circuitry to handle:
It would be nice from the standpoint of control circuitry to have power transistors with high current gain, so that far less current is needed for control of load current. Of course, we can use Darlington pair transistors to increase the current gain, but this kind of arrangement still requires far more controlling current than an equivalent power IGFET:
Unfortunately, though, IGFETs have problems of their own controlling high current: they typically exhibit greater drain-to-source voltage drop while saturated than the collector-to-emitter voltage drop of a saturated BJT. This greater voltage drop equates to higher power dissipation for the same amount of load current, limiting the usefulness of IGFETs as high-power devices. Although some specialized designs such as the so-called VMOS transistor have been designed to minimize this inherent disadvantage, the bipolar junction transistor is still superior in its ability to switch high currents.
An interesting solution to this dilemma leverages the best features of IGFETs with the best features of BJTs, in one device called an Insulated-Gate Bipolar Transistor, or IGBT. Also known as a Bipolar-mode MOSFET, a Conductivity-Modulated Field-Effect Transistor (COMFET), or simply as an Insulated-Gate Transistor (IGT), it is equivalent to a Darlington pair of IGFET and BJT:
In essence, the IGFET controls the base current of a BJT, which handles the main load current between collector and emitter. This way, there is extremely high current gain (since the insulated gate of the IGFET draws practically no current from the control circuitry), but the collector-to-emitter voltage drop during full conduction is as low as that of an ordinary BJT.
One disadvantage of the IGBT over a standard BJT is its slower turn-off time. For fast switching and high current-handling capacity, it is difficult to beat the bipolar junction transistor. Faster turn-off times for the IGBT may be achieved by certain changes in design, but only at the expense of a higher saturated voltage drop between collector and emitter. However, the IGBT provides a good alternative to IGFETs and BJTs for high-power control applications.
Thyristors are a class of semiconductor components exhibiting hysteresis, that property whereby a system fails to return to its original state after some cause of state change has been removed. A very simple example of hysteresis is the mechanical action of a toggle switch: when the lever is pushed, it flips to one of two extreme states (positions) and will remain there even after the source of motion is removed (after you remove your hand from the switch lever). To illustrate the absence of hysteresis, consider the action of a “momentary” pushbutton switch, which returns to its original state after the button is no longer pressed: when the stimulus is removed (your hand), the system (switch) immediately and fully returns to its prior state with no “latching” behavior.
Bipolar, junction field-effect, and insulated gate field-effect transistors are all non-hysteretic devices. That is, these do not inherently “latch” into a state after being stimulated by a voltage or current signal. For any given input signal at any given time, a transistor will exhibit a predictable output response as defined by its characteristic curve. Thyristors, on the other hand, are semiconductor devices that tend to stay “on” once turned on, and tend to stay “off” once turned off. A momentary event is able to flip these devices into either their on or off states, where they will remain on their own, even after the cause of the state change is taken away. As such, these are useful only as on/off switching devices—much like a toggle switch—and cannot be used as analog signal amplifiers.
Thyristors are constructed using the same technology as bipolar junction transistors, and in fact may be analyzed as circuits comprised of transistor pairs. How, then, can a hysteretic device (a thyristor) be made from non-hysteretic devices (transistors)? The answer to this question is positive feedback, also known as regenerative feedback. As you should recall, feedback is the condition where a percentage of the output signal is “fed back” to the input of an amplifying device. Negative, or degenerative, feedback results in a diminishing of voltage gain with increases in stability, linearity, and bandwidth. Positive feedback, on the other hand, results in a kind of instability where the amplifier’s output tends to “saturate.” In the case of thyristors, this saturating tendency equates to the device “wanting” to stay on once turned on, and off once turned off.
In this chapter we will explore several different kinds of thyristors, most of which stem from a single, basic two-transistor core circuit. Before we do that, though, it would be beneficial to study the technological predecessor to thyristors: gas discharge tubes.
7.02: Gas Discharge Tubes
If you’ve ever witnessed a lightning storm, you’ve seen electrical hysteresis in action (and probably didn’t realize what you were seeing). The action of strong wind and rain accumulates tremendous static electric charges between cloud and earth, and between clouds as well. Electric charge imbalances manifest themselves as high voltages, and when the electrical resistance of air can no longer hold these high voltages at bay, huge surges of current travel between opposing poles of electrical charge which we call “lightning.”
The buildup of high voltages by wind and rain is a fairly continuous process, the rate of charge accumulation increasing under the proper atmospheric conditions. However, lightning bolts are anything but continuous: they exist as relatively brief surges rather than continuous discharges. Why is this? Why don’t we see soft, glowing lightning arcs instead of violently brief lightning bolts? The answer lies in the nonlinear (and hysteric) resistance of air.
Under ordinary conditions, air has an extremely high amount of resistance. It is so high, in fact, that we typically treat its resistance as infinite and electrical conduction through the air as negligible. The presence of water and dust in air lowers its resistance some, but it is still an insulator for most practical purposes. When enough high voltage is applied across a distance of air, though, its electrical properties change: electrons become “stripped” from their normal positions around their respective atoms and are liberated to constitute a current. In this state, air is considered to be ionized and is called a plasma rather than a gas. This usage of the word “plasma” is not to be confused with the medical term (meaning the fluid portion of blood), but is a fourth state of matter, the other three being solid, liquid, and vapor (gas). Plasma is a relatively good conductor of electricity, its specific resistance being much lower than that of the same substance in its gaseous state.
As an electric current moves through the plasma, there is energy dissipated in the plasma in the form of heat, just as current through a solid resistor dissipates energy in the form of heat. In the case of lightning, the temperatures involved are extremely high. High temperatures are also sufficient to convert gaseous air into a plasma or maintain plasma in that state without the presence of high voltage. As the voltage between cloud and earth, or between cloud and cloud, decreases as the charge imbalance is neutralized by the current of the lightning bolt, the heat dissipated by the bolt maintains the air path in a plasma state, keeping its resistance low. The lightning bolt remains a plasma until the voltage decreases to too low a level to sustain enough current to dissipate enough heat. Finally, the air returns to a gaseous state and stops conducting current, thus allowing voltage to build up once more.
Note how throughout this cycle, the air exhibits hysteresis. When not conducting electricity, it tends to remain an insulator until voltage builds up past a critical threshold point. Then, once it changes state and becomes a plasma, it tends to remain a conductor until voltage falls below a lower critical threshold point. Once “turned on” it tends to stay “on,” and once “turned off” it tends to stay “off.” This hysteresis, combined with a steady buildup of voltage due to the electrostatic effects of wind and rain, explains the action of lightning as brief bursts.
In electronic terms, what we have here in the action of lightning is a simple relaxation oscillator. Oscillators are electronic circuits that produce an oscillating (AC) voltage from a steady supply of DC power. A relaxation oscillator is one that works on the principle of a charging capacitor that is suddenly discharged every time its voltage reaches a critical threshold value. One of the simplest relaxation oscillators in existence is comprised of three components (not counting the DC power supply): a resistor, capacitor, and neon lamp in Figure below.
Simple relaxation oscillator
Neon lamps are nothing more than two metal electrodes inside a sealed glass bulb, separated by the neon gas inside. At room temperatures and with no applied voltage, the lamp has nearly infinite resistance. However, once a certain threshold voltage is exceeded (this voltage depends on the gas pressure and geometry of the lamp), the neon gas will become ionized (turned into a plasma) and its resistance dramatically reduced. In effect, the neon lamp exhibits the same characteristics as air in a lightning storm, complete with the emission of light as a result of the discharge, albeit on a much smaller scale.
The capacitor in the relaxation oscillator circuit shown above charges at an inverse exponential rate determined by the size of the resistor. When its voltage reaches the threshold voltage of the lamp, the lamp suddenly “turns on” and quickly discharges the capacitor to a low voltage value. Once discharged, the lamp “turns off” and allows the capacitor to build up a charge once more. The result is a series of brief flashes of light from the lamp, the rate of which is dictated by battery voltage, resistor resistance, capacitor capacitance, and lamp threshold voltage.
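The time between flashes can be estimated from the capacitor charging equation. If the capacitor charges from roughly zero toward the supply voltage V and the lamp fires at its threshold voltage Vth, the charging time is t = RC·ln(V/(V − Vth)). With assumed values, not taken from any particular lamp: V = 120 V, Vth = 90 V, R = 1 MΩ, and C = 1 µF give t = (1 s)·ln(4) ≈ 1.4 seconds per flash. In a real circuit the capacitor only discharges down to the lamp's lower extinction voltage rather than to zero, which shortens the period somewhat.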
While gas-discharge lamps are more commonly used as sources of illumination, their hysteretic properties were leveraged in slightly more sophisticated variants known as thyratron tubes. Essentially a gas-filled triode tube (a triode being a three-element vacuum electron tube performing a function much like that of the N-channel, D-type IGFET), the thyratron tube could be turned on with a small control voltage applied between grid and cathode, and turned off by reducing the plate-to-cathode voltage.
Simple thyratron control circuit
In essence, thyratron tubes were controlled versions of neon lamps built specifically for switching current to a load. The dot inside the circle of the schematic symbol indicates a gas fill, as opposed to the hard vacuum normally seen in other electron tube designs. In the circuit shown in Figure above, the thyratron tube allows current through the load in one direction (note the polarity across the load resistor) when triggered by the small DC control voltage connected between grid and cathode. Note that the load’s power source is AC, which provides a clue about how the thyratron turns off after it has been triggered on: since AC voltage periodically passes through a condition of 0 volts between half-cycles, the current through an AC-powered load must also periodically halt. This brief pause of current between half-cycles gives the tube’s gas time to cool, letting it return to its normal “off” state. Conduction may resume only if enough voltage is applied by the AC power source (some other time in the wave’s cycle) and if the DC control voltage allows it.
An oscilloscope display of load voltage in such a circuit would look something like Figure below.
Thyratron waveforms
As the AC supply voltage climbs from zero volts to its first peak, the load voltage remains at zero (no load current) until the threshold voltage is reached. At that point, the tube switches “on” and begins to conduct, the load voltage now following the AC voltage through the rest of the half cycle. Load voltage exists (and thus load current) even when the AC voltage waveform has dropped below the threshold value of the tube. This is hysteresis at work: the tube stays in its conductive mode past the point where it first turned on, continuing to conduct until the supply voltage drops off to almost zero volts. Because thyratron tubes are one-way (diode) devices, no voltage develops across the load through the negative half-cycle of AC. In practical thyratron circuits, multiple tubes are arranged in some form of full-wave rectifier circuit to provide full-wave DC power to the load.
The thyratron tube has been applied to a relaxation oscillator circuit. [VTS] The frequency is controlled by a small DC voltage between grid and cathode. (See Figure below) This voltage-controlled oscillator is known as a VCO. Relaxation oscillators produce a very non-sinusoidal output, and they exist mostly as demonstration circuits (as is the case here) or in applications where the harmonic-rich waveform is desirable. [MET]
Voltage controlled thyratron relaxation oscillator
I speak of thyratron tubes in the past tense for good reason: modern semiconductor components have obsoleted thyratron tube technology for all but a few very special applications. It is no coincidence that the word thyristor bears so much similarity to the word thyratron, for this class of semiconductor components does much the same thing: it uses hysteresis to switch current on and off. It is these modern devices that we now turn our attention to.
Review
• Electrical hysteresis, the tendency for a component to remain “on” (conducting) after it begins to conduct and to remain “off” (nonconducting) after it ceases to conduct, helps to explain why lightning bolts exist as momentary surges of current rather than continuous discharges through the air.
• Simple gas-discharge tubes such as neon lamps exhibit electrical hysteresis.
• More advanced gas-discharge tubes have been made with control elements so that their “turn-on” voltage could be adjusted by an external signal. The most common of these tubes was called the thyratron.
• Simple oscillator circuits called relaxation oscillators may be created with nothing more than a resistor-capacitor charging network and a hysteretic device connected across the capacitor.
Our exploration of thyristors begins with a device called the four-layer diode, also known as a PNPN diode, or a Shockley diode after its inventor, William Shockley. This is not to be confused with a Schottky diode, that two-layer metal-semiconductor device known for its high switching speed. A crude illustration of the Shockley diode, often seen in textbooks, is a four-layer sandwich of P-N-P-N semiconductor material, Figure below.
Shockley or 4-layer diode
Unfortunately, this simple illustration does nothing to enlighten the viewer on how it works or why. Consider an alternative rendering of the device’s construction in Figure below.
Transistor equivalent of Shockley diode
Shown like this, it appears to be a set of interconnected bipolar transistors, one PNP and the other NPN. Drawn using standard schematic symbols, and respecting the layer doping concentrations not shown in the last image, the Shockley diode looks like this (Figure below)
Shockley diode: physical diagram, equivalent schematic diagram, and schematic symbol.
Let’s connect one of these devices to a source of variable voltage and see what happens: (Figure below)
Powered Shockley diode equivalent circuit.
With no voltage applied, of course there will be no current. As voltage is initially increased, there will still be no current because neither transistor is able to turn on: both will be in cutoff mode. To understand why this is, consider what it takes to turn a bipolar junction transistor on: current through the base-emitter junction. As you can see in the diagram, base current through the lower transistor is controlled by the upper transistor, and the base current through the upper transistor is controlled by the lower transistor. In other words, neither transistor can turn on until the other transistor turns on. What we have here, in vernacular terms, is known as a Catch-22.
So how can a Shockley diode ever conduct current, if its constituent transistors stubbornly maintain themselves in a state of cutoff? The answer lies in the behavior of real transistors as opposed to ideal transistors. An ideal bipolar transistor will never conduct collector current if no base current flows, no matter how much or little voltage we apply between collector and emitter. Real transistors, on the other hand, have definite limits to how much collector-emitter voltage each can withstand before one breaks down and conducts. If two real transistors are connected in this fashion to form a Shockley diode, each one will conduct if sufficient voltage is applied by the battery between anode and cathode to cause one of them to break down. Once one transistor breaks down and begins to conduct, it will allow base current through the other transistor, causing it to turn on in a normal fashion, which then allows base current through the first transistor. The end result is that both transistors will be saturated, now keeping each other turned on instead of off.
So, we can force a Shockley diode to turn on by applying sufficient voltage between anode and cathode. As we have seen, this will inevitably cause one of the transistors to turn on, which then turns the other transistor on, ultimately “latching” both transistors on where each will tend to remain. But how do we now get the two transistors to turn off again? Even if the applied voltage is reduced to a point well below what it took to get the Shockley diode conducting, it will remain conducting because both transistors now have base current to maintain regular, controlled conduction. The answer to this is to reduce the applied voltage to a much lower point where too little current flows to maintain transistor bias, at which point one of the transistors will cutoff, which then halts base current through the other transistor, sealing both transistors in the “off” state as each one was before any voltage was applied at all.
If we graph this sequence of events and plot the results on an I/V graph, the hysteresis is evident. First, we will observe the circuit as the DC voltage source (battery) is set to zero voltage: (Figure below)
Zero applied voltage; zero current
Next, we will steadily increase the DC voltage. Current through the circuit is at or nearly at zero, as the breakdown limit has not been reached for either transistor: (Figure below)
Some applied voltage; still no current
When the voltage breakdown limit of one transistor is reached, it will begin to conduct collector current even though no base current has gone through it yet. Normally, this sort of treatment would destroy a bipolar junction transistor, but the PNP junctions comprising a Shockley diode are engineered to take this kind of abuse, similar to the way a Zener diode is built to handle reverse breakdown without sustaining damage. For the sake of illustration I’ll assume the lower transistor breaks down first, sending current through the base of the upper transistor: (Figure below)
More voltage applied; lower transistor breaks down
As the upper transistor receives base current, it turns on as expected. This action allows the lower transistor to conduct normally, the two transistors “sealing” themselves in the “on” state. Full current is quickly seen in the circuit: (Figure below)
Transistors are now fully conducting.
The positive feedback mentioned earlier in this chapter is clearly evident here. When one transistor breaks down, it allows current through the device structure. This current may be viewed as the “output” signal of the device. Once an output current is established, it works to hold both transistors in saturation, thus ensuring the continuation of a substantial output current. In other words, an output current “feeds back” positively to the input (transistor base current) to keep both transistors in the “on” state, thus reinforcing (or regenerating) itself.
With both transistors maintained in a state of saturation with the presence of ample base current, each will continue to conduct even if the applied voltage is greatly reduced from the breakdown level. The effect of positive feedback is to keep both transistors in a state of saturation despite the loss of input stimulus (the original, high voltage needed to break down one transistor and cause a base current through the other transistor): (Figure below)
Current maintained even when voltage is reduced
If the DC voltage source is turned down too far, though, the circuit will eventually reach a point where there isn’t enough current to sustain both transistors in saturation. As one transistor passes less and less collector current, it reduces the base current for the other transistor, thus reducing base current for the first transistor. The vicious cycle continues rapidly until both transistors fall into cutoff: (Figure below)
If voltage drops too low, both transistors shut off.
Here, positive feedback is again at work: the fact that the cause/effect cycle between both transistors is “vicious” (a decrease in current through one works to decrease current through the other, further decreasing current through the first transistor) indicates a positive relationship between output (controlled current) and input (controlling current through the transistors’ bases).
The resulting curve on the graph is classically hysteretic: as the input signal (voltage) is increased and decreased, the output (current) does not follow the same path going down as it did going up: (Figure below)
Hysteretic curve
Put in simple terms, the Shockley diode tends to stay on once it’s turned on, and stay off once it’s turned off. There is no “in-between” or “active” mode in its operation: it is a purely on-or-off device, as are all thyristors.
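This latched on/off behavior is easy to capture in a few lines of code. The following Python sketch is a highly simplified behavioral model (the breakover voltage, on-state drop, holding current, and load resistance are all assumed illustrative values, not from any datasheet); sweeping the applied voltage up and then back down shows the hysteresis:

```python
def shockley_state(v_applied, was_on, v_breakover=60.0, v_on=1.0,
                   i_holding=5e-3, r_load=1e3):
    """Crude behavioral model of a Shockley diode in series with a load
    resistor.  All parameter values are illustrative assumptions.
    Returns (is_on, current)."""
    if not was_on:
        if v_applied >= v_breakover:            # breakover fires the device
            return True, (v_applied - v_on) / r_load
        return False, 0.0
    current = (v_applied - v_on) / r_load       # latched: ~1 V across the device
    if current < i_holding:                     # low-current dropout
        return False, 0.0
    return True, current

state = False
for v in [0, 30, 59, 61, 40, 10, 6, 5, 2]:      # sweep up, then back down
    state, i = shockley_state(v, state)
    print(f"V = {v:3d} V -> {'ON ' if state else 'off'}  I = {i * 1000:5.1f} mA")
```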
A few special terms apply to Shockley diodes and all other thyristor devices built upon the Shockley diode foundation. First is the term used to describe its “on” state: latched. The word “latch” is reminiscent of a door lock mechanism, which tends to keep the door closed once it has been pushed shut. The term firing refers to the initiation of a latched state. To get a Shockley diode to latch, the applied voltage must be increased until breakover is attained. Though this action is best described as transistor breakdown, the term breakover is used instead because the result is a pair of transistors in mutual saturation rather than destruction of the transistor. A latched Shockley diode is re-set back into its nonconducting state by reducing current through it until low-current dropout occurs.
Note that Shockley diodes may be fired in a way other than breakover: by an excessive rate of voltage rise, or dv/dt. If the applied voltage across the diode increases at a high rate of change, it may trigger. This latching (turning on) of the diode is caused by inherent junction capacitances within the transistors. Capacitors, as you may recall, oppose changes in voltage by drawing or supplying current. If the applied voltage across a Shockley diode rises at too fast a rate, those tiny capacitances will draw enough current during that time to activate the transistor pair, turning them both on. Usually, this form of latching is undesirable, and can be minimized by filtering high-frequency transients (fast voltage rises) from the diode with series inductors and parallel resistor-capacitor networks called snubbers: (Figure below)
Both the series inductor and parallel resistor-capacitor “snubber” circuit help minimize the Shockley diode’s exposure to excessively rising voltage.
The voltage rise limit of a Shockley diode is referred to as the critical rate of voltage rise. Manufacturers usually provide this specification for the devices they sell.
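The dv/dt false-triggering mechanism is simply capacitor current, i = C(dv/dt). A short Python sketch (both the junction capacitance and the rate of voltage rise are assumed figures for illustration) shows how a fast transient across even a tiny internal capacitance can supply milliamps of unintended base current:

```python
def displacement_current(c_junction_farads, dv_dt_volts_per_second):
    """Current drawn by a junction capacitance for a given rate of
    voltage rise: i = C * dv/dt."""
    return c_junction_farads * dv_dt_volts_per_second

# Assumed figures for illustration: 100 pF of effective junction
# capacitance and a transient rising at 500 V per microsecond.
i = displacement_current(100e-12, 500e6)
print(f"displacement current ~ {i * 1000:.0f} mA")
# If this exceeds the base current needed to turn on the transistor pair,
# the device can latch without ever reaching its breakover voltage.
```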
Review
• Shockley diodes are four-layer PNPN semiconductor devices. These behave as a pair of interconnected PNP and NPN transistors.
• Like all thyristors, Shockley diodes tend to stay on once turned on (latched), and stay off once turned off.
• To latch a Shockley diode, exceed the anode-to-cathode breakover voltage, or exceed the anode-to-cathode critical rate of voltage rise.
• To cause a Shockley diode to stop conducting, reduce the current going through it to a level below its low-current dropout threshold.
7.04: The DIAC
Like all diodes, Shockley diodes are unidirectional devices; that is, these only conduct current in one direction. If bidirectional (AC) operation is desired, two Shockley diodes may be joined in parallel facing different directions to form a new kind of thyristor, the DIAC: (Figure below)
The DIAC
A DIAC operated with a DC voltage across it behaves exactly the same as a Shockley diode. With AC, however, the behavior is different from what one might expect. Because alternating current repeatedly reverses direction, DIACs will not stay latched longer than one-half cycle. If a DIAC becomes latched, it will continue to conduct current only as long as voltage is available to push enough current in that direction. When the AC polarity reverses, as it must twice per cycle, the DIAC will drop out due to insufficient current, necessitating another breakover before it conducts again. The result is the current waveform in Figure below.
DIAC waveforms
DIACs are almost never used alone, but in conjunction with other thyristor devices.
Shockley Diodes and Silicon Controlled Rectifiers (SCRs)
Shockley diodes are curious devices, but rather limited in application. Their usefulness may be expanded, however, by equipping them with another means of latching. In doing so, each becomes a true amplifying device (if only in an on/off mode), and we refer to these as silicon-controlled rectifiers, or SCRs.
The progression from Shockley diode to SCR is achieved with one small addition, actually nothing more than a third wire connection to the existing PNPN structure: (Figure below)
The Silicon-Controlled Rectifier (SCR)
SCR Conduction
If an SCR’s gate is left floating (disconnected), it behaves exactly as a Shockley diode. It may be latched by breakover voltage or by exceeding the critical rate of voltage rise between anode and cathode, just as with the Shockley diode. Dropout is accomplished by reducing current until one or both internal transistors fall into cutoff mode, also like the Shockley diode. However, because the gate terminal connects directly to the base of the lower transistor, it may be used as an alternative means to latch the SCR. By applying a small voltage between gate and cathode, the lower transistor will be forced on by the resulting base current, which will cause the upper transistor to conduct, which then supplies the lower transistor’s base with current so that it no longer needs to be activated by a gate voltage. The necessary gate current to initiate latch-up, of course, will be much lower than the current through the SCR from cathode to anode, so the SCR does achieve a measure of amplification.
Triggering/Firing
This method of securing SCR conduction is called triggering or firing, and it is by far the most common way that SCRs are latched in actual practice. In fact, SCRs are usually chosen so that their breakover voltage is far beyond the greatest voltage expected to be experienced from the power source so that it can be turned on only by an intentional voltage pulse applied to the gate.
Reverse Triggering
It should be mentioned that SCRs may sometimes be turned off by directly shorting their gate and cathode terminals together, or by “reverse-triggering” the gate with a negative voltage (in reference to the cathode), so that the lower transistor is forced into cutoff. I say this is “sometimes” possible because it involves shunting all of the upper transistor’s collector current past the lower transistor’s base. This current may be substantial, making triggered shut-off of an SCR difficult at best. A variation of the SCR, called a Gate-Turn-Off thyristor, or GTO, makes this task easier. But even with a GTO, the gate current required to turn it off may be as much as 20% of the anode (load) current! The schematic symbol for a GTO is shown in the following illustration: (Figure below)
The Gate Turn-Off thyristor (GTO)
SCRs vs GTOs
SCRs and GTOs share the same equivalent schematics (two transistors connected in a positive-feedback fashion), the only differences being details of construction designed to grant the NPN transistor a greater β than the PNP. This allows a smaller gate current (forward or reverse) to exert a greater degree of control over conduction from cathode to anode, with the PNP transistor’s latched state being more dependent upon the NPN’s than vice versa. The Gate-Turn-Off thyristor is also known by the name of Gate-Controlled Switch, or GCS.
Testing SCR Functionality with an Ohmmeter
A rudimentary test of SCR function, or at least terminal identification, may be performed with an ohmmeter. Because the internal connection between gate and cathode is a single PN junction, a meter should indicate continuity between these terminals with the red test lead on the gate and the black test lead on the cathode like this: (Figure below)
Rudimentary test of SCR
All other continuity measurements performed on an SCR will show “open” (“OL” on some digital multimeter displays). It must be understood that this test is very crude and does not constitute a comprehensive assessment of the SCR. It is possible for an SCR to give good ohmmeter indications and still be defective. Ultimately, the only way to test an SCR is to subject it to a load current.
If you are using a multimeter with a “diode check” function, the gate-to-cathode junction voltage indication you get may or may not correspond to what’s expected of a silicon PN junction (approximately 0.7 volts). In some cases, you will read a much lower junction voltage: mere hundredths of a volt. This is due to an internal resistor connected between the gate and cathode incorporated within some SCRs. This resistor is added to make the SCR less susceptible to false triggering by spurious voltage spikes, from circuit “noise” or from static electric discharge. In other words, having a resistor connected across the gate-cathode junction requires that a strong triggering signal (substantial current) be applied to latch the SCR. This feature is often found in larger SCRs, not on small SCRs. Bear in mind that an SCR with an internal resistor connected between gate and cathode will indicate continuity in both directions between those two terminals: (Figure below)
Larger SCRs have gate to cathode resistor.
Sensitive Gate SCRs
“Normal” SCRs, lacking this internal resistor, are sometimes referred to as sensitive gate SCRs due to their ability to be triggered by the slightest positive gate signal.
The test circuit for an SCR is both practical as a diagnostic tool for checking suspected SCRs and also an excellent aid to understanding basic SCR operation. A DC voltage source is used for powering the circuit, and two pushbutton switches are used to latch and unlatch the SCR, respectively: (Figure below)
SCR testing circuit
Actuating the normally-open “on” pushbutton switch connects the gate to the anode, allowing current from the negative terminal of the battery, through the cathode-gate PN junction, through the switch, through the load resistor, and back to the battery. This gate current should force the SCR to latch on, allowing current to go directly from cathode to anode without further triggering through the gate. When the “on” pushbutton is released, the load should remain energized.
Pushing the normally-closed “off” pushbutton switch breaks the circuit, forcing current through the SCR to halt, thus forcing it to turn off (low-current dropout).
Holding Current
If the SCR fails to latch, the problem may be with the load and not the SCR. A certain minimum amount of load current is required to hold the SCR latched in the “on” state. This minimum current level is called the holding current. A load with too great a resistance value may not draw enough current to keep an SCR latched when gate current ceases, thus giving the false impression of a bad (unlatchable) SCR in the test circuit. Holding current values for different SCRs should be available from the manufacturers. Typical holding current values range from 1 milliamp to 50 milliamps or more for larger units.
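Whether a given load will keep the SCR latched in this test circuit is a one-line calculation. The Python sketch below uses assumed example values for the supply voltage, the on-state drop across the SCR, and the holding current; it flags load resistances that are too large to sustain latching:

```python
def scr_stays_latched(v_supply, v_scr_on, r_load, i_holding):
    """Check whether the load permits enough anode current to keep an SCR
    latched once gate current is removed."""
    i_load = (v_supply - v_scr_on) / r_load
    return i_load >= i_holding, i_load

# Assumed test-circuit values: 12 V battery, about 1 V dropped across the
# conducting SCR, and a 10 mA holding current.
for r_load in (100.0, 470.0, 2200.0):
    ok, i = scr_stays_latched(12.0, 1.0, r_load, 10e-3)
    print(f"R_load = {r_load:6.0f} ohms, I = {i * 1000:5.1f} mA -> "
          f"{'stays latched' if ok else 'drops out (looks like a bad SCR)'}")
```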
For the test to be fully comprehensive, more than the triggering action needs to be tested. The forward breakover voltage limit of the SCR could be tested by increasing the DC voltage supply (with no pushbuttons actuated) until the SCR latches all on its own. Beware that a breakover test may require very high voltage: many power SCRs have breakover voltage ratings of 600 volts or more! Also, if a pulse voltage generator is available, the critical rate of voltage rise for the SCR could be tested in the same way: subject it to pulsing supply voltages of different V/time rates with no pushbutton switches actuated and see when it latches.
In this simple form, the SCR test circuit could suffice as a start/stop control circuit for a DC motor, lamp, or other practical load: (Figure below)
DC motor start/stop control circuit
The “Crowbar” Circuit
Another practical use for the SCR in a DC circuit is as a crowbar device for overvoltage protection. A “crowbar” circuit consists of an SCR placed in parallel with the output of a DC power supply, for placing a direct short-circuit on the output of that supply to prevent excessive voltage from reaching the load. Damage to the SCR and power supply is prevented by the judicious placement of a fuse or substantial series resistance ahead of the SCR to limit short-circuit current: (Figure below)
Crowbar circuit used in DC power supply
Some device or circuit sensing the output voltage will be connected to the gate of the SCR, so that when an overvoltage condition occurs, voltage will be applied between the gate and cathode, triggering the SCR and forcing the fuse to blow. The effect will be approximately the same as dropping a solid steel crowbar directly across the output terminals of the power supply, hence the name of the circuit.
Most applications of the SCR are for AC power control, despite the fact that SCRs are inherently DC (unidirectional) devices. If bidirectional circuit current is required, multiple SCRs may be used, with one or more facing each direction to handle current through both half-cycles of the AC wave. The primary reason SCRs are used at all for AC power control applications is the unique response of a thyristor to an alternating current. As we saw with the thyratron tube (the electron tube version of the SCR) and the DIAC, a hysteretic device triggered on during a portion of an AC half-cycle will latch and remain on throughout the remainder of that half-cycle until the AC current decreases to zero, as it must to begin the next half-cycle. Just prior to the zero-crossover point of the current waveform, the thyristor will turn off due to insufficient current (this behavior is also known as natural commutation) and must be fired again during the next cycle. The result is a circuit current equivalent to a “chopped up” sine wave. For review, here is the graph of a DIAC’s response to an AC voltage whose peak exceeds the breakover voltage of the DIAC: (Figure below)
DIAC bidirectional response
With the DIAC, that breakover voltage limit was a fixed quantity. With the SCR, we have control over exactly when the device becomes latched by triggering the gate at any point in time along the waveform. By connecting a suitable control circuit to the gate of an SCR, we can “chop” the sine wave at any point to allow for time-proportioned power control to a load.
Take the circuit in Figure below as an example. Here, an SCR is positioned in a circuit to control power to a load from an AC source.
SCR control of AC power
Being a unidirectional (one-way) device, at most, we can only deliver half-wave power to the load, in the half-cycle of AC where the supply voltage polarity is positive on the top and negative on the bottom. However, for demonstrating the basic concept of time-proportional control, this simple circuit is better than one controlling full-wave power (which would require two SCRs).
With no triggering to the gate, and the AC source voltage well below the SCR’s breakover voltage rating, the SCR will never turn on. Connecting the SCR gate to the anode through a standard rectifying diode (to prevent reverse current through the gate in the event of the SCR containing a built-in gate-cathode resistor) will allow the SCR to be triggered almost immediately at the beginning of every positive half-cycle: (Figure below)
Gate connected directly to anode through a diode; nearly complete half-wave current through load.
SCR Trigger Delay
We can delay the triggering of the SCR, however, by inserting some resistance into the gate circuit, thus increasing the amount of voltage drop required before enough gate current triggers the SCR. In other words, if we make it harder for electrons to flow through the gate by adding a resistance, the AC voltage will have to reach a higher point in its cycle before there will be enough gate current to turn the SCR on. The result is in Figure below.
Resistance inserted in gate circuit; less than half-wave current through load.
With the half-sine wave chopped up to a greater degree by a delayed triggering of the SCR, the load receives less average power (power is delivered for less time throughout a cycle). By making the series gate resistor variable, we can make adjustments to the time-proportioned power: (Figure below)
Increasing the resistance raises the threshold level, causing less power to be delivered to the load. Decreasing the resistance lowers the threshold level, causing more power to be delivered to the load.
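The chopped-up waveform translates directly into average load power. The following Python sketch (the 170 V peak and 100 ohm load are assumed example figures) numerically averages v²/R over one full cycle for several firing angles, showing how delaying the trigger point reduces the power delivered:

```python
import math

def halfwave_scr_power(v_peak, r_load, firing_angle_deg, steps=100000):
    """Average power in a resistive load under half-wave SCR phase control,
    found by numerically averaging v**2 / R over one full cycle.  The SCR
    conducts from the firing angle to 180 degrees of the positive half-cycle."""
    alpha = math.radians(firing_angle_deg)
    total = 0.0
    for k in range(steps):
        theta = 2 * math.pi * (k + 0.5) / steps
        if alpha <= theta <= math.pi:            # SCR conducting
            v = v_peak * math.sin(theta)
            total += v * v / r_load
    return total / steps

# Assumed example: 170 V peak (about 120 V RMS line) and a 100 ohm load.
for angle in (0, 45, 90, 135):
    p = halfwave_scr_power(170.0, 100.0, angle)
    print(f"firing angle {angle:3d} deg -> average load power {p:6.1f} W")
```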
Unfortunately, this control scheme has a significant limitation. In using the AC source waveform for our SCR triggering signal, we limit control to the first half of the waveform’s half-cycle. In other words, it is not possible for us to wait until after the wave’s peak to trigger the SCR. This means we can turn down the power only to the point where the SCR turns on at the very peak of the wave: (Figure below)
Circuit at minimum power setting
Raising the trigger threshold any more will cause the circuit to not trigger at all since not even the peak of the AC power voltage will be enough to trigger the SCR. The result will be no power to the load.
An ingenious solution to this control dilemma is found in the addition of a phase-shifting capacitor to the circuit: (Figure below)
Addition of a phase-shifting capacitor to the circuit
The smaller waveform shown on the graph is the voltage across the capacitor. For the sake of illustrating the phase shift, I’m assuming a condition of maximum control resistance where the SCR is not triggering at all with no load current, save for what little current goes through the control resistor and capacitor. This capacitor voltage will be phase-shifted anywhere from 0° to 90° lagging behind the power source AC waveform. When this phase-shifted voltage reaches a high enough level, the SCR will trigger.
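For this simple series-resistor, shunt-capacitor network, the capacitor voltage lags the line voltage by approximately arctan(2πfRC), which moves from 0° toward 90° as the control resistance is increased. A quick Python sketch with assumed component values:

```python
import math

def rc_lag_degrees(r_ohms, c_farads, freq_hz):
    """Phase lag of the capacitor voltage behind the source voltage for a
    series-R, shunt-C phase-shift network: arctan(2*pi*f*R*C)."""
    return math.degrees(math.atan(2 * math.pi * freq_hz * r_ohms * c_farads))

# Assumed values for illustration: 0.1 uF capacitor, 60 Hz line, and a
# control resistance swept from 1 k to 100 k.
for r_ohms in (1e3, 10e3, 27e3, 100e3):
    lag = rc_lag_degrees(r_ohms, 0.1e-6, 60.0)
    print(f"R = {r_ohms / 1e3:5.0f} k -> capacitor voltage lags by {lag:4.1f} degrees")
```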
With enough voltage across the capacitor to periodically trigger the SCR, the resulting load current waveform will look something like Figure below.
Phase-shifted signal triggers SCR into conduction.
Because the capacitor waveform is still rising after the main AC power waveform has reached its peak, it becomes possible to trigger the SCR at a threshold level beyond that peak, thus chopping the load current wave further than it was possible with the simpler circuit. In reality, the capacitor voltage waveform is a bit more complex than what is shown here, its sinusoidal shape distorted every time the SCR latches on. However, what I’m trying to illustrate here is the delayed triggering action gained with the phase-shifting RC network; thus, a simplified, undistorted waveform serves the purpose well.
SCR triggering by Complex Circuits
SCRs may also be triggered, or “fired,” by more complex circuits. While the circuit previously shown is sufficient for a simple application like a lamp control, large industrial motor controls often rely on more sophisticated triggering methods. Sometimes, pulse transformers are used to couple a triggering circuit to the gate and cathode of an SCR to provide electrical isolation between the triggering and power circuits: (Figure below)
Transformer coupling of trigger signal provides isolation.
When multiple SCRs are used to control power, their cathodes are often not electrically common, making it difficult to connect a single triggering circuit to all SCRs equally. An example of this is the controlled bridge rectifier shown in Figure below.
Controlled bridge rectifier
In any bridge rectifier circuit, the rectifying diodes (in this example, the rectifying SCRs) must conduct in opposite pairs. SCR1 and SCR3 must be fired simultaneously, and SCR2 and SCR4 must be fired together as a pair. As you will notice, though, these pairs of SCRs do not share the same cathode connections, meaning that it would not work to simply parallel their respective gate connections and connect a single voltage source to trigger both: (Figure below)
This strategy will not work for triggering SCR2 and SCR4 as a pair.
Although the triggering voltage source shown will trigger SCR4, it will not trigger SCR2 properly because the two thyristors do not share a common cathode connection to reference that triggering voltage. Pulse transformers connecting the two thyristor gates to a common triggering voltage source will work, however: (Figure below)
Transformer coupling of the gates allows triggering of SCR2 and SCR4 .
Bear in mind that this circuit only shows the gate connections for two out of the four SCRs. Pulse transformers and triggering sources for SCR1 and SCR3, as well as the details of the pulse sources themselves, have been omitted for the sake of simplicity.
Controlled bridge rectifiers are not limited to single-phase designs. In most industrial control systems, AC power is available in a three-phase form for maximum efficiency, and solid-state control circuits are built to take advantage of that. A three-phase controlled rectifier circuit built with SCRs, without pulse transformers or triggering circuitry shown, would look like Figure below.
Three-phase bridge SCR control of load
REVIEW:
• A Silicon-Controlled Rectifier, or SCR, is essentially a Shockley diode with an extra terminal added. This extra terminal is called the gate, and it is used to trigger the device into conduction (latch it) by the application of a small voltage. To trigger, or fire, an SCR, voltage must be applied between the gate and cathode, positive to the gate and negative to the cathode.
• When testing an SCR, a momentary connection between the gate and anode is sufficient in polarity, intensity, and duration to trigger it. SCRs may be fired by an intentional triggering of the gate terminal, excessive voltage (breakdown) between anode and cathode, or an excessive rate of voltage rise between the anode and cathode. SCRs may be turned off by anode current falling below the holding current value (low-current dropout) or by “reverse-firing” the gate (applying a negative voltage to the gate). Reverse-firing is only sometimes effective and always involves high gate current.
• A variant of the SCR called a Gate-Turn-Off thyristor (GTO), is specifically designed to be turned off by means of reverse triggering. Even then, reverse triggering requires fairly high current: typically 20% of the anode current. SCR terminals may be identified by a continuity meter: the only two terminals showing any continuity between them at all should be the gate and cathode. Gate and cathode terminals connect to a PN junction inside the SCR, so a continuity meter should obtain a diode-like reading between these two terminals with the red (+) lead on the gate and the black (-) lead on the cathode. Beware, though, that some large SCRs have an internal resistor connected between gate and cathode, which will affect any continuity readings taken by a meter.
• SCRs are true rectifiers: they only allow current through them in one direction. This means they cannot be used alone for full-wave AC power control. If the diodes in a rectifier circuit are replaced by SCRs, you have the makings of a controlled rectifier circuit, whereby DC power to a load may be time-proportioned by triggering the SCRs at different points along the AC power waveform.
SCRs are unidirectional (one-way) current devices, making them useful for controlling DC only. If two SCRs are joined in back-to-back parallel fashion just like two Shockley diodes were joined together to form a DIAC, we have a new device known as the TRIAC: (Figure below)
The TRIAC: SCR equivalent circuit and TRIAC schematic symbol
Because individual SCRs are more flexible to use in advanced control systems, these are more commonly seen in circuits like motor drives; TRIACs are usually seen in simple, low-power applications like household dimmer switches. A simple lamp dimmer circuit is shown in Figure below, complete with the phase-shifting resistor-capacitor network necessary for after-peak firing.
TRIAC phase-control of power
TRIACs are notorious for not firing symmetrically. This means these usually won’t trigger at the exact same gate voltage level for one polarity as for the other. Generally speaking, this is undesirable, because unsymmetrical firing results in a current waveform with a greater variety of harmonic frequencies. Waveforms that are symmetrical above and below their average centerlines are comprised of only odd-numbered harmonics. Unsymmetrical waveforms, on the other hand, contain even-numbered harmonics (which may or may not be accompanied by odd-numbered harmonics as well).
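This odd-versus-even harmonic behavior is easy to verify numerically. The Python sketch below (the firing angles are chosen arbitrarily for illustration) builds one cycle of a phase-controlled sine wave and lists the first six harmonic amplitudes for symmetrical and unsymmetrical firing:

```python
import numpy as np

def triac_wave(firing_pos_deg, firing_neg_deg, points=4096):
    """One cycle of a phase-controlled sine wave: zero until the firing
    angle within each half-cycle, then following the sine."""
    theta = np.linspace(0.0, 2 * np.pi, points, endpoint=False)
    v = np.sin(theta)
    v[theta < np.radians(firing_pos_deg)] = 0.0            # positive half-cycle delay
    v[(theta >= np.pi) &
      (theta < np.pi + np.radians(firing_neg_deg))] = 0.0  # negative half-cycle delay
    return v

def harmonic_amplitudes(v, count=6):
    """Amplitudes of the first few harmonics via an FFT."""
    spectrum = np.abs(np.fft.rfft(v)) / (len(v) / 2)
    return spectrum[1:count + 1]

# Symmetrical firing (60/60 degrees) versus unsymmetrical firing (60/80 degrees).
print("symmetric  :", np.round(harmonic_amplitudes(triac_wave(60, 60)), 3))
print("asymmetric :", np.round(harmonic_amplitudes(triac_wave(60, 80)), 3))
# Only the symmetric case shows essentially zero 2nd, 4th, and 6th harmonics.
```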
In the interest of reducing total harmonic content in power systems, the fewer and less diverse the harmonics, the better—one more reason individual SCRs are favored over TRIACs for complex, high-power control circuits. One way to make the TRIAC’s current waveform more symmetrical is to use a device external to the TRIAC to time the triggering pulse. A DIAC placed in series with the gate does a fair job of this: (Figure below)
DIAC improves symmetry of control
DIAC breakover voltages tend to be much more symmetrical (the same in one polarity as the other) than TRIAC triggering voltage thresholds. Since the DIAC prevents any gate current until the triggering voltage has reached a certain, repeatable level in either direction, the firing point of the TRIAC from one half-cycle to the next tends to be more consistent, and the waveform more symmetrical above and below its centerline.
Practically all the characteristics and ratings of SCRs apply equally to TRIACs, except that TRIACs of course are bidirectional (can handle current in both directions). Not much more needs to be said about this device except for an important caveat concerning its terminal designations.
From the equivalent circuit diagram shown earlier, one might think that main terminals 1 and 2 were interchangeable. These are not! Although it is helpful to imagine the TRIAC as being composed of two SCRs joined together, it in fact is constructed from a single piece of semiconducting material, appropriately doped and layered. The actual operating characteristics may differ slightly from that of the equivalent model.
This is made most evident by contrasting two simple circuit designs, one that works and one that doesn’t. The following two circuits are a variation of the lamp dimmer circuit shown earlier, the phase-shifting capacitor and DIAC removed for simplicity’s sake. Although the resulting circuit lacks the fine control ability of the more complex version (with capacitor and DIAC), it does function: (Figure below)
This circuit with the gate to MT2 does function.
Suppose we were to swap the two main terminals of the TRIAC around. According to the equivalent circuit diagram shown earlier in this section, the swap should make no difference. The circuit ought to work: (Figure below)
With the gate swapped to MT1, this circuit does not function.
However, if this circuit is built, it will be found that it does not work! The load will receive no power, the TRIAC refusing to fire at all, no matter how low or high a resistance value the control resistor is set to. The key to successfully triggering a TRIAC is to make sure the gate receives its triggering current from the main terminal 2 side of the circuit (the main terminal on the opposite side of the TRIAC symbol from the gate terminal). Identification of the MT1 and MT2 terminals must be done via the TRIAC’s part number with reference to a data sheet or book.
Review
• A TRIAC acts much like two SCRs connected back-to-back for bidirectional (AC) operation.
• TRIAC controls are more often seen in simple, low-power circuits than complex, high-power circuits. In large power control circuits, multiple SCRs tend to be favored.
• When used to control AC power to a load, TRIACs are often accompanied by DIACs connected in series with their gate terminals. The DIAC helps the TRIAC fire more symmetrically (more consistently from one polarity to another).
• Main terminals 1 and 2 on a TRIAC are not interchangeable.
• To successfully trigger a TRIAC, gate current must come from the main terminal 2 (MT2) side of the circuit!
7.07: Optothyristors
Like bipolar transistors, SCRs and TRIACs are also manufactured as light-sensitive devices, the action of impinging light replacing the function of triggering voltage.
Optically-controlled SCRs are often known by the acronym LASCR, or Light Activated SCR. Its symbol, not surprisingly, looks like Figure below.
Light activated SCR
Optically-controlled TRIACs don’t receive the honor of having their own acronym, but instead are humbly known as opto-TRIACs. Their schematic symbol is shown in Figure below.
Opto-TRIAC
Optothyristors (a general term for either the LASCR or the opto-TRIAC) are commonly found inside sealed “optoisolator” modules.
Unijunction transistor: Although a unijunction transistor is not a thyristor, this device can trigger larger thyristors with a pulse at base B1. A unijunction transistor is composed of a bar of N-type silicon having a P-type connection in the middle. See Figure below(a). The connections at the ends of the bar are known as bases B1 and B2; the P-type mid-point is the emitter. With the emitter disconnected, the total resistance RBBO, a datasheet item, is the sum of RB1 and RB2 as shown in Figure below(b). RBBO ranges from 4-12kΩ for different device types. The intrinsic standoff ratio η is the ratio of RB1 to RBBO. It varies from 0.4 to 0.8 for different devices. The schematic symbol is Figure below(c)
Unijunction transistor: (a) Construction, (b) Model, (c) Symbol
The unijunction emitter current vs voltage characteristic curve (Figure below(a)) shows that as VE increases, current IE increases up to IP at the peak point. Beyond the peak point, current increases as voltage decreases in the negative resistance region. The voltage reaches a minimum at the valley point. The resistance of RB1, the saturation resistance, is lowest at the valley point.
IP and IV are datasheet parameters; for a 2N2647, IP and IV are 2µA and 4mA, respectively. [AMS] VP is the voltage drop across RB1 plus a 0.7V diode drop; see Figure below(b). VV is estimated to be approximately 10% of VBB.
Unijunction transistor: (a) emitter characteristic curve, (b) model for VP .
The relaxation oscillator in Figure below is an application of the unijunction oscillator. RE charges CE until the peak point. The unijunction emitter terminal has no effect on the capacitor until this point is reached. Once the capacitor voltage, VE, reaches the peak voltage point VP, the lowered emitter-base1 E-B1 resistance quickly discharges the capacitor. Once the capacitor discharges below the valley point VV, the E-RB1 resistance reverts back to high resistance, and the capacitor is free to charge again.
Unijunction transistor relaxation oscillator and waveforms. Oscillator drives SCR.
During capacitor discharge through the E-B1 saturation resistance, a pulse may be seen on the external B1 and B2 load resistors, Figure above. The load resistor at B1 needs to be low to not affect the discharge time. The external resistor at B2 is optional. It may be replaced by a short circuit. The approximate frequency is given by 1/f = T = RC. A more accurate expression for frequency is given in Figure above.
The charging resistor RE must fall within certain limits. It must be small enough to allow IP to flow based on the VBB supply less VP. It must be large enough that it cannot supply IV based on the VBB supply less VV. [MHW] The equations and an example for a 2N2647:
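A small Python sketch of these limit and frequency calculations follows. The intrinsic standoff ratio used here (η = 0.65) is an assumed mid-range value, so the numbers are illustrative rather than a reproduction of the book's worked example:

```python
import math

def ujt_re_limits(v_bb, v_p, i_p, v_v, i_v):
    """Bounds on the UJT oscillator's charging resistor: small enough to
    supply the peak current at VP, yet large enough that it cannot
    sustain the valley current at VV."""
    return (v_bb - v_v) / i_v, (v_bb - v_p) / i_p    # (RE_min, RE_max)

def ujt_frequency(r_ohms, c_farads, eta):
    """Oscillation frequency from f = 1 / (R*C*ln(1/(1 - eta)))."""
    return 1.0 / (r_ohms * c_farads * math.log(1.0 / (1.0 - eta)))

# 2N2647 figures from the text: VBB = 10 V, IP = 2 uA, IV = 4 mA, VV ~ 1 V.
# eta = 0.65 is an assumed mid-range value (the text quotes 0.4 to 0.8 for
# different devices), so VP and the limits below are illustrative.
eta = 0.65
v_p = eta * 10.0 + 0.7
re_min, re_max = ujt_re_limits(10.0, v_p, 2e-6, 1.0, 4e-3)
print(f"VP ~ {v_p:.1f} V, RE between {re_min / 1e3:.2f} k and {re_max / 1e6:.1f} M")
print(f"f ~ {ujt_frequency(100e3, 0.1e-6, eta):.0f} Hz for R = 100 k, C = 0.1 uF")
```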
Programmable Unijunction Transistor (PUT): Although the unijunction transistor is listed as obsolete (read expensive if obtainable), the programmable unijunction transistor is alive and well. It is inexpensive and in production. Though it serves a function similar to the unijunction transistor, the PUT is a three terminal thyristor. The PUT shares the four-layer structure typical of thyristors shown in Figure below. Note that the gate, an N-type layer near the anode, is known as an “anode gate”. Moreover, the gate lead on the schematic symbol is attached to the anode end of the symbol.
Programmable unijunction transistor: Characteristic curve, internal construction, schematic symbol.
The characteristic curve for the programmable unijunction transistor in Figure above is similar to that of the unijunction transistor. This is a plot of anode current IA versus anode voltage VA. The gate lead voltage sets, programs, the peak anode voltage VP. As anode current increases, voltage increases up to the peak point. Thereafter, increasing current results in decreasing voltage, down to the valley point.
The PUT equivalent of the unijunction transistor is shown in Figure below. External PUT resistors R1 and R2 replace unijunction transistor internal resistors RB1 and RB2, respectively. These resistors allow the calculation of the intrinsic standoff ratio η.
PUT equivalent of unijunction transistor
Figure below shows the PUT version of the unijunction relaxation oscillator of Figure previous. Resistor R charges the capacitor until the peak point, Figure previous, then heavy conduction moves the operating point down the negative resistance slope to the valley point. A current spike flows through the cathode during capacitor discharge, developing a voltage spike across the cathode resistors. After capacitor discharge, the operating point resets back to the slope up to the peak point.
PUT relaxation oscillator
Problem: What is the range of suitable values for R in Figure above, a relaxation oscillator? The charging resistor must be small enough to supply enough current to raise the anode to VP, the peak point (Figure previous), while charging the capacitor. Once VP is reached, anode voltage decreases as current increases (negative resistance), which moves the operating point to the valley. It is the job of the capacitor to supply the valley current IV. Once it is discharged, the operating point resets back to the upward slope to the peak point. The resistor must be large enough so that it will never supply the high valley current IV. If the charging resistor ever could supply that much current, the resistor would supply the valley current after the capacitor was discharged and the operating point would never reset back to the high resistance condition to the left of the peak point.
We select the same VBB=10V used for the unijunction transistor example. We select values of R1 and R2 so that η is about 2/3, then calculate η and VS. The parallel equivalent of R1 and R2 is RG, which is used only to make selections from Table below. Using the table entry for VS=10V (the closest listed value to our 6.3V), we find VT=0.6V and calculate VP.
We also find IP and IV, the peak and valley currents, respectively in Table below. We still need VV, the valley voltage. We used 10% of VBB= 1V, in the previous unijunction example. Consulting the datasheet, we find the forward voltage VF=0.8V at IF=50mA. The valley current IV=70µA is much less than IF=50mA. Therefore, VV must be less than VF=0.8V. How much less? To be safe we set VV=0V. This will raise the lower limit on the resistor range a little.
Choosing R > 143k guarantees that the operating point can reset from the valley point after capacitor discharge. R < 755k allows charging up to VP at the peak point.
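The same bounds can be computed in a couple of lines of Python. The peak current IP used below is an assumed datasheet figure (the table the text selects it from is not reproduced here), so the upper limit comes out near, but not exactly at, the 755k quoted above:

```python
def put_r_limits(v_bb, v_p, i_p, v_v, i_v):
    """Charging-resistor bounds for the PUT relaxation oscillator: R must
    be small enough to supply the peak current at VP, yet large enough
    that it can never sustain the valley current."""
    return (v_bb - v_v) / i_v, (v_bb - v_p) / i_p    # (R_min, R_max)

# Values from the text: VBB = 10 V, VS = 6.3 V, VT = 0.6 V, VV taken as 0 V,
# IV = 70 uA.  IP = 4 uA is an assumed datasheet figure, since the table
# the text refers to is not reproduced here.
v_p = 6.3 + 0.6
r_min, r_max = put_r_limits(10.0, v_p, 4e-6, 0.0, 70e-6)
print(f"R_min ~ {r_min / 1e3:.0f} k, R_max ~ {r_max / 1e3:.0f} k")
# Roughly 143 k to 775 k, in line with the 143 k - 755 k range quoted above.
```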
Figure below show the PUT relaxation oscillator with the final resistor values. A practical application of a PUT triggering an SCR is also shown. This circuit needs a VBB unfiltered supply (not shown) divided down from the bridge rectifier to reset the relaxation oscillator after each power zero crossing. The variable resistor should have a minimum resistor in series with it to prevent a low pot setting from hanging at the valley point.
PUT relaxation oscillator with component values. PUT drives SCR lamp dimmer.
PUT timing circuits are said to be usable to 10kHz. If a linear ramp is required instead of an exponential ramp, replace the charging resistor with a constant current source such as a FET based constant current diode. A substitute PUT may be built from a PNP and NPN silicon transistor as shown for the SCS equivalent circuit in Figure below by omitting the cathode gate and using the anode gate.
Review
• A unijunction transistor consists of two bases (B1, B2) attached to a resistive bar of silicon, and an emitter in the center. The E-B1 junction has negative resistance properties; it can switch between high and low resistance.
• A PUT (programmable unijunction transistor) is a 3-terminal 4-layer thyristor acting like a unijunction transistor. An external resistor network “programs” η.
• The intrinsic standoff ratio is η=R1/(R1+R2) for a PUT; substitute RB1 and RB2, respectively, for a unijunction transistor. The trigger voltage is determined by η.
• Unijunction transistors and programmable unijunction transistors are applied to oscillators, timing circuits, and thyristor triggering.
If we take the equivalent circuit for an SCR and add another external terminal, connected to the base of the top transistor and the collector of the bottom transistor, we have a device known as a silicon-controlled-switch, or SCS: (Figure below)
The Silicon-Controlled Switch(SCS)
This extra terminal allows more control to be exerted over the device, particularly in the mode of forced commutation, where an external signal forces it to turn off while the main current through the device has not yet fallen below the holding current value. Note that the motor is in the anode gate circuit in Figure below. This is correct, although it doesn’t look right. The anode lead is required to switch the SCS off. Therefore the motor cannot be in series with the anode.
SCS: Motor start/stop circuit, an equivalent circuit with two transistors.
When the “on” pushbutton switch is actuated, the voltage applied between the cathode gate and the cathode forward-biases the lower transistor’s base-emitter junction, turning it on. The top transistor of the SCS is ready to conduct, having been supplied with a current path from its emitter terminal (the SCS’s anode terminal) through resistor R2 to the positive side of the power supply. As in the case of the SCR, both transistors turn on and maintain each other in the “on” mode. When the lower transistor turns on, it conducts the motor’s load current, and the motor starts and runs. The motor may be stopped by interrupting the power supply, as with an SCR, and this is called natural commutation. However, the SCS provides us with another means of turning off: forced commutation by shorting the anode terminal to the cathode. [GE1] If this is done (by actuating the “off” pushbutton switch), the upper transistor within the SCS will lose its emitter current, thus halting current through the base of the lower transistor. When the lower transistor turns off, it breaks the circuit for base current through the top transistor (securing its “off” state) and interrupts current through the motor (making it stop). The SCS will remain in the off condition until such time that the “on” pushbutton switch is re-actuated.
Review
• A silicon-controlled switch, or SCS, is essentially an SCR with an extra gate terminal.
• Typically, the load current through an SCS is carried by the anode gate and cathode terminals, with the cathode gate and anode terminals sufficing as control leads.
• An SCS is turned on by applying a positive voltage between the cathode gate and cathode terminals. It may be turned off (forced commutation) by applying a negative voltage between the anode and cathodeterminals, or simply by shorting those two terminals together. The anode terminal must be kept positive with respect to the cathode in order for the SCS to latch.
7.10: Field-effect-controlled Thyristors
Two relatively recent technologies designed to reduce the “driving” (gate trigger current) requirements of classic thyristor devices are the MOS-gated thyristor and the MOS Controlled Thyristor, or MCT.
The MOS-gated thyristor uses a MOSFET to initiate conduction through the upper (PNP) transistor of a standard thyristor structure, thus triggering the device. Since a MOSFET requires negligible current to “drive” (cause it to saturate), this makes the thyristor as a whole very easy to trigger: (Figure below)
MOS-gated thyristor equivalent circuit
Given the fact that ordinary SCRs are quite easy to “drive” as it is, the practical advantage of using an even more sensitive device (a MOSFET) to initiate triggering is debatable. Also, placing a MOSFET at the gate input of the thyristor now makes it impossible to turn it off by a reverse-triggering signal. Only low-current dropout can make this device stop conducting after it has been latched.
A device of arguably greater value would be a fully-controllable thyristor, whereby a small gate signal could both trigger the thyristor and force it to turn off. Such a device does exist, and it is called the MOS Controlled Thyristor, or MCT. It uses a pair of MOSFETs connected to a common gate terminal, one to trigger the thyristor and the other to “untrigger” it: (Figure below)
MOS-controlled thyristor (MCT) equivalent circuit
A positive gate voltage (with respect to the cathode) turns on the upper (N-channel) MOSFET, allowing base current through the upper (PNP) transistor, which latches the transistor pair in an “on” state. Once both transistors are fully latched, there will be little voltage dropped between anode and cathode, and the thyristor will remain latched as long as the controlled current exceeds the minimum (holding) current value. However, if a negative gate voltage is applied (with respect to the anode, which is at nearly the same voltage as the cathode in the latched state), the lower MOSFET will turn on and “short” between the lower (NPN) transistor’s base and emitter terminals, thus forcing it into cutoff. Once the NPN transistor cuts off, the PNP transistor will drop out of conduction, and the whole thyristor turns off. Gate voltage has full control over conduction through the MCT: to turn it on and to turn it off.
This device is still a thyristor, though. If zero voltage is applied between gate and cathode, neither MOSFET will turn on. Consequently, the bipolar transistor pair will remain in whatever state it was last in (hysteresis). So, a brief positive pulse to the gate turns the MCT on, a brief negative pulse forces it off, and no applied gate voltage lets it remain in whatever state it is already in. In essence, the MCT is a latching version of the IGBT (Insulated Gate Bipolar Transistor).
Review
• A MOS-gated thyristor uses an N-channel MOSFET to trigger a thyristor, resulting in an extremely low gate current requirement.
• A MOS Controlled Thyristor, or MCT, uses two MOSFETs to exert full control over the thyristor. A positive gate voltage triggers the device; a negative gate voltage forces it to turn off. Zero gate voltage allows the thyristor to remain in whatever state it was previously in (off, or latched on).
What is an Operational Amplifier (Op-amp)?
Operational Amplifiers, also known as Op-amps, are basically voltage amplifying devices designed to be used with external components such as resistors and capacitors connected between their input and output terminals. They are essentially a core part of analog devices. Feedback components like these are used to determine the operation of the amplifier. Because the amplifier can perform many different operations (resistive, capacitive, or both), it is given the name Operational Amplifier.
Example of an Op-amp in schematics.
Op-amps are linear devices that are ideal for DC amplification and are often used in signal conditioning, filtering, and other mathematical operations (addition, subtraction, integration, and differentiation).
The operational amplifier is arguably the most useful single device in analog electronic circuitry. With only a handful of external components, it can be made to perform a wide variety of analog signal processing tasks. It is also quite affordable, most general-purpose amplifiers selling for under a dollar apiece. Modern designs have been engineered with durability in mind as well: several “op-amps” are manufactured that can sustain direct short-circuits on their outputs without damage.
One key to the usefulness of these little circuits is in the engineering principle of feedback, particularly negative feedback, which constitutes the foundation of almost all automatic control processes. The principles presented in this section extend well beyond the immediate scope of electronics. It is well worth the electronics student’s time to learn these principles and learn them well.
Further Reading
Operational amplifiers, or opamps, are one of the most fundamental building blocks an electrical engineer can employ in circuit designs. There are a ton of useful applications for opamps. This article will go over just a few basic circuits you can implement in your designs!
The Basics: Voltage Followers
The first circuit is so simple that it almost looks a little crazy:
Figure 1: Voltage Follower
This circuit is referred to as a voltage follower, and it behaves like this:
Vout = Vin
On its face, this isn't super useful. Why would I pay a few extra cents for an opamp when it looks like a wire would do the same job between two components? The answer becomes clear once you know a few basic things about opamps. When you start to break down a circuit with opamps, two basic principles should be at the forefront of your mind:
1. The opamp's input terminals, V+ and V-, draw no current.
2. The voltage of V+ and V- are always equal. This property is sometimes called the virtual short approximation.
Looking at the first rule, we can see that our voltage follower circuit is not drawing any current at the input terminal connected to V+. This is really just a way of saying that V+ has a really high impedance - in fact, since we're talking about ideal opamps, we tend to just say that it has infinite input impedance. In practice, this has some pretty neat implications: if V+ isn't drawing any current, then it means that we could connect Vin to any node in any circuit and measure it without modifying the original circuit. We wouldn't have to go through the tedious rigamarole of solving a bunch of new equations for node voltages and mesh currents, because we wouldn't be disturbing either of them by adding a voltage follower. Pretty cool, huh?
(Note: Like most rules, there are some exceptions to these opamp rules. For the duration of this article, we're going to ignore these exceptions - they would get in the way of analyzing our voltage follower.)
Instead of taking a direct measurement at Vin in our hypothetical circuit, we'd measure instead at Vout. This is the second rule of opamps in effect - the voltages of V+ and V- are always considered to be equal. Since we've connected V- directly to the opamp's output, we can extend this a step further, and say that Vout = V- = V+ due to the virtual short approximation.
Using voltage followers provides a really easy way to interface different circuits that have different impedances. Cool! What else can we do with opamps?
Changing Gain - An Inverting Amplifier
As their name suggests, opamps are amplifiers. They can amplify signals by a certain ratio of input to output. This ratio is commonly referred to as the gain of an operational amplifier. In a perfect world, an opamp's gain would be infinite - so high that it could amplify any signal level to any other signal level. This isn't the case in the real world, but we'll consider it a fact while we analyze the next circuit: an inverting amplifier.
Figure 2: Inverting Amplifier
Let's walk through this circuit's operation step by step. First, let's apply our two opamp rules to figure out some node voltages of this circuit. The simplest one to apply is the virtual short approximation, where V+ and V- are always at the same voltage. We can see that V+ is tied to ground; therefore, V- must also be at ground. What about the current going into and out of node V-? By Kirchhoff's current law, we know that the sum of all currents at that node must be as follows:

iRin + iRf + iV- = 0
Initially, this looks like it might take some work to solve, as this equation has three unknowns. But does it? If you recall the opamp rules stated earlier, you'll see that we get one term of this equation for free: opamp inputs don't draw any current! Therefore, we know that iV- is equal to zero. We can then rearrange that equation into the following form:

iRin = -iRf
Since V- is tied to ground by the virtual short, Ohm's law allows us to substitute out these currents as voltages and resistances:

Vin / Rin = -(Vout / Rf)
Which, with a little algebra, gives us the inverting amplifier's gain equation:

Vout = -(Rf / Rin) × Vin
It's pretty clear why this circuit is useful - it allows you to apply a linear gain between input and output by choosing the ratio Rf/Rin to be any value you want. The circuit also has the added bonus of giving you a lot of control over its input impedance - since you're free to choose the resistor value of Rin, you can make it as high or as low as needed to suit the source impedance you need to match it to!
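To make the ratio relationship concrete, here is a small Python sketch (an illustration added to this discussion, not part of the original article) that evaluates the ideal gain equation Vout = -(Rf/Rin) × Vin for a few arbitrary resistor values:

```python
def inverting_amp_vout(v_in, r_f, r_in):
    """Ideal inverting amplifier: Vout = -(Rf / Rin) * Vin."""
    return -(r_f / r_in) * v_in

# A few arbitrary example cases (resistances in ohms, voltages in volts)
for r_f, r_in, v_in in [(10e3, 10e3, 1.0),    # unity gain, inverted
                        (20e3, 10e3, 1.0),    # gain of -2
                        (4.7e3, 47e3, 2.5)]:  # gain magnitude less than 1
    print(f"Rf={r_f:>8.0f}  Rin={r_in:>8.0f}  Vin={v_in:4.1f}  "
          f"Vout={inverting_amp_vout(v_in, r_f, r_in):+.2f} V")
```

Note that the last example produces a gain magnitude below 1, something the noninverting configuration described shortly cannot do.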
Why do we need a resistor network to achieve this behavior? To understand that, we'll have to understand a little bit more of how an opamp works. An opamp is a type of voltage amplifier. In the ideal case, an opamp provides infinite gain - it can amplify any voltage to any other voltage level. We can scale the opamp's infinite gain by using a resistor network that connects the input node, V-, and the output node. By connecting the opamp output to an input, we're using a process called feedback to adjust the output voltage to a desired level. Feedback is a really important EE concept, and complex enough to warrant a whole article dedicated to the topic. For now, it's enough to understand the basic principle as it applies to opamps: by connecting the output to an input, you can modify a circuit's behavior in really useful ways.
An Inverse of an Inverter?
Let's see what happens when we start fooling around with the basic inverting amplifier design. What happens if we move the input signal over to the other input pin, V+, and tie the free end of the resistor network to ground instead?
Figure 3: What does this circuit do?
We can go through the same series of steps as we did before with the inverting amplifier, but we start substituting in voltages at the V- node. Due to the virtual short approximation, V- = V+ = Vin. As a result, we can write the following equation for the current going through Rg:

iRg = V- / Rg = Vin / Rg
Since we know that the opamp isn't drawing any current, we know that the current through Rg and Rf must be equal, which allows us to write this equation:

V- / Rg = (Vout - V-) / Rf
The virtual short approximation lets us get rid of V-, since we know it is equal to Vin:

Vin / Rg = (Vout - Vin) / Rf
And with a bit of algebraic rearranging, we get the following:

Vout = Vin × (1 + Rf / Rg)
Unlike the previous circuit, the gain of this circuit is positive. As a result, this circuit is called a noninverting amplifier: it provides a linear gain with a positive sign. Unlike the inverting amplifier, however, it cannot provide any gain less than unity - it's impossible to set the feedback network any lower! On the other hand, this circuit does provide one thing that the inverting amplifier does not. Since the gain is positive, the output is in phase with the input. The inverting amplifier, by virtue of applying a negative gain, shifts the output signal by 180 degrees. The noninverting amplifier doesn't do this!
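For comparison, here is a similar added sketch (again an illustration, not from the original article) that evaluates the ideal noninverting gain, AV = 1 + Rf/Rg, and shows that it bottoms out at unity:

```python
def noninverting_gain(r_f, r_g):
    """Ideal noninverting amplifier gain: Av = 1 + Rf / Rg."""
    return 1.0 + r_f / r_g

# Even with Rf = 0 (a plain voltage follower), the gain is exactly 1.
for r_f, r_g in [(0.0, 10e3), (10e3, 10e3), (47e3, 4.7e3)]:
    print(f"Rf={r_f:>8.0f}  Rg={r_g:>8.0f}  Av={noninverting_gain(r_f, r_g):.2f}")
```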
Wrapping Up
Opamps are really versatile circuit components. This article barely scratches the surface of what can be done with them - the range of functionality that they can bring to bear is enormous. What are some of the other circuits you can make with them? Have any cool circuits you've built with opamps? Leave us a note in the comment section and tell us about it!
For ease of drawing complex circuit diagrams, electronic amplifiers are often symbolized by a simple triangle shape, where the internal components are not individually represented. This symbology is very handy for cases where an amplifier’s construction is irrelevant to the greater function of the overall circuit, and it is worthy of familiarization:
The +V and -V connections denote the positive and negative sides of the DC power supply, respectively. The input and output voltage connections are shown as single conductors, because it is assumed that all signal voltages are referenced to a common connection in the circuit called ground. Often (but not always!), one pole of the DC power supply, either positive or negative, is that ground reference point. A practical amplifier circuit (showing the input voltage source, load resistance, and power supply) might look like this:
Without having to analyze the actual transistor design of the amplifier, you can readily discern the whole circuit’s function: to take an input signal (Vin), amplify it, and drive a load resistance (Rload). To complete the above schematic, it would be good to specify the gains of that amplifier (AV, AI, AP) and the Q (bias) point for any needed mathematical analysis.
If it is necessary for an amplifier to be able to output true AC voltage (reversing polarity) to the load, a split DC power supply may be used, whereby the ground point is electrically “centered” between +V and -V. Sometimes the split power supply configuration is referred to as a dual power supply.
The amplifier is still being supplied with 30 volts overall, but with the split voltage DC power supply, the output voltage across the load resistor can now swing from a theoretical maximum of +15 volts to -15 volts, instead of +30 volts to 0 volts. This is an easy way to get true alternating current (AC) output from an amplifier without resorting to capacitive or inductive (transformer) coupling on the output. The peak-to-peak amplitude of this amplifier’s output between cutoff and saturation remains unchanged.
By signifying a transistor amplifier within a larger circuit with a triangle symbol, we ease the task of studying and analyzing more complex amplifiers and circuits. One of these more complex amplifier types that we’ll be studying is called the differential amplifier. Unlike normal amplifiers, which amplify a single input signal (often called single-ended amplifiers), differential amplifiers amplify the voltage difference between two input signals. Using the simplified triangle amplifier symbol, a differential amplifier looks like this:
The two input leads can be seen on the left-hand side of the triangular amplifier symbol, the output lead on the right-hand side, and the +V and -V power supply leads on top and bottom. As with the other example, all voltages are referenced to the circuit’s ground point. Notice that one input lead is marked with a (-) and the other is marked with a (+). Because a differential amplifier amplifies the difference in voltage between the two inputs, each input influences the output voltage in opposite ways. Consider the following table of input/output voltages for a differential amplifier with a voltage gain of 4:
An increasingly positive voltage on the (+) input tends to drive the output voltage more positive, and an increasingly positive voltage on the (-) input tends to drive the output voltage more negative. Likewise, an increasingly negative voltage on the (+) input tends to drive the output negative as well, and an increasingly negative voltage on the (-) input does just the opposite. Because of this relationship between inputs and polarities, the (-) input is commonly referred to as the inverting input and the (+) as the noninverting input. It may be helpful to think of a differential amplifier as a variable voltage source controlled by a sensitive voltmeter, as such:
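As an added numerical illustration (not part of the original text), the ideal relationship is simply Vout = AV(V+ - V-). The gain of 4 below matches the value used in the text; the input voltages are arbitrary:

```python
def diff_amp_vout(v_plus, v_minus, gain=4.0):
    """Ideal differential amplifier: Vout = Av * (V+ - V-)."""
    return gain * (v_plus - v_minus)

# Arbitrary example inputs (volts)
for v_plus, v_minus in [(1.0, 0.0), (0.0, 1.0), (-0.5, 0.25), (2.0, 2.0)]:
    print(f"V+={v_plus:+5.2f}  V-={v_minus:+5.2f}  "
          f"Vout={diff_amp_vout(v_plus, v_minus):+6.2f} V")
```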
Bear in mind that the above illustration is only a model to aid in understanding the behavior of a differential amplifier. It is not a realistic schematic of its actual design. The “G” symbol represents a galvanometer, a sensitive voltmeter movement. The potentiometer connected between +V and -V provides a variable voltage at the output pin (with reference to one side of the DC power supply), that variable voltage set by the reading of the galvanometer. It must be understood that any load powered by the output of a differential amplifier gets its current from the DC power source (battery), not the input signal. The input signal (to the galvanometer) merely controls the output. This concept may at first be confusing to students new to amplifiers. With all these polarities and polarity markings (- and +) around, it's easy to get confused and not know what the output of a differential amplifier will be. To address this potential confusion, here’s a simple rule to remember:
When the polarity of the differential voltage matches the markings for inverting and noninverting inputs, the output will be positive. When the polarity of the differential voltage clashes with the input markings, the output will be negative. This bears some similarity to the mathematical sign displayed by digital voltmeters based on input voltage polarity. If the red test lead of the voltmeter (often called the “positive” lead because of the color red’s popular association with the positive side of a power supply in electronic wiring) is more positive than the black, the meter will display a positive voltage figure, and vice versa:
Just as a voltmeter will only display the voltage between its two test leads, an ideal differential amplifier only amplifies the potential difference between its two input connections, not the voltage between any one of those connections and ground. The output polarity of a differential amplifier, just like the signed indication of a digital voltmeter, depends on the relative polarities of the differential voltage between the two input connections.
If the input voltages to this amplifier represented mathematical quantities (as is the case within analog computer circuitry), or physical process measurements (as is the case within analog electronic instrumentation circuitry), you can see how a device such as a differential amplifier could be very useful. We could use it to compare two quantities to see which is greater (by the polarity of the output voltage), or perhaps we could compare the difference between two quantities (such as the level of liquid in two tanks) and flag an alarm (based on the absolute value of the amplifier output) if the difference became too great. In basic automatic control circuitry, the quantity being controlled (called the process variable) is compared with a target value (called the setpoint), and decisions are made as to how to act based on the discrepancy between these two values. The first step in electronically controlling such a scheme is to amplify the difference between the process variable and the setpoint with a differential amplifier. In simple controller designs, the output of this differential amplifier can be directly utilized to drive the final control element (such as a valve) and keep the process reasonably close to setpoint.
Review
• A “shorthand” symbol for an electronic amplifier is a triangle, the wide end signifying the input side and the narrow end signifying the output. Power supply lines are often omitted in the drawing for simplicity.
• To facilitate true AC output from an amplifier, we can use what is called a split or dual power supply, with two DC voltage sources connected in series with the middle point grounded, giving a positive voltage to ground (+V) and a negative voltage to ground (-V). Split power supplies like this are frequently used in differential amplifier circuits.
• Most amplifiers have one input and one output. Differential amplifiers have two inputs and one output, the output signal being proportional to the difference in signals between the two inputs.
• The voltage output of a differential amplifier is determined by the following equation: Vout = AV(Vnoninv - Vinv)
Long before the advent of digital electronic technology, computers were built to electronically perform calculations by employing voltages and currents to represent numerical quantities. This was especially useful for the simulation of physical processes. A variable voltage, for instance, might represent velocity or force in a physical system. Through the use of resistive voltage dividers and voltage amplifiers, the mathematical operations of division and multiplication could be easily performed on these signals.
The reactive properties of capacitors and inductors lend themselves well to the simulation of variables related by calculus functions. Remember how the current through a capacitor was a function of the voltage’s rate of change, and how that rate of change was designated in calculus as the derivative? Well, if voltage across a capacitor were made to represent the velocity of an object, the current through the capacitor would represent the force required to accelerate or decelerate that object, the capacitor’s capacitance representing the object’s mass:
This analog electronic computation of the calculus derivative function is technically known as differentiation, and it is a natural function of a capacitor’s current in relation to the voltage applied across it. Note that this circuit requires no “programming” to perform this relatively advanced mathematical function as a digital computer would.
Electronic circuits are very easy and inexpensive to create compared to complex physical systems, so this kind of analog electronic simulation was widely used in the research and development of mechanical systems. For realistic simulation, though, amplifier circuits of high accuracy and easy configurability were needed in these early computers.
It was found in the course of analog computer design that differential amplifiers with extremely high voltage gains met these requirements of accuracy and configurability better than single-ended amplifiers with custom-designed gains. Using simple components connected to the inputs and output of the high-gain differential amplifier, virtually any gain and any function could be obtained from the circuit, overall, without adjusting or modifying the internal circuitry of the amplifier itself. These high-gain differential amplifiers came to be known as operational amplifiers, or op-amps, because of their application in analog computers’ mathematical operations.
Modern op-amps, like the popular model 741, are high-performance, inexpensive integrated circuits. Their input impedances are quite high, the inputs drawing currents in the range of half a microamp (maximum) for the 741, and far less for op-amps utilizing field-effect input transistors. Output impedance is typically quite low, about 75 Ω for the model 741, and many models have built-in output short circuit protection, meaning that their outputs can be directly shorted to ground without causing harm to the internal circuitry. With direct coupling between op-amps’ internal transistor stages, they can amplify DC signals just as well as AC (up to certain maximum voltage-risetime limits). It would cost far more in money and time to design a comparable discrete-transistor amplifier circuit to match that kind of performance, unless high power capability was required. For these reasons, op-amps have all but obsoleted discrete-transistor signal amplifiers in many applications.
The following diagram shows the pin connections for single op-amps (741 included) when housed in an 8-pin DIP (Dual Inline Package) integrated circuit:
Some models of op-amp come two to a package, including the popular models TL082 and 1458. These are called “dual” units, and are typically housed in an 8-pin DIP package as well, with the following pin connections:
Operational amplifiers are also available four to a package, usually in 14-pin DIP arrangements. Unfortunately, pin assignments aren’t as standard for these “quad” op-amps as they are for the “dual” or single units. Consult the manufacturer datasheet(s) for details.
Practical operational amplifier voltage gains are in the range of 200,000 or more, which makes them almost useless as an analog differential amplifier by themselves. For an op-amp with a voltage gain (AV) of 200,000 and a maximum output voltage swing of +15V/-15V, all it would take is a differential input voltage of 75 µV (microvolts) to drive it to saturation or cutoff! Before we take a look at how external components are used to bring the gain down to a reasonable level, let’s investigate applications for the “bare” op-amp by itself.
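That 75 µV figure is simply the maximum output swing divided by the open-loop gain. A quick added check in Python, assuming the same ideal numbers quoted above:

```python
open_loop_gain = 200_000    # Av quoted for a typical op-amp
max_output_swing = 15.0     # volts, one side of a +/-15 V supply

# Differential input voltage needed to drive the output all the way to saturation
v_diff = max_output_swing / open_loop_gain
print(f"{v_diff * 1e6:.0f} microvolts")   # prints: 75 microvolts
```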
One application is called the comparator. For all practical purposes, we can say that the output of an op-amp will be saturated fully positive if the (+) input is more positive than the (-) input, and saturated fully negative if the (+) input is less positive than the (-) input. In other words, an op-amp’s extremely high voltage gain makes it useful as a device to compare two voltages and change output voltage states when one input exceeds the other in magnitude.
In the above circuit, we have an op-amp connected as a comparator, comparing the input voltage with a reference voltage set by the potentiometer (R1). If Vin drops below the voltage set by R1, the op-amp’s output will saturate to +V, thereby lighting up the LED. Otherwise, if Vin is above the reference voltage, the LED will remain off. If Vin is a voltage signal produced by a measuring instrument, this comparator circuit could function as a “low” alarm, with the trip-point set by R1. Instead of an LED, the op-amp output could drive a relay, a transistor, an SCR, or any other device capable of switching power to a load such as a solenoid valve, to take action in the event of a low alarm.
Another application for the comparator circuit shown is a square-wave converter. Suppose that the input voltage applied to the inverting (-) input was an AC sine wave rather than a stable DC voltage. In that case, the output voltage would transition between opposing states of saturation whenever the input voltage was equal to the reference voltage produced by the potentiometer. The result would be a square wave:
Adjustments to the potentiometer setting would change the reference voltage applied to the noninverting (+) input, which would change the points at which the sine wave crosses that reference voltage, changing the on/off times, or duty cycle, of the square wave:
It should be evident that the AC input voltage would not have to be a sine wave in particular for this circuit to perform the same function. The input voltage could be a triangle wave, sawtooth wave, or any other sort of wave that ramped smoothly from positive to negative to positive again. This sort of comparator circuit is very useful for creating square waves of varying duty cycle. This technique is sometimes referred to as pulse-width modulation, or PWM (varying, or modulating a waveform according to a controlling signal, in this case the signal produced by the potentiometer).
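The duty-cycle behavior can be sketched numerically (an added illustration, not from the original text) by comparing a sine wave against an adjustable DC reference and reporting the fraction of each period the sine input spends above that reference; the reference values chosen are arbitrary:

```python
import math

def duty_cycle(v_ref, samples=10_000):
    """Fraction of one sine-wave period for which the sine input exceeds Vref,
    i.e. the fraction of time the comparator output sits at one saturation rail."""
    high = sum(1 for n in range(samples)
               if math.sin(2 * math.pi * n / samples) > v_ref)
    return high / samples

for v_ref in (-0.5, 0.0, 0.5):   # DC reference from the potentiometer (volts)
    print(f"Vref={v_ref:+.1f} V  duty cycle ~ {duty_cycle(v_ref):.1%}")
```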
Another comparator application is that of the bargraph driver. If we had several op-amps connected as comparators, each with its own reference voltage connected to the inverting input, but each one monitoring the same voltage signal on their noninverting inputs, we could build a bargraph-style meter such as what is commonly seen on the face of stereo tuners and graphic equalizers. As the signal voltage (representing radio signal strength or audio sound level) increased, each comparator would “turn on” in sequence and send power to its respective LED. With each comparator switching “on” at a different level of audio sound, the number of LED’s illuminated would indicate how strong the signal was.
In the circuit shown above, LED1 would be the first to light up as the input voltage increased in a positive direction. As the input voltage continued to increase, the other LED’s would illuminate in succession, until all were lit.
This very same technology is used in some analog-to-digital signal converters, namely the flash converter, to translate an analog signal quantity into a series of on/off voltages representing a digital number.
Review
• A triangle shape is the generic symbol for an amplifier circuit, the wide end signifying the input and the narrow end signifying the output.
• Unless otherwise specified, all voltages in amplifier circuits are referenced to a common ground point, usually connected to one terminal of the power supply. This way, we can speak of a certain amount of voltage being “on” a single wire, while realizing that voltage is always measured between two points.
• A differential amplifier is one amplifying the voltage difference between two signal inputs. In such a circuit, one input tends to drive the output voltage to the same polarity as the input signal, while the other input does just the opposite. Consequently, the first input is called the noninverting (+) input and the second is called the inverting (-) input.
• An operational amplifier (or op-amp for short) is a differential amplifier with an extremely high voltage gain (AV = 200,000 or more). Its name hails from its original use in analog computer circuitry (performing mathematical operations).
• Op-amps typically have very high input impedances and fairly low output impedances.
• Sometimes op-amps are used as signal comparators, operating in full cutoff or saturation mode depending on which input (inverting or noninverting) has the greatest voltage. Comparators are useful in detecting “greater-than” signal conditions (comparing one to the other).
• One comparator application is called the pulse-width modulator, and is made by comparing a sine-wave AC signal against a DC reference voltage. As the DC reference voltage is adjusted, the square-wave output of the comparator changes its duty cycle (positive versus negative times). Thus, the DC reference voltage controls, or modulates the pulse width of the output voltage.
If we connect the output of an op-amp to its inverting input and apply a voltage signal to the noninverting input, we find that the output voltage of the op-amp closely follows that input voltage (I’ve neglected to draw in the power supply, +V/-V wires, and ground symbol for simplicity):
As Vin increases, Vout will increase in accordance with the differential gain. However, as Vout increases, that output voltage is fed back to the inverting input, thereby acting to decrease the voltage differential between inputs, which acts to bring the output down. What will happen for any given voltage input is that the op-amp will output a voltage very nearly equal to Vin, but just low enough so that there’s enough voltage difference left between Vin and the (-) input to be amplified to generate the output voltage.
The circuit will quickly reach a point of stability (known as equilibrium in physics), where the output voltage is just the right amount to maintain the right amount of differential, which in turn produces the right amount of output voltage. Taking the op-amp’s output voltage and coupling it to the inverting input is a technique known as negative feedback, and it is the key to having a self-stabilizing system (this is true not only of op-amps, but of any dynamic system in general). This stability gives the op-amp the capacity to work in its linear (active) mode, as opposed to merely being saturated fully “on” or “off” as it was when used as a comparator, with no feedback at all.
Because the op-amp’s gain is so high, the voltage on the inverting input can be maintained almost equal to Vin. Let’s say that our op-amp has a differential voltage gain of 200,000. If Vin equals 6 volts, the output voltage will be 5.999970000149999 volts. This creates just enough differential voltage (6 volts - 5.999970000149999 volts = 29.99985 µV) to cause 5.999970000149999 volts to be manifested at the output terminal, and the system holds there in balance. As you can see, 29.99985 µV is not a lot of differential, so for practical calculations, we can assume that the differential voltage between the two input wires is held by negative feedback exactly at 0 volts.
One great advantage to using an op-amp with negative feedback is that the actual voltage gain of the op-amp doesn’t matter, so long as it's very large. If the op-amp’s differential gain were 250,000 instead of 200,000, all it would mean is that the output voltage would hold just a little closer to Vin (less differential voltage needed between inputs to generate the required output). In the circuit just illustrated, the output voltage would still be (for all practical purposes) equal to the non-inverting input voltage. Op-amp gains, therefore, do not have to be precisely set by the factory in order for the circuit designer to build an amplifier circuit with precise gain. Negative feedback makes the system self-correcting. The above circuit as a whole will simply follow the input voltage with a stable gain of 1.
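The equilibrium numbers quoted above follow from solving the feedback relationship Vout = A(Vin - Vout) for Vout, which gives Vout = (A × Vin) / (1 + A). Here is a short added sketch, assuming the same open-loop gain figures used in the text:

```python
def follower_output(v_in, open_loop_gain):
    """Voltage follower at equilibrium: Vout = A*Vin / (1 + A)."""
    return open_loop_gain * v_in / (1.0 + open_loop_gain)

for gain in (200_000, 250_000):
    v_out = follower_output(6.0, gain)
    print(f"A={gain:,}  Vout={v_out:.15f} V  "
          f"differential={(6.0 - v_out) * 1e6:.5f} uV")
```

The higher gain simply shrinks the leftover differential voltage; the output stays essentially equal to the 6-volt input either way.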
Going back to our differential amplifier model, we can think of the operational amplifier as being a variable voltage source controlled by an extremely sensitive null detector, the kind of meter movement or other sensitive measurement device used in bridge circuits to detect a condition of balance (zero volts). The “potentiometer” inside the op-amp creating the variable voltage will move to whatever position it must to “balance” the inverting and noninverting input voltages so that the “null detector” has zero voltage across it:
As the “potentiometer” will move to provide an output voltage necessary to satisfy the “null detector” at an “indication” of zero volts, the output voltage becomes equal to the input voltage: in this case, 6 volts. If the input voltage changes at all, the “potentiometer” inside the op-amp will change position to hold the “null detector” in balance (indicating zero volts), resulting in an output voltage approximately equal to the input voltage at all times.
This will hold true within the range of voltages that the op-amp can output. With a power supply of +15V/-15V, and an ideal amplifier that can swing its output voltage just as far, it will faithfully “follow” the input voltage between the limits of +15 volts and -15 volts. For this reason, the above circuit is known as a voltage follower. Like its one-transistor counterpart, the common-collector (“emitter-follower”) amplifier, it has a voltage gain of 1, a high input impedance, a low output impedance, and a high current gain. Voltage followers are also known as voltage buffers, and are used to boost the current-sourcing ability of voltage signals too weak (too high of source impedance) to directly drive a load. The op-amp model shown in the last illustration depicts how the output voltage is essentially isolated from the input voltage, so that current on the output pin is not supplied by the input voltage source at all, but rather from the power supply powering the op-amp.
It should be mentioned that many op-amps cannot swing their output voltages exactly to +V/-V power supply rail voltages. The model 741 is one of those that cannot: when saturated, its output voltage peaks within about one volt of the +V power supply voltage and within about 2 volts of the -V power supply voltage. Therefore, with a split power supply of +15/-15 volts, a 741 op-amp’s output may go as high as +14 volts or as low as -13 volts (approximately), but no further. This is due to its bipolar transistor design. These two voltage limits are known as the positive saturation voltage and negative saturation voltage, respectively. Other op-amps, such as the model 3130 with field-effect transistors in the final output stage, have the ability to swing their output voltages within millivolts of either power supply rail voltage. Consequently, their positive and negative saturation voltages are practically equal to the supply voltages.
Review
• Connecting the output of an op-amp to its inverting (-) input is called negative feedback. This term can be broadly applied to any dynamic system where the output signal is “fed back” to the input somehow so as to reach a point of equilibrium (balance).
• When the output of an op-amp is directly connected to its inverting (-) input, a voltage follower will be created. Whatever signal voltage is impressed upon the noninverting (+) input will be seen on the output.
• An op-amp with negative feedback will try to drive its output voltage to whatever level necessary so that the differential voltage between the two inputs is practically zero. The higher the op-amp differential gain, the closer that differential voltage will be to zero.
• Some op-amps cannot produce an output voltage equal to their supply voltage when saturated. The model 741 is one of these. The upper and lower limits of an op-amp’s output voltage swing are known as positive saturation voltage and negative saturation voltage, respectively.
If we add a voltage divider to the negative feedback wiring so that only a fraction of the output voltage is fed back to the inverting input instead of the full amount, the output voltage will be a multiple of the input voltage (please bear in mind that the power supply connections to the op-amp have been omitted once again for simplicity’s sake):
If R1 and R2 are both equal and Vin is 6 volts, the op-amp will output whatever voltage is needed to drop 6 volts across R1 (to make the inverting input voltage equal to 6 volts, as well, keeping the voltage difference between the two inputs equal to zero). With the 2:1 voltage divider of R1 and R2, this will take 12 volts at the output of the op-amp to accomplish.
Another way of analyzing this circuit is to start by calculating the magnitude and direction of current through R1, knowing the voltage on either side (and therefore, by subtraction, the voltage across R1), and R1‘s resistance. Since the left-hand side of R1 is connected to ground (0 volts) and the right-hand side is at a potential of 6 volts (due to the negative feedback holding that point equal to Vin), we can see that we have 6 volts across R1. This gives us 6 mA of current through R1 from left to right. Because we know that both inputs of the op-amp have extremely high impedance, we can safely assume they won’t add or subtract any current through the divider. In other words, we can treat R1 and R2 as being in series with each other: all of the electrons flowing through R1 must flow through R2. Knowing the current through R2 and the resistance of R2, we can calculate the voltage across R2 (6 volts), and its polarity. Counting up voltages from ground (0 volts) to the right-hand side of R2, we arrive at 12 volts on the output.
Upon examining the last illustration, one might wonder, “where does that 6 mA of current go?” The last illustration doesn’t show the entire current path, but in reality it comes from the negative side of the DC power supply, through ground, through R1, through R2, through the output pin of the op-amp, and then back to the positive side of the DC power supply through the output transistor(s) of the op-amp. Using the null detector/potentiometer model of the op-amp, the current path looks like this:
The 6 volt signal source does not have to supply any current for the circuit: it merely commands the op-amp to balance voltage between the inverting (-) and noninverting (+) input pins, and in so doing produce an output voltage that is twice the input due to the dividing effect of the two 1 kΩ resistors.
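That step-by-step analysis can be written out numerically. Here is a short added sketch, assuming ideal op-amp behavior and the same 1 kΩ / 1 kΩ, 6-volt example described above:

```python
R1, R2 = 1e3, 1e3   # ohms
v_in = 6.0          # volts, applied to the noninverting input

v_inverting = v_in              # negative feedback holds the (-) input at Vin
i_r1 = v_inverting / R1         # 6 volts across R1 gives the current through it
v_r2 = i_r1 * R2                # the same current must flow through R2
v_out = v_inverting + v_r2      # counting up from ground: across R1, then R2

print(f"I(R1) = {i_r1 * 1e3:.1f} mA,  Vout = {v_out:.1f} V")   # 6.0 mA, 12.0 V
```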
We can change the voltage gain of this circuit, overall, just by adjusting the values of R1 and R2 (changing the ratio of output voltage that is fed back to the inverting input). Gain can be calculated by the following formula:

AV = (R2 / R1) + 1
Note that the voltage gain for this design of amplifier circuit can never be less than 1. If we were to lower R2 to a value of zero ohms, our circuit would be essentially identical to the voltage follower, with the output directly connected to the inverting input. Since the voltage follower has a gain of 1, this sets the lower gain limit of the noninverting amplifier. However, the gain can be increased far beyond 1, by increasing R2 in proportion to R1.
Also note that the polarity of the output matches that of the input, just as with a voltage follower. A positive input voltage results in a positive output voltage, and vice versa (with respect to ground). For this reason, this circuit is referred to as a noninverting amplifier.
Just as with the voltage follower, we see that the differential gain of the op-amp is irrelevant, so long as it's very high. The voltages and currents in this circuit would hardly change at all if the op-amp’s voltage gain were 250,000 instead of 200,000. This stands as a stark contrast to single-transistor amplifier circuit designs, where the Beta of the individual transistor greatly influenced the overall gains of the amplifier. With negative feedback, we have a self-correcting system that amplifies voltage according to the ratios set by the feedback resistors, not the gains internal to the op-amp.
Let’s see what happens if we retain negative feedback through a voltage divider, but apply the input voltage at a different location:
By grounding the noninverting input, the negative feedback from the output seeks to hold the inverting input’s voltage at 0 volts, as well. For this reason, the inverting input is referred to in this circuit as a virtual ground, being held at ground potential (0 volts) by the feedback, yet not directly connected to (electrically common with) ground. The input voltage this time is applied to the left-hand end of the voltage divider (R1 = R2 = 1 kΩ again), so the output voltage must swing to -6 volts in order to balance the middle at ground potential (0 volts). Using the same techniques as with the noninverting amplifier, we can analyze this circuit’s operation by determining current magnitudes and directions, starting with R1, and continuing on to determining the output voltage.
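The same sort of current walk works for this inverting configuration. A short added sketch, assuming ideal behavior and the equal-valued 1 kΩ resistors described above:

```python
R1, R2 = 1e3, 1e3   # ohms
v_in = 6.0          # volts, applied to the left-hand end of R1
v_virtual = 0.0     # inverting input held at virtual ground by the feedback

i = (v_in - v_virtual) / R1    # 6 mA flows through R1 toward the virtual-ground node
v_out = v_virtual - i * R2     # that same current must continue through R2

print(f"I = {i * 1e3:.1f} mA,  Vout = {v_out:+.1f} V")   # 6.0 mA, -6.0 V
```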
We can change the overall voltage gain of this circuit just by adjusting the values of R1 and R2 (changing the ratio of output voltage that is fed back to the inverting input). Gain can be calculated by the following formula:

AV = R2 / R1
Note that this circuit’s voltage gain can be less than 1, depending solely on the ratio of R2 to R1. Also note that the output voltage is always the opposite polarity of the input voltage. A positive input voltage results in a negative output voltage, and vice versa (with respect to ground). For this reason, this circuit is referred to as an inverting amplifier. Sometimes, the gain formula contains a negative sign (before the R2/R1 fraction) to reflect this reversal of polarities.
These two amplifier circuits we’ve just investigated serve the purpose of multiplying or dividing the magnitude of the input voltage signal. This is exactly how the mathematical operations of multiplication and division are typically handled in analog computer circuitry.
Review
• By connecting the inverting (-) input of an op-amp directly to the output, we get negative feedback, which gives us a voltage follower circuit. By connecting that negative feedback through a resistive voltage divider (feeding back a fraction of the output voltage to the inverting input), the output voltage becomes a multiple of the input voltage.
• A negative-feedback op-amp circuit with the input signal going to the noninverting (+) input is called a noninverting amplifier. The output voltage will be the same polarity as the input. Voltage gain is given by the following equation: AV = (R2/R1) + 1
• A negative-feedback op-amp circuit with the input signal going to the “bottom” of the resistive voltage divider, with the noninverting (+) input grounded, is called an inverting amplifier. Its output voltage will be the opposite polarity of the input. Voltage gain is given by the following equation: AV = -R2/R1
A helpful analogy for understanding divided feedback amplifier circuits is that of a mechanical lever, with relative motion of the lever’s ends representing change in input and output voltages, and the fulcrum (pivot point) representing the location of the ground point, real or virtual.
Take for example the following noninverting op-amp circuit. We know from the prior section that the voltage gain of a noninverting amplifier configuration can never be less than unity (1). If we draw a lever diagram next to the amplifier schematic, with the distance between fulcrum and lever ends representative of resistor values, the motion of the lever will signify changes in voltage at the input and output terminals of the amplifier:
Physicists call this type of lever, with the input force (effort) applied between the fulcrum and output (load), a third-class lever. It is characterized by an output displacement (motion) at least as large as the input displacement—a “gain” of at least 1—and in the same direction. Applying a positive input voltage to this op-amp circuit is analogous to displacing the “input” point on the lever upward:
Due to the displacement-amplifying characteristics of the lever, the “output” point will move twice as far as the “input” point, and in the same direction. In the electronic circuit, the output voltage will equal twice the input, with the same polarity. Applying a negative input voltage is analogous to moving the lever downward from its level “zero” position, resulting in an amplified output displacement that is also negative:
If we alter the resistor ratio R2/R1, we change the gain of the op-amp circuit. In lever terms, this means moving the input point in relation to the fulcrum and lever end, which similarly changes the displacement “gain” of the machine:
Now, any input signal will become amplified by a factor of four instead of by a factor of two:
Inverting op-amp circuits may be modeled using the lever analogy as well. With the inverting configuration, the ground point of the feedback voltage divider is the op-amp’s inverting input with the input to the left and the output to the right. This is mechanically equivalent to a first-class lever, where the input force (effort) is on the opposite side of the fulcrum from the output (load):
With equal-value resistors (equal-lengths of lever on each side of the fulcrum), the output voltage (displacement) will be equal in magnitude to the input voltage (displacement), but of the opposite polarity (direction). A positive input results in a negative output:
Changing the resistor ratio R2/R1 changes the gain of the amplifier circuit, just as changing the fulcrum position on the lever changes its mechanical displacement “gain.” Consider the following example, where R2 is made twice as large as R1:
With the inverting amplifier configuration, though, gains of less than 1 are possible, just as with first-class levers. Reversing R2 and R1 values is analogous to moving the fulcrum to its complementary position on the lever: one-third of the way from the output end. There, the output displacement will be one-half the input displacement:
8.07: Voltage-to-Current Signal Conversion
In instrumentation circuitry, DC signals are often used as analog representations of physical measurements such as temperature, pressure, flow, weight, and motion. Most commonly, DC current signals are used in preference to DC voltage signals, because current signals are exactly equal in magnitude throughout the series circuit loop carrying current from the source (measuring device) to the load (indicator, recorder, or controller), whereas voltage signals in a parallel circuit may vary from one end to the other due to resistive wire losses. Furthermore, current-sensing instruments typically have low impedances (while voltage-sensing instruments have high impedances), which gives current-sensing instruments greater electrical noise immunity.
In order to use current as an analog representation of a physical quantity, we have to have some way of generating a precise amount of current within the signal circuit. But how do we generate a precise current signal when we might not know the resistance of the loop? The answer is to use an amplifier designed to hold current to a prescribed value, applying as much or as little voltage as necessary to the load circuit to maintain that value. Such an amplifier performs the function of a current source. An op-amp with negative feedback is a perfect candidate for such a task:
The input voltage to this circuit is assumed to be coming from some type of physical transducer/amplifier arrangement, calibrated to produce 1 volt at 0 percent of physical measurement, and 5 volts at 100 percent of physical measurement. The standard analog current signal range is 4 mA to 20 mA, signifying 0% to 100% of measurement range, respectively. At 5 volts input, the 250 Ω (precision) resistor will have 5 volts applied across it, resulting in 20 mA of current in the large loop circuit (with Rload). It does not matter what resistance value Rload is, or how much wire resistance is present in that large loop, so long as the op-amp has a high enough power supply voltage to output the voltage necessary to get 20 mA flowing through Rload. The 250 Ω resistor establishes the relationship between input voltage and output current, in this case creating the equivalence of 1-5 V in / 4-20 mA out. If we were converting the 1-5 volt input signal to a 10-50 mA output signal (an older, obsolete instrumentation standard for industry), we’d use a 100 Ω precision resistor instead.
Another name for this circuit is transconductance amplifier. In electronics, transconductance is the mathematical ratio of current change divided by voltage change (ΔI / Δ V), and it is measured in the unit of Siemens, the same unit used to express conductance (the mathematical reciprocal of resistance: current/voltage). In this circuit, the transconductance ratio is fixed by the value of the 250 Ω resistor, giving a linear current-out/voltage-in relationship.
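A quick numerical sketch (added for illustration) of the 1-5 volt to 4-20 mA relationship set by the 250 Ω resistor. The loop resistance value below is arbitrary, since the op-amp supplies whatever output voltage is needed to keep the current at the commanded value:

```python
R_sense = 250.0   # ohms, the precision resistor that sets the transconductance
R_loop = 470.0    # ohms, arbitrary load plus wiring resistance in the big loop

for v_in in (1.0, 2.0, 3.0, 4.0, 5.0):     # volts, 0% to 100% of measurement range
    i_loop = v_in / R_sense                # amps, identical everywhere in the loop
    # Rough voltage the op-amp output must produce to push that current around
    # the series loop (assuming the load sits in series with the sense resistor):
    v_opamp_out = i_loop * (R_sense + R_loop)
    print(f"Vin={v_in:.0f} V  I={i_loop * 1e3:5.1f} mA  "
          f"op-amp output ~ {v_opamp_out:5.2f} V")
```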
Review
• In industry, DC current signals are often used in preference to DC voltage signals as analog representations of physical quantities. Current in a series circuit is absolutely equal at all points in that circuit regardless of wiring resistance, whereas voltage in a parallel-connected circuit may vary from end to end because of wire resistance, making current-signaling more accurate from the “transmitting” to the “receiving” instrument.
• Voltage signals are relatively easy to produce directly from transducer devices, whereas accurate current signals are not. Op-amps can be used to “convert” a voltage signal into a current signal quite easily. In this mode, the op-amp will output whatever voltage is necessary to maintain current through the signaling circuit at the proper value.
If we take three equal resistors and connect one end of each to a common point, then apply three input voltages (one to each of the resistors’ free ends), the voltage seen at the common point will be the mathematical average of the three.
This circuit is really nothing more than a practical application of Millman’s Theorem:
This circuit is commonly known as a passive averager, because it generates an average voltage with non-amplifying components. Passive simply means that it is an unamplified circuit. The large equation to the right of the averager circuit comes from Millman’s Theorem, which describes the voltage produced by multiple voltage sources connected together through individual resistances. Since the three resistors in the averager circuit are equal to each other, we can simplify Millman’s formula by writing R1, R2, and R3 simply as R (one, equal resistance instead of three individual resistances):
If we take a passive averager and use it to connect three input voltages into an op-amp amplifier circuit with a gain of 3, we can turn this averaging function into an addition function. The result is called a noninverting summer circuit:
With a voltage divider composed of a 2 kΩ / 1 kΩ combination, the noninverting amplifier circuit will have a voltage gain of 3. By taking the voltage from the passive averager, which is the sum of V1, V2, and V3 divided by 3, and multiplying that average by 3, we arrive at an output voltage equal to the sum of V1, V2, and V3:
Much the same can be done with an inverting op-amp amplifier, using a passive averager as part of the voltage divider feedback circuit. The result is called an inverting summer circuit:
Now, with the right-hand sides of the three averaging resistors connected to the virtual ground point of the op-amp’s inverting input, Millman’s Theorem no longer directly applies as it did before. The voltage at the virtual ground is now held at 0 volts by the op-amp’s negative feedback, whereas before it was free to float to the average value of V1, V2, and V3. However, with all resistor values equal to each other, the currents through each of the three resistors will be proportional to their respective input voltages. Since those three currents will add at the virtual ground node, the algebraic sum of those currents through the feedback resistor will produce a voltage at Vout equal to V1 + V2 + V3, except with reversed polarity. The reversal in polarity is what makes this circuit an inverting summer:
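Both summer behaviors can be checked numerically. The sketch below is an added illustration, assuming equal resistor values and ideal op-amps:

```python
def passive_average(voltages):
    """Millman's Theorem with equal resistors reduces to a simple average."""
    return sum(voltages) / len(voltages)

def noninverting_summer(voltages):
    """Passive averager followed by a noninverting gain equal to the input count."""
    return passive_average(voltages) * len(voltages)

def inverting_summer(voltages):
    """Equal input and feedback resistors: the output is minus the sum of the inputs."""
    return -sum(voltages)

v = [1.0, 2.5, -0.5]   # arbitrary example input voltages
print(f"average      = {passive_average(v):+.2f} V")
print(f"noninverting = {noninverting_summer(v):+.2f} V")
print(f"inverting    = {inverting_summer(v):+.2f} V")
```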
Summer (adder) circuits are quite useful in analog computer design, just as multiplier and divider circuits would be. Again, it is the extremely high differential gain of the op-amp which allows us to build these useful circuits with a bare minimum of components.
Review
• A summer circuit is one that sums, or adds, multiple analog voltage signals together. There are two basic varieties of op-amp summer circuits: noninverting and inverting.
8.09: Building a Differential Amplifier
Differential Op-Amp Circuits
An op-amp with no feedback is already a differential amplifier, amplifying the voltage difference between the two inputs. However, its gain cannot be controlled, and it is generally too high to be of any practical use. So far, our application of negative feedback to op-amps has resulted in the practical loss of one of the inputs, the resulting amplifier being good only for amplifying a single voltage signal input. With a little ingenuity, however, we can construct an op-amp circuit maintaining both voltage inputs, yet with a controlled gain set by external resistors.
If all the resistor values are equal, this amplifier will have a differential voltage gain of 1. The analysis of this circuit is essentially the same as that of an inverting amplifier, except that the noninverting input (+) of the op-amp is at a voltage equal to a fraction of V2, rather than being connected directly to ground. As would stand to reason, V2 functions as the noninverting input and V1 functions as the inverting input of the final amplifier circuit. Therefore:

Vout = V2 - V1
If we wanted to provide a differential gain of anything other than 1, we would have to adjust the resistances in both upper and lower voltage dividers, necessitating multiple resistor changes and balancing between the two dividers for symmetrical operation. This is not always practical, for obvious reasons.
Buffer the Input Voltage Signal
Another limitation of this amplifier design is the fact that its input impedances are rather low compared to that of some other op-amp configurations, most notably the noninverting (single-ended input) amplifier. Each input voltage source has to drive current through a resistance, which constitutes far less impedance than the bare input of an op-amp alone. The solution to this problem, fortunately, is quite simple. All we need to do is “buffer” each input voltage signal through a voltage follower like this:
Now the V1 and V2 input lines are connected straight to the inputs of two voltage-follower op-amps, giving very high impedance. The two op-amps on the left now handle the driving of current through the resistors instead of letting the input voltage sources (whatever they may be) do it. The increased complexity to our circuit is minimal for a substantial benefit.
8.10: The Instrumentation Amplifier
What Is an Instrumentation Amplifier?
An instrumentation amplifier allows an engineer to adjust the gain of an amplifier circuit without having to change more than one resistor value. Compare this to the differential amplifier, which we covered previously, which requires the adjustment of multiple resistor values.
The so-called instrumentation amplifier builds on the last version of the differential amplifier to give us that capability:
Understanding the Instrumentation Amplifier Circuit
This intimidating circuit is constructed from a buffered differential amplifier stage with three new resistors linking the two buffer circuits together. Consider all resistors to be of equal value except for Rgain.
The negative feedback of the upper-left op-amp causes the voltage at point 1 (top of Rgain) to be equal to V1. Likewise, the voltage at point 2 (bottom of Rgain) is held to a value equal to V2. This establishes a voltage drop across Rgain equal to the voltage difference between V1 and V2. That voltage drop causes a current through Rgain, and since the feedback loops of the two input op-amps draw no current, that same amount of current through Rgain must be going through the two “R” resistors above and below it.
This produces a voltage drop between points 3 and 4 equal to:

(V1 - V2)(1 + 2R / Rgain)
The regular differential amplifier on the right-hand side of the circuit then takes this voltage drop between points 3 and 4 and amplifies it by a gain of 1 (assuming again that all “R” resistors are of equal value).
Advantages of the Instrumentation Amplifier
Though this looks like a cumbersome way to build a differential amplifier, it has the distinct advantages of possessing extremely high input impedances on the V1 and V2 inputs (because they connect straight into the noninverting inputs of their respective op-amps), and adjustable gain that can be set by a single resistor.
Manipulating the above formula a bit, we have a general expression for overall voltage gain in the instrumentation amplifier:

AV = 1 + (2R / Rgain)
Though it may not be obvious by looking at the schematic, we can change the differential gain of the instrumentation amplifier simply by changing the value of one resistor: Rgain.
Yes, we could still change the overall gain by changing the values of some of the other resistors, but this would necessitate balanced resistor value changes for the circuit to remain symmetrical. Please note that the lowest gain possible with the above circuit is obtained with Rgain completely open (infinite resistance), and that gain value is 1.
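The single-resistor gain adjustment is easy to see by evaluating AV = 1 + 2R/Rgain for a few values of Rgain. The sketch below is an added illustration; the 10 kΩ value chosen for the “R” resistors is arbitrary:

```python
R = 10e3   # ohms, the value used for every "R" resistor in the circuit

def instrumentation_gain(r_gain):
    """Overall differential gain of the ideal instrumentation amplifier."""
    if r_gain == float("inf"):
        return 1.0                  # Rgain left open gives the minimum gain of 1
    return 1.0 + 2.0 * R / r_gain   # Av = 1 + 2R / Rgain

for r_gain in (float("inf"), 20e3, 2e3, 200.0):
    print(f"Rgain={r_gain:>10}  Av={instrumentation_gain(r_gain):7.1f}")
```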
Review
• An instrumentation amplifier is a differential op-amp circuit providing high input impedances with ease of gain adjustment through the variation of a single resistor.
By introducing electrical reactance into the feedback loops of op-amp amplifier circuits, we can cause the output to respond to changes in the input voltage over time. Drawing their names from their respective calculus functions, the integrator produces a voltage output proportional to the product (multiplication) of the input voltage and time; and the differentiator (not to be confused with differential) produces a voltage output proportional to the input voltage’s rate of change.
What is Capacitance?
Capacitance can be defined as the measure of a capacitor’s opposition to changes in voltage. The greater the capacitance, the more the opposition. Capacitors oppose voltage change by creating current in the circuit: that is, they either charge or discharge in response to a change in applied voltage. So, the more capacitance a capacitor has, the greater its charge or discharge current will be for any given rate of voltage change across it. The equation for this is quite simple:

i = C (dv/dt)
The dv/dt fraction is a calculus expression representing the rate of voltage change over time. If the DC supply in the above circuit were steadily increased from a voltage of 15 volts to a voltage of 16 volts over a time span of 1 hour, the current through the capacitor would most likely be very small, because of the very low rate of voltage change (dv/dt = 1 volt / 3600 seconds). However, if we steadily increased the DC supply from 15 volts to 16 volts over a shorter time span of 1 second, the rate of voltage change would be much higher, and thus the charging current would be much higher (3600 times higher, to be exact). Same amount of change in voltage, but vastly different rates of change, resulting in vastly different amounts of current in the circuit.
To put some definite numbers to this formula, if the voltage across a 47 µF capacitor was changing at a linear rate of 3 volts per second, the current “through” the capacitor would be (47 µF)(3 V/s) = 141 µA.
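The arithmetic is easy to verify. The sketch below evaluates i = C(dv/dt) for both scenarios; the capacitance used in the first example is arbitrary (the text does not specify one), since only the 3600:1 ratio matters there.

```python
# Quick check of the capacitor-current relationship i = C * (dv/dt)
# using the two examples from the text.

def capacitor_current(c_farads, dv_volts, dt_seconds):
    """i = C * (dv/dt)"""
    return c_farads * (dv_volts / dt_seconds)

C = 47e-6                                   # capacitance (arbitrary for the first example)
slow = capacitor_current(C, 1.0, 3600.0)    # 1 V change over an hour
fast = capacitor_current(C, 1.0, 1.0)       # 1 V change over a second
print(f"slow ramp: {slow*1e9:.1f} nA, fast ramp: {fast*1e6:.1f} uA, ratio = {fast/slow:.0f}x")

print(f"47 uF at 3 V/s: {capacitor_current(47e-6, 3.0, 1.0)*1e6:.0f} uA")   # ~141 uA
```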
We can build an op-amp circuit which measures change in voltage by measuring current through a capacitor, and outputs a voltage proportional to that current:
The Virtual Ground Effect
The right-hand side of the capacitor is held to a voltage of 0 volts, due to the “virtual ground” effect. Therefore, current “through” the capacitor is solely due to change in the input voltage. A steady input voltage won’t cause a current through C, but a changing input voltage will.
Capacitor current moves through the feedback resistor, producing a drop across it, which is the same as the output voltage. A linear, positive rate of input voltage change will result in a steady negative voltage at the output of the op-amp. Conversely, a linear, negative rate of input voltage change will result in a steady positive voltage at the output of the op-amp. This polarity inversion from input to output is due to the fact that the input signal is being sent (essentially) to the inverting input of the op-amp, so it acts like the inverting amplifier mentioned previously. The faster the rate of voltage change at the input (either positive or negative), the greater the voltage at the output.
The formula for determining voltage output for the differentiator is as follows:
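The formula itself appears as a figure in the original; for the inverting differentiator described here it takes the usual form Vout = -RC(dVin/dt). A minimal sketch with assumed component values:

```python
# Sketch of the inverting differentiator output, Vout = -R*C*(dVin/dt).
# Component values are illustrative assumptions, not from the original figure.

R = 10e3            # feedback resistor, 10 kOhm (assumed)
C = 1e-6            # input capacitor, 1 uF (assumed)

def differentiator_vout(dv_dt):
    """Vout = -R*C*(dVin/dt) for the inverting differentiator."""
    return -R * C * dv_dt

for dv_dt in (1.0, -1.0, 5.0):                  # input slopes in volts per second
    print(f"dVin/dt = {dv_dt:+.1f} V/s -> Vout = {differentiator_vout(dv_dt):+.3f} V")
```

The sign of the result reflects the polarity inversion described above: a rising input produces a negative output, and vice versa.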
Rate-of-Change Indicators for Process Instrumentation
Applications for this, besides representing the derivative calculus function inside of an analog computer, include rate-of-change indicators for process instrumentation. One such rate-of-change signal application might be for monitoring (or controlling) the rate of temperature change in a furnace, where too high or too low of a temperature rise rate could be detrimental. The DC voltage produced by the differentiator circuit could be used to drive a comparator, which would signal an alarm or activate a control if the rate of change exceeded a pre-set level.
In process control, the derivative function is used to make control decisions for maintaining a process at set point, by monitoring the rate of process change over time and taking action to prevent excessive rates of change, which can lead to an unstable condition. Analog electronic controllers use variations of this circuitry to perform the derivative function.
Integration
On the other hand, there are applications where we need precisely the opposite function, called integration in calculus. Here, the op-amp circuit would generate an output voltage proportional to the magnitude and duration that an input voltage signal has deviated from 0 volts. Stated differently, a constant input signal would generate a certain rate of change in the output voltage: differentiation in reverse. To do this, all we have to do is swap the capacitor and resistor in the previous circuit:
As before, the negative feedback of the op-amp ensures that the inverting input will be held at 0 volts (the virtual ground). If the input voltage is exactly 0 volts, there will be no current through the resistor, therefore no charging of the capacitor, and therefore the output voltage will not change. We cannot guarantee what voltage will be at the output with respect to ground in this condition, but we can say that the output voltage will be constant.
However, if we apply a constant, positive voltage to the input, the op-amp output will fall negative at a linear rate, in an attempt to produce the changing voltage across the capacitor necessary to maintain the current established by the voltage difference across the resistor. Conversely, a constant, negative voltage at the input results in a linear, rising (positive) voltage at the output. The output voltage rate-of-change will be proportional to the value of the input voltage.
Formula to Determine Voltage Output
The formula for determining voltage output for the integrator is as follows:
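Again the formula appears as a figure in the original; for the inverting integrator it takes the usual form dVout/dt = -Vin/(RC) for a constant input (equivalently, the output is the negative time integral of the input scaled by 1/RC, plus whatever voltage the output started at). A minimal sketch with assumed component values:

```python
# Sketch of the inverting integrator: dVout/dt = -Vin / (R*C) for a constant input.
# Component values are illustrative assumptions.

R = 100e3           # input resistor, 100 kOhm (assumed)
C = 1e-6            # feedback capacitor, 1 uF (assumed)

def output_after(v_in, seconds, v_start=0.0):
    """Output after integrating a constant input voltage for `seconds`."""
    return v_start - (v_in / (R * C)) * seconds

print(output_after(+1.0, 0.5))   # constant +1 V in: output ramps down to -5 V after 0.5 s
print(output_after(-1.0, 0.5))   # constant -1 V in: output ramps up to +5 V after 0.5 s
```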
One application for this device would be to keep a “running total” of radiation exposure, or dosage, if the input voltage was a proportional signal supplied by an electronic radiation detector. Nuclear radiation can be just as damaging at low intensities for long periods of time as it is at high intensities for short periods of time. An integrator circuit would take both the intensity (input voltage magnitude) and time into account, generating an output voltage representing total radiation dosage.
Another application would be to integrate a signal representing water flow, producing a signal representing total quantity of water that has passed by the flowmeter. This application of an integrator is sometimes called a totalizer in the industrial instrumentation trade.
Review
• A differentiator circuit produces a constant output voltage for a steadily changing input voltage.
• An integrator circuit produces a steadily changing output voltage for a constant input voltage.
• Both types of devices are easily constructed, using reactive components (usually capacitors rather than inductors) in the feedback part of the circuit.
As we’ve seen, negative feedback is an incredibly useful principle when applied to operational amplifiers. It is what allows us to create all these practical circuits, being able to precisely set gains, rates, and other significant parameters with just a few changes of resistor values. Negative feedback makes all these circuits stable and self-correcting.
The basic principle of negative feedback is that the output tends to drive in a direction that creates a condition of equilibrium (balance). In an op-amp circuit with no feedback, there is no corrective mechanism, and the output voltage will saturate with the tiniest amount of differential voltage applied between the inputs. The result is a comparator:
With negative feedback (the output voltage “fed back” somehow to the inverting input), the circuit tends to prevent itself from driving the output to full saturation. Rather, the output voltage drives only as high or as low as needed to balance the two inputs’ voltages:
Whether the output is directly fed back to the inverting (-) input or coupled through a set of components, the effect is the same: the extremely high differential voltage gain of the op-amp will be “tamed” and the circuit will respond according to the dictates of the feedback “loop” connecting output to inverting input.
Another type of feedback, namely positive feedback, also finds application in op-amp circuits. Unlike negative feedback, where the output voltage is “fed back” to the inverting (-) input, with positive feedback the output voltage is somehow routed back to the noninverting (+) input. In its simplest form, we could connect a straight piece of wire from output to noninverting input and see what happens:
The inverting input remains disconnected from the feedback loop, and is free to receive an external voltage. Let’s see what happens if we ground the inverting input:
With the inverting input grounded (maintained at zero volts), the output voltage will be dictated by the magnitude and polarity of the voltage at the noninverting input. If that voltage happens to be positive, the op-amp will drive its output positive as well, feeding that positive voltage back to the noninverting input, which will result in full positive output saturation. On the other hand, if the voltage on the noninverting input happens to start out negative, the op-amp’s output will drive in the negative direction, feeding back to the noninverting input and resulting in full negative saturation.
What we have here is a circuit whose output is bistable: stable in one of two states (saturated positive or saturated negative). Once it has reached one of those saturated states, it will tend to remain in that state, unchanging. What is necessary to get it to switch states is a voltage placed upon the inverting (-) input of the same polarity, but of a slightly greater magnitude. For example, if our circuit is saturated at an output voltage of +12 volts, it will take an input voltage at the inverting input of at least +12 volts to get the output to change. When it changes, it will saturate fully negative.
So, an op-amp with positive feedback tends to stay in whatever output state it’s already in. It “latches” in one of two states, saturated positive or saturated negative. Technically, this is known as hysteresis.
Hysteresis can be a useful property for a comparator circuit to have. As we’ve seen before, comparators can be used to produce a square wave from any sort of ramping waveform (sine wave, triangle wave, sawtooth wave, etc.) input. If the incoming AC waveform is noise-free (that is, a “pure” waveform), a simple comparator will work just fine.
However, if there exist any anomalies in the waveform such as harmonics or “spikes” which cause the voltage to rise and fall significantly within the timespan of a single cycle, a comparator’s output might switch states unexpectedly:
Any time there is a transition through the reference voltage level, no matter how tiny that transition may be, the output of the comparator will switch states, producing a square wave with “glitches.”
If we add a little positive feedback to the comparator circuit, we will introduce hysteresis into the output. This hysteresis will cause the output to remain in its current state unless the AC input voltage undergoes a major change in magnitude.
What this feedback resistor creates is a dual reference for the comparator circuit. The reference voltage applied to the noninverting (+) input, against which the incoming AC voltage is compared, changes depending on the value of the op-amp’s output voltage. When the op-amp output is saturated positive, the reference voltage at the noninverting input will be more positive than before. Conversely, when the op-amp output is saturated negative, the reference voltage at the noninverting input will be more negative than before. The result is easier to understand on a graph:
When the op-amp output is saturated positive, the upper reference voltage is in effect, and the output won’t drop to a negative saturation level unless the AC input rises above that upper reference level. Conversely, when the op-amp output is saturated negative, the lower reference voltage is in effect, and the output won’t rise to a positive saturation level unless the AC input drops below that lower reference level. The result is a clean square-wave output again, despite significant amounts of distortion in the AC input signal. In order for a “glitch” to cause the comparator to switch from one state to another, it would have to be at least as big (tall) as the difference between the upper and lower reference voltage levels, and at the right point in time to cross both those levels.
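A rough numerical sketch of those two reference levels follows, assuming the common arrangement in which the noninverting (+) input sits at the junction of a resistor to ground and the feedback resistor to the output. The resistor and saturation values are illustrative only.

```python
# Sketch of the two comparator thresholds created by positive feedback, assuming
# the (+) input sits between a resistor R1 to ground and a feedback resistor R2
# to the output. All values are illustrative assumptions.

V_SAT = 12.0        # output saturation voltage (assumed symmetric at +/-12 V)
R1 = 10e3           # resistor from the (+) input to the reference (ground here), assumed
R2 = 47e3           # feedback resistor from the output to the (+) input, assumed

divider = R1 / (R1 + R2)
upper_ref = +V_SAT * divider    # reference in effect while the output is saturated positive
lower_ref = -V_SAT * divider    # reference in effect while the output is saturated negative

print(f"upper reference: {upper_ref:+.2f} V, lower reference: {lower_ref:+.2f} V")
print(f"a glitch must span at least {upper_ref - lower_ref:.2f} V to falsely switch the output")
```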
Another application of positive feedback in op-amp circuits is in the construction of oscillator circuits. An oscillator is a device that produces an alternating (AC), or at least pulsing, output voltage. Technically, it is known as an astable device: having no stable output state (no equilibrium whatsoever). Oscillators are very useful devices, and they are easily made with just an op-amp and a few external components.
When the output is saturated positive, the Vref will be positive, and the capacitor will charge up in a positive direction. When Vramp exceeds Vref by the tiniest margin, the output will saturate negative, and the capacitor will charge in the opposite direction (polarity). Oscillation occurs because the positive feedback is instantaneous and the negative feedback is delayed (by means of an RC time constant). The frequency of this oscillator may be adjusted by varying the size of any component.
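One common expression for the frequency of this kind of RC relaxation oscillator is sketched below. The component values and the feedback fraction are assumptions, and the exact formula depends on the particular resistor arrangement used, so treat this as an estimate rather than a design equation for the circuit shown.

```python
# Rough frequency estimate for an op-amp RC relaxation oscillator, using the
# common expression T = 2*R*C*ln((1+B)/(1-B)), where B is the fraction of the
# output fed back to the (+) input. All values are assumptions.
import math

R = 100e3           # timing resistor in the negative-feedback RC path (assumed)
C = 0.1e-6          # timing capacitor (assumed)
B = 0.5             # positive-feedback fraction, e.g. two equal divider resistors

period = 2.0 * R * C * math.log((1.0 + B) / (1.0 - B))   # one full charge/discharge cycle
print(f"estimated oscillation frequency: {1.0/period:.0f} Hz")
```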
Review
• Negative feedback creates a condition of equilibrium (balance). Positive feedback creates a condition of hysteresis (the tendency to “latch” in one of two extreme states).
• An oscillator is a device producing an alternating or pulsing output voltage.
Real operational amplifiers have some imperfections compared to an “ideal” model. A real device deviates from a perfect difference amplifier. One minus one may not be zero. It may have an offset like an analog meter which is not zeroed. The inputs may draw current. The characteristics may drift with age and temperature. Gain may be reduced at high frequencies, and phase may shift from input to output. These imperfections may cause no noticeable errors in some applications, but unacceptable errors in others. In some cases these errors may be compensated for. Sometimes a higher quality, higher cost device is required.
Common-Mode Gain
As stated before, an ideal differential amplifier only amplifies the voltage difference between its two inputs. If the two inputs of a differential amplifier were to be shorted together (thus ensuring zero potential difference between them), there should be no change in output voltage for any amount of voltage applied between those two shorted inputs and ground:
Voltage that is common between either of the inputs and ground, as “Vcommon-mode” is in this case, is called common-mode voltage. As we vary this common voltage, the perfect differential amplifier’s output voltage should hold absolutely steady (no change in output for any arbitrary change in common-mode input). This translates to a common-mode voltage gain of zero.
The operational amplifier, being a differential amplifier with high differential gain, would ideally have zero common-mode gain as well. In real life, however, this is not easily attained. Thus, common-mode voltages will invariably have some effect on the op-amp’s output voltage.
The performance of a real op-amp in this regard is most commonly measured in terms of its differential voltage gain (how much it amplifies the difference between two input voltages) versus its common-mode voltage gain (how much it amplifies a common-mode voltage). The ratio of the former to the latter is called the common-mode rejection ratio, abbreviated as CMRR:
An ideal op-amp, with zero common-mode gain would have an infinite CMRR. Real op-amps have high CMRRs, the ubiquitous 741 having something around 70 dB, which works out to a little over 3,000 in terms of a ratio.
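The conversion between the decibel figure and the plain ratio is straightforward (CMRR in dB = 20·log10 of the gain ratio), as the sketch below shows for the 70 dB figure just cited:

```python
# Converting CMRR between a plain ratio and decibels: CMRR(dB) = 20*log10(ratio).
import math

def cmrr_db(differential_gain, common_mode_gain):
    """CMRR expressed in decibels."""
    return 20.0 * math.log10(differential_gain / common_mode_gain)

def db_to_ratio(db):
    return 10.0 ** (db / 20.0)

print(f"70 dB as a plain ratio: {db_to_ratio(70):.0f}")           # ~3162
print(f"a ratio of 3162 in dB:  {cmrr_db(3162.0, 1.0):.1f}")      # ~70.0
```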
Because the common mode rejection ratio in a typical op-amp is so high, common-mode gain is usually not a great concern in circuits where the op-amp is being used with negative feedback. If the common-mode input voltage of an amplifier circuit were to suddenly change, thus producing a corresponding change in the output due to common-mode gain, that change in output would be quickly corrected as negative feedback and differential gain (being much greater than common-mode gain) worked to bring the system back to equilibrium. Sure enough, a change might be seen at the output, but it would be a lot smaller than what you might expect.
A consideration to keep in mind, though, is common-mode gain in differential op-amp circuits such as instrumentation amplifiers. Outside of the op-amp’s sealed package and extremely high differential gain, we may find common-mode gain introduced by an imbalance of resistor values. To demonstrate this, we’ll run a SPICE analysis on an instrumentation amplifier with inputs shorted together (no differential voltage), imposing a common-mode voltage to see what happens. First, we’ll run the analysis showing the output voltage of a perfectly balanced circuit. We should expect to see no change in output voltage as the common-mode voltage changes:
Aside from very small deviations (actually due to quirks of SPICE rather than real behavior of the circuit), the output remains stable where it should be: at 0 volts, with zero input voltage differential. However, let’s introduce a resistor imbalance in the circuit, increasing the value of R5 from 10,000 Ω to 10,500 Ω, and see what happens (the netlist has been omitted for brevity—the only thing altered is the value of R5):
Our input voltage differential is still zero volts, yet the output voltage changes significantly as the common-mode voltage is changed. This is indicative of a common-mode gain, something we’re trying to avoid. More than that, it’s a common-mode gain of our own making, having nothing to do with imperfections in the op-amps themselves. With a much-tempered differential gain (actually equal to 3 in this particular circuit) and no negative feedback outside the circuit, this common-mode gain will go unchecked in an instrument signal application.
There is only one way to correct this common-mode gain, and that is to balance all the resistor values. When designing an instrumentation amplifier from discrete components (rather than purchasing one in an integrated package), it is wise to provide some means of making fine adjustments to at least one of the four resistors connected to the final op-amp to be able to “trim away” any such common-mode gain. Providing the means to “trim” the resistor network has additional benefits as well. Suppose that all resistor values are exactly as they should be, but a common-mode gain exists due to an imperfection in one of the op-amps. With the adjustment provision, the resistance could be trimmed to compensate for this unwanted gain.
One quirk of some op-amp models is that of output latch-up, usually caused by the common-mode input voltage exceeding allowable limits. If the common-mode voltage falls outside of the manufacturer’s specified limits, the output may suddenly “latch” in the high mode (saturate at full output voltage). In JFET-input operational amplifiers, latch-up may occur if the common-mode input voltage approaches too closely to the negative power supply rail voltage. On the TL082 op-amp, for example, this occurs when the common-mode input voltage comes within about 0.7 volts of the negative power supply rail voltage. Such a situation may easily occur in a single-supply circuit, where the negative power supply rail is ground (0 volts), and the input signal is free to swing to 0 volts.
Latch-up may also be triggered by the common-mode input voltage exceeding power supply rail voltages, negative or positive. As a rule, you should never allow either input voltage to rise above the positive power supply rail voltage, or sink below the negative power supply rail voltage, even if the op-amp in question is protected against latch-up (as are the 741 and 1458 op-amp models). At the very least, the op-amp’s behavior may become unpredictable. At worst, the kind of latch-up triggered by input voltages exceeding power supply voltages may be destructive to the op-amp.
While this problem may seem easy to avoid, it is more likely to occur than you might think. Consider the case of an operational amplifier circuit during power-up. If the circuit receives full input signal voltage before its own power supply has had time enough to charge the filter capacitors, the common-mode input voltage may easily exceed the power supply rail voltages for a short time. If the op-amp receives signal voltage from a circuit supplied by a different power source, and its own power source fails, the signal voltage(s) may exceed the power supply rail voltages for an indefinite amount of time!
Offset Voltage
Another practical concern for op-amp performance is voltage offset. That is, the effect of having the output voltage at something other than zero volts when the two input terminals are shorted together. Remember that operational amplifiers are differential amplifiers above all: they’re supposed to amplify the difference in voltage between the two input connections and nothing more. When that input voltage difference is exactly zero volts, we would (ideally) expect to have exactly zero volts present on the output. However, in the real world this rarely happens. Even if the op-amp in question has zero common-mode gain (infinite CMRR), the output voltage may not be at zero when both inputs are shorted together. This deviation from zero is called offset.
A perfect op-amp would output exactly zero volts with both its inputs shorted together and grounded. However, most op-amps off the shelf will drive their outputs to a saturated level, either negative or positive. In the example shown above, the output voltage is saturated at a value of positive 14.7 volts, just a bit less than +V (+15 volts) due to the positive saturation limit of this particular op-amp. Because the offset in this op-amp is driving the output to a completely saturated point, there’s no way of telling how much voltage offset is present at the output. If the +V/-V split power supply was of a high enough voltage, who knows, maybe the output would be several hundred volts one way or the other due to the effects of offset!
For this reason, offset voltage is usually expressed in terms of the equivalent amount of input voltage differential producing this effect. In other words, we imagine that the op-amp is perfect (no offset whatsoever), and a small voltage is being applied in series with one of the inputs to force the output voltage one way or the other away from zero. Being that op-amp differential gains are so high, the figure for “input offset voltage” doesn’t have to be much to account for what we see with shorted inputs:
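To see why the input offset figure can be so small, the sketch below multiplies an assumed offset by a typical open-loop gain and clips the result at the saturation level. The 200,000 gain figure is a typical-of-a-741 assumption, not a value taken from this text; the 14.7 volt saturation level is the one observed in the example above.

```python
# Why a tiny input offset saturates an open-loop op-amp: the output "tries" to be
# (open-loop gain) x (offset), but it clips at the supply rails.
# The open-loop gain figure is a typical-of-a-741 assumption.

A_OL = 200_000.0    # open-loop differential gain (assumed)
V_SAT = 14.7        # saturation level from the example above (volts)

def output_for_offset(v_offset):
    """Output the op-amp 'tries' to produce, clipped at the saturation limits."""
    return max(-V_SAT, min(V_SAT, A_OL * v_offset))

for uv in (10, 100, 1000):                       # input offset voltage in microvolts
    v_off = uv * 1e-6
    print(f"offset = {uv:4d} uV -> ideal output {A_OL*v_off:8.1f} V, "
          f"actual output {output_for_offset(v_off):+.1f} V")
```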
Offset voltage will tend to introduce slight errors in any op-amp circuit. So how do we compensate for it? Unlike common-mode gain, there are usually provisions made by the manufacturer to trim the offset of a packaged op-amp. Usually, two extra terminals on the op-amp package are reserved for connecting an external “trim” potentiometer. These connection points are labeled offset null and are used in this general way:
On single op-amps such as the 741 and 3130, the offset null connection points are pins 1 and 5 on the 8-pin DIP package. Other models of op-amp may have the offset null connections located on different pins, and/or require a slightly different configuration of trim potentiometer connection. Some op-amps don’t provide offset null pins at all! Consult the manufacturer’s specifications for details.
Bias Current
Inputs on an op-amp have extremely high input impedances. That is, the input currents entering or exiting an op-amp’s two input signal connections are extremely small. For most purposes of op-amp circuit analysis, we treat them as though they don’t exist at all. We analyze the circuit as though there was absolutely zero current entering or exiting the input connections.
This idyllic picture, however, is not entirely true. Op-amps, especially those op-amps with bipolar transistor inputs, have to have some amount of current through their input connections in order for their internal circuits to be properly biased. These currents, logically, are called bias currents. Under certain conditions, op-amp bias currents may be problematic. The following circuit illustrates one of those problem conditions:
At first glance, we see no apparent problems with this circuit. A thermocouple, generating a small voltage proportional to temperature (actually, a voltage proportional to the difference in temperature between the measurement junction and the “reference” junction formed when the alloy thermocouple wires connect with the copper wires leading to the op-amp) drives the op-amp either positive or negative. In other words, this is a kind of comparator circuit, comparing the temperature between the end thermocouple junction and the reference junction (near the op-amp). The problem is this: the wire loop formed by the thermocouple does not provide a path for both input bias currents, because both bias currents are trying to go the same way (either into the op-amp or out of it).
In order for this circuit to work properly, we must ground one of the input wires, thus providing a path to (or from) ground for both currents:
Not necessarily an obvious problem, but a very real one!
Another way input bias currents may cause trouble is by dropping unwanted voltages across circuit resistances. Take this circuit for example:
We expect a voltage follower circuit such as the one above to reproduce the input voltage precisely at the output. But what about the resistance in series with the input voltage source? If there is any bias current through the noninverting (+) input at all, it will drop some voltage across Rin, thus making the voltage at the noninverting input unequal to the actual Vin value. Bias currents are usually in the microamp range, so the voltage drop across Rin won’t be very much, unless Rin is very large. One example of an application where the input resistance (Rin) would be very large is that of pH probe electrodes, where one electrode contains an ion-permeable glass barrier (a very poor conductor, with millions of Ω of resistance).
If we were actually building an op-amp circuit for pH electrode voltage measurement, we’d probably want to use a FET or MOSFET (IGFET) input op-amp instead of one built with bipolar transistors (for less input bias current). But even then, what slight bias currents may remain can cause measurement errors to occur, so we have to find some way to mitigate them through good design.
One way to do so is based on the assumption that the two input bias currents will be the same. In reality, they are often close to being the same, the difference between them referred to as the input offset current. If they are the same, then we should be able to cancel out the effects of input resistance voltage drop by inserting an equal amount of resistance in series with the other input, like this:
With the additional resistance added to the circuit, the output voltage will be closer to Vin than before, even if there is some offset between the two input currents.
For both inverting and noninverting amplifier circuits, the bias current compensating resistor is placed in series with the noninverting (+) input to compensate for bias current voltage drops in the divider network:
In either case, the compensating resistor value is determined by calculating the parallel resistance value of R1 and R2. Why is the value equal to the parallel equivalent of R1 and R2? When using the Superposition Theorem to figure how much voltage drop will be produced by the inverting (-) input’s bias current, we treat the bias current as though it were coming from a current source inside the op-amp and short-circuit all voltage sources (Vin and Vout). This gives two parallel paths for bias current (through R1 and through R2, both to ground). We want to duplicate the bias current’s effect on the noninverting (+) input, so the resistor value we choose to insert in series with that input needs to be equal to R1 in parallel with R2.
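A quick sketch of that calculation, with illustrative resistor and bias-current values:

```python
# The bias-current compensating resistor is simply R1 in parallel with R2.
# Resistor and bias-current values below are illustrative assumptions.

def parallel(r1, r2):
    return (r1 * r2) / (r1 + r2)

R1 = 10e3           # resistor from the inverting input to ground (assumed)
R2 = 100e3          # feedback resistor (assumed)
R_comp = parallel(R1, R2)
print(f"compensating resistor ~= {R_comp/1e3:.2f} kOhm")          # ~9.09 kOhm

i_bias = 100e-9     # bias current, ~100 nA (assumed)
print(f"drop seen by each input ~= {i_bias * R_comp * 1e3:.2f} mV")
```

With roughly equal bias currents, both inputs then see about the same voltage drop, so the error largely cancels as a common-mode effect rather than appearing as a differential input error.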
A related problem, occasionally experienced by students just learning to build operational amplifier circuits, is caused by a lack of a common ground connection to the power supply. It is imperative to proper op-amp function that some terminal of the DC power supply be common to the “ground” connection of the input signal(s). This provides a complete path for the bias currents, feedback current(s), and for the load (output) current. Take this circuit illustration, for instance, showing a properly grounded power supply:
Here, arrows denote the path of electron flow through the power supply batteries, both for powering the op-amp’s internal circuitry (the “potentiometer” inside of it that controls output voltage), and for powering the feedback loop of resistors R1 and R2. Suppose, however, that the ground connection for this “split” DC power supply were to be removed. The effect of doing this is profound:
No electrons may flow in or out of the op-amp’s output terminal, because the pathway to the power supply is a “dead end.” Thus, no electrons flow through the ground connection to the left of R1, nor through the feedback loop. This effectively renders the op-amp useless: it can neither sustain current through the feedback loop, nor through a grounded load, since there is no connection from any point of the power supply to ground.
The bias currents are also stopped, because they rely on a path to the power supply and back to the input source through ground. The following diagram shows the bias currents (only), as they go through the input terminals of the op-amp, through the base terminals of the input transistors, and eventually through the power supply terminal(s) and back to ground.
Without a ground reference on the power supply, the bias currents will have no complete path for a circuit, and they will halt. Since bipolar junction transistors are current-controlled devices, this renders the input stage of the op-amp useless as well, as both input transistors will be forced into cutoff by the complete lack of base current.
Review
• Op-amp inputs usually conduct very small currents, called bias currents, needed to properly bias the first transistor amplifier stage internal to the op-amps’ circuitry. Bias currents are small (in the microamp range), but large enough to cause problems in some applications.
• Bias currents in both inputs must have paths to flow to either one of the power supply “rails” or to ground. It is not enough to just have a conductive path from one input to the other.
• To cancel any offset voltages caused by bias current flowing through resistances, just add an equivalent resistance in series with the other op-amp input (called a compensating resistor). This corrective measure is based on the assumption that the two input bias currents will be equal.
• Any inequality between bias currents in an op-amp constitutes what is called an input offset current.
• It is essential for proper op-amp operation that there be a ground reference on some terminal of the power supply, to form complete paths for bias currents, feedback current(s), and load current.
Drift
Being semiconductor devices, op-amps are subject to slight changes in behavior with changes in operating temperature. Any changes in op-amp performance with temperature fall under the category of op-amp drift. Drift parameters can be specified for bias currents, offset voltage, and the like. Consult the manufacturer’s data sheet for specifics on any particular op-amp.
To minimize op-amp drift, we can select an op-amp made to have minimum drift, and/or we can do our best to keep the operating temperature as stable as possible. The latter action may involve providing some form of temperature control for the inside of the equipment housing the op-amp(s). This is not as strange as it may first seem. Laboratory-standard precision voltage reference generators, for example, are sometimes known to employ “ovens” for keeping their sensitive components (such as zener diodes) at constant temperatures. If extremely high accuracy is desired over the usual factors of cost and flexibility, this may be an option worth looking at.
Review
• Op-amps, being semiconductor devices, are susceptible to variations in temperature. Any variation in amplifier performance resulting from changes in temperature is known as drift. Drift is best minimized with environmental temperature control.
Frequency Response
With their incredibly high differential voltage gains, op-amps are prime candidates for a phenomenon known as feedback oscillation. You’ve probably heard the equivalent audio effect when the volume (gain) on a public-address or other microphone amplifier system is turned too high: that high pitched squeal resulting from the sound waveform “feeding back” through the microphone to be amplified again. An op-amp circuit can manifest this same effect, with the feedback happening electrically rather than audibly.
A case example of this is seen in the 3130 op-amp, if it is connected as a voltage follower with the bare minimum of wiring connections (the two inputs, output, and the power supply connections). The output of this op-amp will self-oscillate due to its high gain, no matter what the input voltage. To combat this, a small compensation capacitor must be connected to two specially-provided terminals on the op-amp. The capacitor provides a high-impedance path for negative feedback to occur within the op-amp’s circuitry, thus decreasing the AC gain and inhibiting unwanted oscillations. If the op-amp is being used to amplify high-frequency signals, this compensation capacitor may not be needed, but it is absolutely essential for DC or low-frequency AC signal operation.
Some op-amps, such as the model 741, have a compensation capacitor built in to minimize the need for external components. This improved simplicity is not without a cost: due to that capacitor’s presence inside the op-amp, the negative feedback tends to get stronger as the operating frequency increases (that capacitor’s reactance decreases with higher frequencies). As a result, the op-amp’s differential voltage gain decreases as frequency goes up: it becomes a less effective amplifier at higher frequencies.
Op-amp manufacturers will publish the frequency response curves for their products. Since a sufficiently high differential gain is absolutely essential to good feedback operation in op-amp circuits, the gain/frequency response of an op-amp effectively limits its “bandwidth” of operation. The circuit designer must take this into account if good performance is to be maintained over the required range of signal frequencies.
Review
• Due to capacitances within op-amps, their differential voltage gain tends to decrease as the input frequency increases. Frequency response curves for op-amps are available from the manufacturer.
Input to Output Phase Shift
In order to illustrate the phase shift from input to output of an operational amplifier (op-amp), the OPA227 was tested in our lab. The OPA227 was constructed in a typical non-inverting configuration (Figure below).
OPA227 Non-inverting stage
The circuit configuration calls for a signal gain of ≅34 V/V or ≅50 dB. The input excitation at Vsrc was set to 10 mVp at three frequencies of interest: 2.2 kHz, 22 kHz, and 220 kHz. The OPA227’s open loop gain and phase curve vs. frequency is shown in Figure below.
AV and Φ vs. Frequency plot
To help predict the closed loop phase shift from input to output, we can use the open loop gain and phase curve. Since the circuit configuration calls for a closed loop gain, or 1/β, of ≅50 dB, the closed loop gain curve intersects the open loop gain curve at approximately 22 kHz. After this intersection, the closed loop gain curve rolls off at the typical 20 dB/decade for voltage feedback amplifiers, and follows the open loop gain curve.
What is actually at work here is that the negative feedback from the closed loop modifies the open loop response. Closing the loop with negative feedback establishes a closed loop pole at 22 kHz. Much like the dominant pole in the open loop phase curve, we will expect phase shift in the closed loop response. How much phase shift will we see?
Since the new pole is now at 22 kHz, this is also the -3 dB point, as the pole starts to roll off the closed loop gain at 20 dB per decade as stated earlier. As with any pole in basic control theory, phase shift starts to occur one decade in frequency before the pole, and reaches 90° of phase shift one decade in frequency after the pole. So what does this predict for the closed loop response in our circuit?
This predicts phase shift starting at 2.2 kHz, with 45° of phase shift at the -3 dB point of 22 kHz, and finally ending with 90° of phase shift at 220 kHz. The three Figures shown below are oscilloscope captures at the frequencies of interest for our OPA227 circuit. Figure below is set for 2.2 kHz, and no noticeable phase shift is present. Figure below is set for 22 kHz, and ≅45° of phase shift is recorded. Finally, Figure below is set for 220 kHz, and the expected ≅90° of phase shift is recorded. The scope plots were captured using a LeCroy 44x Wavesurfer. The final scope plot used a x1 probe with the trigger set to HF reject.
OPA227 Av=50dB @ 2.2 kHz
OPA227 Av=50dB @ 22 kHz
OPA227 Av=50dB @ 220 kHz
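For reference, the single-pole phase response, phase = -arctan(f/fpole) with fpole = 22 kHz, reproduces those measurements reasonably well. Note that the straight-line rule quoted above (no shift one decade below the pole, 90° one decade above) is an approximation of this arctangent curve, which is why the endpoints come out slightly short of 0° and 90°.

```python
# Phase lag of a single pole at f_pole = 22 kHz, the closed-loop pole described above.
import math

F_POLE = 22e3       # closed-loop pole frequency from the analysis above

def phase_deg(f):
    """Phase lag of a single pole: -arctan(f / f_pole), in degrees."""
    return -math.degrees(math.atan(f / F_POLE))

for f in (2.2e3, 22e3, 220e3):
    print(f"{f/1e3:6.1f} kHz -> phase shift ~= {phase_deg(f):6.1f} degrees")
# prints roughly -5.7, -45.0 and -84.3 degrees; the decade rule rounds these to 0, 45 and 90
```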
While mention of operational amplifiers typically provokes visions of semiconductor devices built as integrated circuits on a miniature silicon chip, the first op-amps were actually vacuum tube circuits. The first commercial, general purpose operational amplifier was manufactured by the George A. Philbrick Researches, Incorporated, in 1952. Designated the K2-W, it was built around two twin-triode tubes mounted in an assembly with an octal (8-pin) socket for easy installation and servicing in electronic equipment chassis of that era. The assembly looked something like this:
The schematic diagram shows the two tubes, along with ten resistors and two capacitors, a fairly simple circuit design even by 1952 standards:
In case you’re unfamiliar with the operation of vacuum tubes, they operate similarly to N-channel depletion-type IGFET transistors: that is, they conduct more current when the control grid (the dashed line) is made more positive with respect to the cathode (the bent line near the bottom of the tube symbol), and conduct less current when the control grid is made less positive (or more negative) than the cathode. The twin triode tube on the left functions as a differential pair, converting the differential inputs (inverting and noninverting input voltage signals) into a single, amplified voltage signal which is then fed to the control grid of the left triode of the second triode pair through a voltage divider (1 MΩ—2.2 MΩ). That triode amplifies and inverts the output of the differential pair for a larger voltage gain, then the amplified signal is coupled to the second triode of the same dual-triode tube in a noninverting amplifier configuration for a larger current gain. The two neon “glow tubes” act as voltage regulators, similar to the behavior of semiconductor zener diodes, to provide a bias voltage in the coupling between the two single-ended amplifier triodes.
With a dual-supply voltage of +300/-300 volts, this op-amp could only swing its output +/- 50 volts, which is very poor by today’s standards. It had an open-loop voltage gain of 15,000 to 20,000, a slew rate of +/- 12 volts/µsecond, a maximum output current of 1 mA, a quiescent power consumption of over 3 watts (not including power for the tubes’ filaments!), and cost about $24 in 1952 dollars. Better performance could have been attained using a more sophisticated circuit design, but only at the expense of greater power consumption, greater cost, and decreased reliability.
With the advent of solid-state transistors, op-amps with far less quiescent power consumption and increased reliability became feasible, but many of the other performance parameters remained about the same. Take for instance Philbrick’s model P55A, a general-purpose solid-state op-amp circa 1966. The P55A sported an open-loop gain of 40,000, a slew rate of 1.5 volt/µsecond and an output swing of +/- 11 volts (at a power supply voltage of +/- 15 volts), a maximum output current of 2.2 mA, and a cost of $49 (or about $21 for the “utility grade” version). The P55A, as well as other op-amps in Philbrick’s lineup of the time, was of discrete-component construction, its constituent transistors, resistors, and capacitors housed in a solid “brick” resembling a large integrated circuit package.
It isn’t very difficult to build a crude operational amplifier using discrete components. A schematic of one such circuit is shown in Figure below.
A simple operational amplifier made from discrete components.
While its performance is rather dismal by modern standards, it demonstrates that complexity is not necessary to create a minimally functional op-amp. Transistors Q3 and Q4 form the heart of another differential pair circuit, the semiconductor equivalent of the first triode tube in the K2-W schematic. As it was in the vacuum tube circuit, the purpose of a differential pair is to amplify and convert a differential voltage between the two input terminals to a single-ended output voltage.
With the advent of integrated-circuit (IC) technology, op-amp designs experienced a dramatic increase in performance, reliability, density, and economy. Between the years of 1964 and 1968, the Fairchild corporation introduced three models of IC op-amps: the 702, 709, and the still-popular 741. While the 741 is now considered outdated in terms of performance, it is still a favorite among hobbyists for its simplicity and fault tolerance (short-circuit protection on the output, for instance). Personal experience abusing many 741 op-amps has led me to the conclusion that it is a hard chip to kill . . .
The internal schematic diagram for a model 741 op-amp is shown in Figure below.
Schematic diagram of a model 741 op-amp.
By integrated circuit standards, the 741 is a very simple device: an example of small-scale integration, or SSI technology. It would be no small matter to build this circuit using discrete components, so you can see the advantages of even the most primitive integrated circuit technology over discrete components where high parts counts are involved.
For the hobbyist, student, or engineer desiring greater performance, there are literally hundreds of op-amp models to choose from. Many sell for less than a dollar apiece, even retail! Special-purpose instrumentation and radio-frequency (RF) op-amps may be quite a bit more expensive. In this section I will showcase several popular and affordable op-amps, comparing and contrasting their performance specifications. The venerable 741 is included as a “benchmark” for comparison, although it is, as I said before, considered an obsolete design.
Listed in Table above are but a few of the low-cost operational amplifier models widely available from electronics suppliers. Most of them are available through retail supply stores such as Radio Shack. All are under $1.00 cost direct from the manufacturer (year 2001 prices). As you can see, there is substantial variation in performance between some of these units. Take for instance the parameter of input bias current: the CA3130 wins the prize for lowest, at 0.05 nA (or 50 pA), and the LM833 has the highest at slightly over 1 µA. The model CA3130 achieves its incredibly low bias current through the use of MOSFET transistors in its input stage. One manufacturer advertises the 3130’s input impedance as 1.5 tera-ohms, or 1.5 x 10^12 Ω! Other op-amps shown here with low bias current figures use JFET input transistors, while the high bias current models use bipolar input transistors.
While the 741 is specified in many electronic project schematics and showcased in many textbooks, its performance has long been surpassed by other designs in every measure. Even some designs originally based on the 741 have been improved over the years to far surpass original design specifications. One such example is the model 1458, two op-amps in an 8-pin DIP package, which at one time had the exact same performance specifications as the single 741. In its latest incarnation it boasts a wider power supply voltage range, a slew rate 50 times as great, and almost twice the output current capability of a 741, while still retaining the output short-circuit protection feature of the 741. Op-amps with JFET and MOSFET input transistors far exceed the 741’s performance in terms of bias current, and generally manage to beat the 741 in terms of bandwidth and slew rate as well.
My own personal recommendations for op-amps are as such: when low bias current is a priority (such as in low-speed integrator circuits), choose the 3130. For general-purpose DC amplifier work, the 1458 offers good performance (and you get two op-amps in the space of one package). For an upgrade in performance, choose the model 353, as it is a pin-compatible replacement for the 1458. The 353 is designed with JFET input circuitry for very low bias current, and has a bandwidth 4 times as great as the 1458, although its output current limit is lower (but still short-circuit protected). It may be more difficult to find on the shelf of your local electronics supply house, but it is just as reasonably priced as the 1458.
If low power supply voltage is a requirement, I recommend the model 324, as it functions on as low as 3 volts DC. Its input bias current requirements are also low, and it provides four op-amps in a single 14-pin chip. Its major weakness is speed, limited to 1 MHz bandwidth and an output slew rate of only 0.25 volts per µs. For high-frequency AC amplifier circuits, the 318 is a very good “general purpose” model.
Special-purpose op-amps are available for modest cost which provide better performance specifications. Many of these are tailored for a specific type of performance advantage, such as maximum bandwidth or minimum bias current. Take for instance the two op-amps listed in Table below, both designed for high bandwidth.
The CLC404 lists at $21.80 (almost as much as George Philbrick’s first commercial op-amp, albeit without correction for inflation), while the CLC425 is quite a bit less expensive at $3.23 per unit. In both cases high speed is achieved at the expense of high bias currents and restrictive power supply voltage ranges. Some op-amps, designed for high power output, are listed in Table below.
Yes, the LM12CL actually has an output current rating of 13 amps (13,000 milliamps)! It lists at $14.40, which is not a lot of money, considering the raw power of the device. The LM7171, on the other hand, trades high current output ability for fast voltage output ability (a high slew rate). It lists at $1.19, about as low as some “general purpose” op-amps.
Amplifier packages may also be purchased as complete application circuits as opposed to bare operational amplifiers. The Burr-Brown and Analog Devices corporations, for example, both long known for their precision amplifier product lines, offer instrumentation amplifiers in pre-designed packages as well as other specialized amplifier devices. In designs where high precision and repeatability after repair is important, it might be advantageous for the circuit designer to choose such a pre-engineered amplifier “block” rather than build the circuit from individual op-amps. Of course, these units typically cost quite a bit more than individual op-amps.
8.15: Op-Amp Data
Parametrical data for all semiconductor op-amp models except the CA3130 comes from National Semiconductor’s online resources, available at this website: [*]. Data for the CA3130 comes from Harris Semiconductor’s CA3130/CA3130A datasheet (file number 817.4).
Volume I chapter 1.1 discusses static electricity, and how it is created. This has a lot more significance than might be first assumed, as control of static electricity plays a large part in modern electronics and other professions. An ElectroStatic Discharge event is when a static charge is bled off in an uncontrolled fashion and will be referred to as ESD hereafter.
ESD comes in many forms; it can range from as little as 50 volts of electricity being equalized up to tens of thousands of volts. The actual energy is extremely small, so small that it generally poses no danger to someone in the discharge path of ESD. It usually takes several thousand volts for a person to even notice ESD in the form of a spark and the familiar zap that accompanies it. The problem with ESD is that even a small discharge, one that can go completely unnoticed, can ruin semiconductors. A static charge of thousands of volts is common; however, it is not a threat because there is no current of any substantial duration behind it. These extreme voltages do allow ionization of the air and breakdown of other materials, which is the root of where the damage comes from.
ESD is not a new problem. Black powder manufacturing and other pyrotechnic industries have always been dangerous if an ESD event occurs in the wrong circumstance. During the era of tubes (AKA valves) ESD was a nonexistent issue for electronics, but with the advent of semiconductors, and the increase in miniaturization, it has become much more serious.
Damage to components can, and usually does, occur when the part is in the ESD path. Many parts, such as power diodes, are very robust and can handle the discharge, but if a part has a small or thin geometry as part of its physical structure then the voltage can break down that part of the semiconductor. Currents during these events become quite high, but only in the nanosecond to microsecond time frame. Part of the component is left permanently damaged by this, which can cause two types of failure modes. Catastrophic is the easy one, leaving the part completely nonfunctional. The other can be much more serious. Latent damage may allow the problem component to work for hours, days or even months after the initial damage before catastrophic failure. Many times these parts are referred to as “walking wounded,” since they are working but bad. Figure below shows an example of latent (“walking wounded”) ESD damage. If these components end up in a life support role, such as medical or military use, then the consequences can be grim. For most hobbyists it is an inconvenience, but it can be an expensive one.
Even components that are considered fairly rugged can be damaged by ESD. Bipolar transistors, the earliest of the solid state amplifiers, are not immune, though less susceptible. Some of the newer high-speed components can be ruined with as little as 3 volts. There are components that might not be considered at risk, such as some specialized resistors and capacitors manufactured using MOS (Metal Oxide Semiconductor) technology, that can be damaged via ESD.
ESD Damage Prevention
Before ESD can be prevented it is important to understand what causes it. Generally, materials around the workbench can be broken up into 3 categories. These are ESD Generative, ESD Neutral, and ESD Dissipative (or ESD Conductive). ESD Generative materials are active static generators, such as most plastics, cat hair, and polyester clothing. ESD Neutral materials are generally insulative but don’t tend to generate or hold static charges very well. Examples of this include wood, paper, and cotton. This is not to say they can not be static generators or an ESD hazard, but the risk is somewhat minimized by other factors. Wood and wood products, for example, tend to hold moisture, which can make them slightly conductive. This is true of a lot of organic materials. A highly polished table would not fall under this category because the gloss is usually plastic, or varnish, which are highly efficient insulators. ESD Conductive materials are pretty obvious, they are the metal tools laying around. Plastic handles can be a problem, but the metal will bleed a static charge away as fast as it is generated if it is on a grounded surface. There are a lot of other materials, such as some plastics, that are designed to be conductive. They would fall under the heading of ESD Dissipative. Dirt and concrete are also conductive, and fall under the ESD Dissipative heading.
There are a lot of activities that generate static, which you need to be aware of as part of an ESD control regimen. The simple act of pulling the tape off a dispenser can generate an extreme voltage. Rolling around in a chair is another static generator, as is scratching. In fact, any activity that allows 2 or more surfaces to rub against each other is pretty certain to generate some static charge. This was mentioned in the beginning of this book, but real world examples can be subtle. This is why a method for continuously bleeding off this voltage is needed. Things that generate huge amounts of static should be avoided while working on components.
Plastic is usually associated with the generation of static. This has been gotten around in the form of conductive plastics. The usual way to make conductive plastic is an additive that changes the electrical characteristics of the plastic from an insulator to a conductor, although it will likely still have a resistance of millions of ohms per square inch. Plastics have also been developed that can be used as conductors in low weight applications, such as those in the airline industries. These are specialist applications and are not generally associated with ESD control.
It is not all bad news for ESD protection. The human body is a pretty decent conductor. High humidity in the air will also allow a static charge to dissipate harmlessly away, as well as making ESD Neutral materials more conductive. This is why cold winter days, when the humidity inside a house can be quite low, increase the number of sparks on a doorknob. On summer or rainy days, you would have to work quite hard to generate a substantial amount of static. Industry clean rooms and factory floors go to the effort of regulating both temperature and humidity for this reason. Concrete floors are also conductive, so there may be some existing components in the home that can aid in setting up protections.
To establish ESD protection there has to be a standard voltage level that everything is referenced to. Such a level exists in the form of ground. There are very good safety reasons that ground is used around the house in outlets. In some ways, this relates to static, but not directly. It does give us a place to dump our excess electrons or acquire some if we are short, to neutralize any charges our bodies and tools might acquire. If everything on a workbench is connected directly or indirectly to ground via a conductor then static will dissipate long before an ESD event has a chance to occur.
A good grounding point can be made several different ways. In houses with modern wiring that is up to code the ground pin on the AC plug in can be used, or the screw that holds the outlets cover plate on. This is because house wiring actually has a wire or spike going into the earth somewhere where the power is tapped from the main power lines. For people whose house wiring isn’t quite right a spike driven into the earth at least 3 feet or a simple electrical connection to metal plumbing (worst option) can be used. The main thing is to establish an electrical path to the earth outside the house.
Ten megohms is considered a conductor in the world of ESD control. Static electricity is voltage with no real current, and if a charge is bled off seconds after being generated it is nullified. Generally, a 1 to 10 megohm resistor is used to connect any ESD protection for this reason. It has the benefit of slowing the discharge rate during an ESD event, which increases the likelihood of a component surviving undamaged. The faster the discharge, the higher the current spike going through the component. Another reason such a resistance is considered desirable is if the user is accidentally shorted to high voltage, such as household current, it won’t be the ESD protections that kill them.
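To put rough numbers on why a megohm-range resistance still counts as a conductor here, the sketch below uses a common industry assumption of about 100 pF for body capacitance. Both the time constant and the fault-current figure are illustrative estimates, not specifications.

```python
# Rough numbers for a megohm-range ESD ground connection.
# Body capacitance of ~100 pF is a common human-body-model style assumption.

C_BODY = 100e-12    # body capacitance (assumed)
R_STRAP = 1e6       # wrist-strap / bench series resistor, 1 megohm

tau = R_STRAP * C_BODY
print(f"discharge time constant: {tau*1e6:.0f} us; ~99% bled off after {5*tau*1e3:.1f} ms")

V_MAINS = 120.0     # household voltage (region dependent)
print(f"fault current limited to about {V_MAINS/R_STRAP*1e6:.0f} uA at {V_MAINS:.0f} V")
```

In other words, a static charge on the body is gone in a fraction of a millisecond through the strap, while the same resistance keeps an accidental contact with household voltage down to a harmless current.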
A large industry has grown up around controlling ESD in electronics. The staple of any electronics construction is a workbench with a static conductive or dissipative surface. This surface can be bought commercially, or homemade in the form of a sheet of metal or foil. In the case of a metal surface, it might be a good idea to lay thin paper on top, although it is not necessary if you are not doing any powered tests on the surface. The commercial version is usually some form of conductive plastic whose resistance is high enough not to be a problem, which is a better solution. If you are making your own surface for the workbench, be sure to add the 10 megohm resistor to ground, otherwise you have no protection at all.
The other big item that needs to be ESD grounded is you. People are walking static generators. Since your body is conductive, it is relatively easy to ground; this is usually done with a wrist strap. Commercial versions already have the resistor built in and have a wide strap to offer a good contact surface with your skin. Disposable versions can be bought for a few dollars. A metal watch band is also a good ESD protection connection point. Just add a wire (with the resistor) to your grounding point. Most industries take the issue seriously enough to use real time monitors that will sound an alarm if the operator is not properly grounded.
Another way of grounding yourself is a heel strap. A conductive plastic part is wrapped around the heel of your shoe, with a conductive plastic strap going up and under your sock for good contact with the skin. It only works on floors with conductive wax or concrete. The method will keep a person from generating large charges that can overwhelm other ESD protections and is not considered adequate in and of itself. You can get the same effect by walking barefoot on a concrete floor.
Yet another ESD protection is to wear ESD conductive smocks. Like the heel strap, this is a secondary protection, not meant to replace the wrist strap. They are meant to short circuit any charges that your clothes may generate.
Moving air can also generate substantial static charges. When you blow the dust off your electronics, static will be generated. The industrial solution to this issue is twofold: Firstly, some air guns have a small, well shielded radioactive source built in to ionize the air. Ionized air is a conductor and will bleed off static charges quite well. Secondly, high voltage electricity can be used to ionize the air coming out of a fan, which has the same effect as the air gun. This will effectively help a workstation reduce the potential for ESD generation by a large amount.
Another ESD protection is the simplest of all: distance. Many industries have rules stating that all Neutral and Generative materials must be kept at least 12 inches from any work in progress.
The user can also reduce the possibility of ESD damage by simply not removing the part from its protective packaging until it is time to insert it into the circuit. This reduces the likelihood of ESD exposure, and while the circuit will still be vulnerable, the component will have some minor protection from the rest of the components, as the other components offer alternative discharge paths for ESD.
Storage and Transportation of ESD-sensitive components and boards
It does no good to follow ESD protections on the workbench if the parts are being damaged while storing or carrying them. The most common method is to use a variation of a Faraday cage, an ESD bag. An ESD bag surrounds the component with a conductive shield and usually has a nonstatic generating insulative layer inside. In permanent Faraday cages this shield is grounded, as in the case of RFI rooms, but with portable containers, this isn’t practical. By putting an ESD bag on a grounded surface the same thing is accomplished. Faraday cages work by routing the electric charge around the contents and grounding them immediately. A car struck by lightning is an extreme example of a Faraday cage.
Static bags are by far the most common method of storing components and boards. They are made using extremely thin layers of metal, so thin as to be almost transparent. A bag with a hole, even a small one, or one that is not folded over on top to seal the contents from outside charges, is ineffective.
Another method of protecting parts in storage is totes or tubes. In these cases, the parts are put into conductive boxes, with a lid of the same material. This effectively forms a Faraday cage. A tube is meant for ICs and other devices with a lot of pins, and stores the parts in a molded conductive plastic tube that keeps the parts safe both mechanically and electrically.
Conclusion
ESD can be a minor unfelt event measuring a few volts, or a massive event presenting real dangers to operators. All ESD protections can be overwhelmed by circumstance, but this risk can be reduced by awareness of what ESD is and how to prevent it. Many projects have been built with no ESD protections at all and worked well. Given that protecting these projects is a minor inconvenience, it is better to make the effort.
Industry takes the problem very seriously, as both a potential life threatening issue and a quality issue. Someone who buys an expensive piece of electronics or high tech hardware is not going to be happy if they have to return it in 6 months. When a reputation is on the line it is easier to do the right thing. | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_III_-_Semiconductors_(Kuphaldt)/09%3A_Practical_Analog_Semiconductor_Circuits/9.01%3A_ElectroStatic_Discharge.txt |
There are three major kinds of power supplies: unregulated (also called brute force), linear regulated, and switching. A fourth type of power supply circuit, called the ripple-regulated, is a hybrid between the “brute force” and “switching” designs, and merits a subsection to itself.
Unregulated
An unregulated power supply is the most rudimentary type, consisting of a transformer, rectifier, and low-pass filter. These power supplies typically exhibit a lot of ripple voltage (i.e. rapidly-varying instability) and other AC “noise” superimposed on the DC power. If the input voltage varies, the output voltage will vary by a proportional amount. The advantage of an unregulated supply is that it’s cheap, simple, and efficient.
Linear regulated
A linear regulated supply is simply a “brute force” (unregulated) power supply followed by a transistor circuit operating in its “active,” or “linear” mode, hence the name linear regulator. (Obvious in retrospect, isn’t it?) A typical linear regulator is designed to output a fixed voltage for a wide range of input voltages, and it simply drops any excess input voltage to allow a maximum output voltage to the load. This excess voltage drop results in significant power dissipation in the form of heat. If the input voltage gets too low, the transistor circuit will lose regulation, meaning that it will fail to keep the voltage steady. It can only drop excess voltage, not make up for a deficiency in voltage from the brute force section of the circuit. Therefore, you have to keep the input voltage at least 1 to 3 volts higher than the desired output, depending on the regulator type. This means the power equivalent of at least 1 to 3 volts multiplied by the full load current will be dissipated by the regulator circuit, generating a lot of heat. This makes linear regulated power supplies rather inefficient. Also, to get rid of all that heat they have to use large heat sinks which make them large, heavy, and expensive.
Switching
A switching regulated power supply (“switcher”) is an effort to realize the advantages of both brute force and linear regulated designs (small, efficient, and cheap, but also “clean,” stable output voltage). Switching power supplies work on the principle of rectifying the incoming AC power line voltage into DC, re-converting it into high-frequency square-wave AC through transistors operated as on/off switches, stepping that AC voltage up or down by using a lightweight transformer, then rectifying the transformer’s AC output into DC and filtering for final output. Voltage regulation is achieved by altering the “duty cycle” of the DC-to-AC inversion on the transformer’s primary side. In addition to lighter weight because of a smaller transformer core, switchers have another tremendous advantage over the prior two designs: this type of power supply can be made so totally independent of the input voltage that it can work on any electric power system in the world; these are called “universal” power supplies.
The downside of switchers is that they are more complex, and due to their operation they tend to generate a lot of high-frequency AC “noise” on the power line. Most switchers also have significant ripple voltage on their outputs. With the cheaper types, this noise and ripple can be as bad as for an unregulated power supply; such low-end switchers aren’t worthless, because they still provide a stable average output voltage, and there’s the “universal” input capability.
Expensive switchers are ripple-free and have noise nearly as low as that of some linear types; these switchers tend to be as expensive as linear supplies. The reason to use an expensive switcher instead of a good linear is if you need universal power system compatibility or high efficiency. High efficiency, light weight, and small size are the reasons switching power supplies are almost universally used for powering digital computer circuitry.
Ripple regulated
A ripple-regulated power supply is an alternative to the linear regulated design scheme: a “brute force” power supply (transformer, rectifier, filter) constitutes the “front end” of the circuit, but a transistor operated strictly in its on/off (saturation/cutoff) modes transfers DC power to a large capacitor as needed to maintain the output voltage between a high and a low setpoint. As in switchers, the transistor in a ripple regulator never passes current while in its “active,” or “linear,” mode for any substantial length of time, meaning that very little energy will be wasted in the form of heat. However, the biggest drawback to this regulation scheme is the necessary presence of some ripple voltage on the output, as the DC voltage varies between the two voltage control setpoints. Also, this ripple voltage varies in frequency depending on load current, which makes final filtering of the DC power more difficult.
Ripple regulator circuits tend to be quite a bit simpler than switcher circuitry, and they need not handle the high power line voltages that switcher transistors must handle, making them safer to work on.
9.03: Amplifier Circuits
Note, Q3 and Q4 in Figure below are complementary, NPN and PNP respectively. This circuit works well for moderate power audio amplifiers. For an explanation of this circuit see “Directly coupled complementary-pair,” Ch 4 .
Direct coupled complementary symmetry 3 W audio amplifier. After Mullard. [MUL]
9.04: Oscillator Circuits
Phase shift oscillator. R1C1, R2C2, and R3C3 each provide 60° of phase shift.
The phase shift oscillator of Figure above produces a sinewave output in the audio frequency range. Resistive feedback from the collector would be negative feedback due to 180° phasing (base to collector phase inversion). However, the three 60° RC phase shifters (R1C1, R2C2, and R3C3) provide an additional 180° for a total of 360°. This in-phase feedback constitutes positive feedback. Oscillations result if transistor gain exceeds feedback network losses.
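For readers who want a ballpark operating frequency, the commonly quoted approximation for three identical, lightly loaded RC sections is f = 1/(2πRC√6). The component values below are made-up examples for illustration, not values taken from the figure, and the formula ignores the loading of one section on the next.

    import math
    # Approximate oscillation frequency for three identical RC phase shift sections,
    # neglecting loading between sections: f = 1 / (2 * pi * R * C * sqrt(6)).
    R = 10e3         # ohms (hypothetical)
    C = 0.01e-6      # farads (hypothetical)
    f = 1 / (2 * math.pi * R * C * math.sqrt(6))
    print(round(f))  # about 650 Hz, squarely in the audio range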
Varactor multiplier
A varactor, or variable capacitance diode, with a nonlinear capacitance vs voltage characteristic distorts the applied sinewave f1 in Figure below, generating harmonics, f3.
Varactor diode, having a nonlinear capacitance vs voltage characteristic, serves in frequency multiplier.
The fundamental filter passes f1, blocking the harmonics from returning to the generator. The choke passes DC, and blocks radio frequencies (RF) from entering the Vbias supply. The harmonic filter passes the desired harmonic, say the 3rd, to the output, f3. The capacitor at the bottom of the inductor is a large value, low reactance, to block DC but ground the inductor for RF. The varicap diode in parallel with the inductor constitutes a parallel resonant network. It is tuned to the desired harmonic. Note that the reverse bias, Vbias, is fixed. The varicap multiplier is primarily used to generate microwave signals which cannot be directly produced by oscillators. The lumped circuit representation in Figure above is actually built from stripline or waveguide sections. Frequencies up to hundreds of GHz may be produced by varactor multipliers.
(a) Crystal radio. (b) Modulated RF at antenna. (c) Rectified RF at diode cathode, without C2 filter capacitor. (d) Demodulated audio to headphones.
An antenna ground system, tank circuit, peak detector, and headphones are the main components of a crystal radio. See Figure above (a). The antenna absorbs transmitted radio signals (b) which flow to ground via the other components. The combination of C1 and L1 comprises a resonant circuit, referred to as a tank circuit. Its purpose is to select one out of many available radio signals. The variable capacitor C1 allows for tuning to the various signals. The diode passes the positive half cycles of the RF, removing the negative half cycles (c). C2 is sized to filter the radio frequencies from the RF envelope (c), passing audio frequencies (d) to the headset. Note that no power supply is required for a crystal radio. A germanium diode, which has a lower forward voltage drop, provides greater sensitivity than a silicon diode.
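The tank circuit selects a station according to the familiar resonance formula f = 1/(2π√(LC)). The component values below are typical hobbyist numbers chosen purely for illustration; they are not taken from the schematic.

    import math
    # Resonant frequency of the L1-C1 tank: f = 1 / (2 * pi * sqrt(L * C)).
    L = 240e-6                        # henries, an assumed antenna coil value
    for C in (40e-12, 365e-12):       # farads, ends of a common variable capacitor's range
        f = 1 / (2 * math.pi * math.sqrt(L * C))
        print(round(f / 1e3), "kHz")  # about 1620 kHz down to about 540 kHz

With those assumed values the tuning range spans roughly 540 kHz to 1620 kHz, which covers the AM broadcast band.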
While 2000Ω magnetic headphones are shown above, a ceramic earphone, sometimes called a crystal earphone, is more sensitive. The ceramic earphone is desirable for all but the strongest radio signals.
The circuit in Figure below produces a stronger output than the crystal detector. Since the transistor is not biased in the linear region (no base bias resistor), it only conducts for positive half cycles of RF input, detecting the audio modulation. An advantage of a transistor detector is amplification in addition to detection. This more powerful circuit can readily drive 2000Ω magnetic headphones. Note the transistor is a germanium PNP device. This is probably more sensitive, due to the lower 0.2V VBE, compared with silicon. However, a silicon device should still work. Reverse battery polarity for NPN silicon devices.
TR One, one transistor radio. No-bias-resistor causes operation as a detector. After Stoner, Figure 4.4A. [DLS]
The 2000Ω headphones are no longer a widely available item. However, the low impedance earbuds commonly used with portable audio equipment may be substituted when paired with a suitable audio transformer. See Volume 6 Experiments, AC Circuits, Sensitive audio detector for details.
The circuit in Figure below adds an audio amplifier to the crystal detector for greater headphone volume. The original circuit used a germanium diode and transistor. [DLS] A schottky diode may be substituted for the germanium diode. A silicon transistor may be used if the base-bias resistor is changed according to the table.
Crystal radio with one transistor audio amplifier, base-bias. After Stoner, Figure 4.3A. [DLS]
For more crystal radio circuits, simple one-transistor radios, and more advanced low transistor count radios, see Wenzel. [CW1]
Regency TR1: First mass produced transistor radio, 1954.
The circuit in Figure below is an integrated circuit AM radio containing all the active radio frequency circuitry within a single IC. All capacitors and inductors, along with a few resistors, are external to the IC. The 370 pF variable capacitor tunes the desired RF signal. The 320 pF variable capacitor tunes the local oscillator 455 kHz above the RF input signal. The RF signal and local oscillator frequencies mix, producing the sum and difference of the two at pin 15. The external 455 kHz ceramic filter between pins 15 and 12 selects the 455 kHz difference frequency. Most of the amplification is in the intermediate frequency (IF) amplifier between pins 12 and 7. A diode at pin 7 recovers audio from the IF. Some automatic gain control (AGC) is recovered and filtered to DC and fed back into pin 9.
IC radio, After Signetics [SIG]
Figure below compares conventional mechanical tuning (a) of the RF input tuner and the local oscillator with varactor diode tuning (b). The meshed plates of a dual variable capacitor make for a bulky component. It is economical to replace it with varicap tuning diodes. Increasing the reverse bias Vtune decreases capacitance, which increases frequency. Vtune could be produced by a potentiometer.
IC radio comparison of (a) mechanical tuning to (b) electronic varicap diode tuning.[SIG]
Figure below shows an even lower parts count AM radio. Sony engineers have included the intermediate frequency (IF) bandpass filter within the 8-pin IC. This eliminates external IF transformers and an IF ceramic filter. L-C tuning components are still required for the radio frequency (RF) input and the local oscillator. Though, the variable capacitors could be replaced by varicap tuning diodes.
Compact IC radio eliminates external IF filters. After Sony [SNE]
Figure below shows a low-parts-count FM radio based on a TDA7021T integrated circuit by NXP Wireless. The bulky external IF filter transformers have been replaced by R-C filters. The resistors are integrated, the capacitors external. This circuit has been simplified from Figure 5 in the NXP Datasheet. See Figure 5 or 8 of the datasheet for the omitted signal strength circuit. The simple tuning circuit is from the Figure 5 Test Circuit. Figure 8 has a more elaborate tuner. Datasheet Figure 8 shows a stereo FM radio with an audio amplifier for driving a speaker. [NXP]
IC FM radio, signal strength circuit not shown. After NXP Wireless Figure 5. [NXP]
For a construction project, the simplified FM Radio in Figure above is recommended. For the 56nH inductor, wind 8 turns of #22 AWG bare wire or magnet wire on a 0.125 inch drill bit or other mandrel. Remove the mandrel and stretch the coil to 0.6 inch length. The tuning capacitor may be a miniature trimmer capacitor.
Figure below is an example of a common-base (CB) RF amplifier. It is a good illustration because the common-base configuration is easy to recognize here, there being no bias network to obscure it. Since there is no bias, this is a class C amplifier. The transistor conducts for less than 180° of the input signal because at least 0.7 V of bias would be required for 180° class B operation. The common-base configuration has higher power gain at high RF frequencies than common-emitter. This is a power amplifier (3/4 W) as opposed to a small signal amplifier. The input and output π-networks match the emitter and collector to the 50 Ω input and output coaxial terminations, respectively. The output π-network also helps filter harmonics generated by the class C amplifier. More sections, though, would likely be required to meet modern radiated emissions standards.
Class C common-base 750 mW RF power amplifier. L1 = #10 Cu wire 1/2 turn, 5/8 in. ID by 3/4 in. high. L2 = #14 tinned Cu wire 1 1/2 turns, 1/2 in. ID by 1/3 in. spacing. After Texas Instruments [TX1]
An example of a high gain common-base RF amplifier is shown in Figure below. The common-base circuit can be pushed to a higher frequency than other configurations. This is a common base configuration because the transistor bases are grounded for AC by 1000 pF capacitors. The capacitors are necessary (unlike in the class C circuit, Figure previous) to allow the 1KΩ-4KΩ voltage divider to bias the transistor base for class A operation. The 500Ω resistors are emitter bias resistors. They stabilize the collector current. The 850Ω resistors are collector DC loads. The three stage amplifier provides an overall gain of 38 dB at 100 MHz with a 9 MHz bandwidth.
Class A common-base small-signal high gain amplifier. After Texas Instruments [TX2]
A cascode amplifier has a wide bandwidth like a common-base amplifier and a moderately high input impedance like a common-emitter arrangement. The biasing for this cascode amplifier (Figure below) is worked out in an example problem, Ch 4 .
Class A cascode small-signal high gain amplifier.
This circuit (Figure above) is simulated in the “Cascode” section of the BJT chapter Ch 4 . Use RF or microwave transistors for best high frequency response.
PIN diode T/R switch disconnects receiver from antenna during transmit.
PIN diode antenna switch for direction finder receiver.
PIN diode attenuator: PIN diodes function as voltage variable resistors. After Lin [LCC].
The PIN diodes are arranged in a π-attenuator network. The anti-series diodes cancel some harmonic distortion compared with a single series diode. The fixed 1.25 V supply forward biases the parallel diodes, which not only conduct DC current from ground via the resistors, but also conduct RF to ground through the diodes’ capacitors. Increasing the control voltage Vcontrol increases current through the parallel diodes. This decreases the resistance and attenuation, passing more RF from input to output. Attenuation is about 3 dB at Vcontrol= 5 V. Attenuation is 40 dB at Vcontrol= 1 V with flat frequency response to 2 GHz. At Vcontrol= 0.5 V, attenuation is 80 dB at 10 MHz. However, the frequency response varies too much to use. [LCC]
When someone mentions the word “computer,” a digital device is what usually comes to mind. Digital circuits represent numerical quantities in binary format: patterns of 1’s and 0’s represented by a multitude of transistor circuits operating in saturated or cutoff states. However, analog circuitry may also be used to represent numerical quantities and perform mathematical calculations, by using variable voltage signals instead of discrete on/off states.
Here is a simple example of binary (digital) representation versus analog representation of the number “twenty-five:”
Digital circuits are very different from circuits built on analog principles. Digital computational circuits can be incredibly complex, and calculations must often be performed in sequential “steps” to obtain a final answer, much as a human being would perform arithmetical calculations in steps with pencil and paper. Analog computational circuits, on the other hand, are quite simple in comparison, and perform their calculations in continuous, real-time fashion. There is a disadvantage to using analog circuitry to represent numbers, though: imprecision. The digital circuit shown above is representing the number twenty-five, precisely. The analog circuit shown above may or may not be exactly calibrated to 25.000 volts, but is subject to “drift” and error.
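As a quick check of the digital pattern, twenty-five in binary is 11001 (16 + 8 + 1); the one-liner below is purely illustrative.

    # The number twenty-five expressed in binary.
    print(format(25, 'b'))    # prints 11001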
In applications where precision is not critical, analog computational circuits are very practical and elegant. Shown here are a few op-amp circuits for performing analog computation:
Each of these circuits may be used in modular fashion to create a circuit capable of multiple calculations. For instance, suppose that we needed to subtract a certain fraction of one variable from another variable. By combining a divide-by-constant circuit with a subtractor circuit, we could obtain the required function:
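In signal terms, the combined circuit computes something of the form vout = v1 - v2/k, where k is the divide-by-constant factor. The sketch below uses made-up voltages and an arbitrary k of 4 simply to show the arithmetic that the analog hardware performs continuously.

    # Subtracting a fraction of one variable from another: vout = v1 - v2 / k.
    def combined(v1, v2, k=4.0):       # k is the divide-by-constant factor (hypothetical)
        return v1 - v2 / k

    print(combined(10.0, 8.0))         # 10 - 8/4 = 8.0 volts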
Devices called analog computers used to be common in universities and engineering shops, where dozens of op-amp circuits could be “patched” together with removable jumper wires to model mathematical statements, usually for the purpose of simulating some physical process whose underlying equations were known. Digital computers have made analog computers all but obsolete, but analog computational circuitry cannot be beaten by digital in terms of sheer elegance and economy of necessary components.
Analog computational circuitry excels at performing the calculus operations integration and differentiation with respect to time, by using capacitors in an op-amp feedback loop. To fully understand these circuits’ operation and applications, though, we must first grasp the meaning of these fundamental calculus concepts. Fortunately, the application of op-amp circuits to real-world problems involving calculus serves as an excellent means to teach basic calculus. In the words of John I. Smith, taken from his outstanding textbook, Modern Operational Circuit Design:
“A note of encouragement is offered to certain readers: integral calculus is one of the mathematical disciplines that operational [amplifier] circuitry exploits and, in the process, rather demolishes as a barrier to understanding.” (pg. 4)
Mr. Smith’s sentiments on the pedagogical value of analog circuitry as a learning tool for mathematics are not unique. Consider the opinion of engineer George Fox Lang, in an article he wrote for the August 2000 issue of the journal Sound and Vibration, entitled, “Analog was not a Computer Trademark!”:
“Creating a real physical entity (a circuit) governed by a particular set of equations and interacting with it provides unique insight into those mathematical statements. There is no better way to develop a “gut feel” for the interplay between physics and mathematics than to experience such an interaction. The analog computer was a powerful interdisciplinary teaching tool; its obsolescence is mourned by many educators in a variety of fields.” (pg. 23)
Differentiation is the first operation typically learned by beginning calculus students. Simply put, differentiation is determining the instantaneous rate-of-change of one variable as it relates to another. In analog differentiator circuits, the independent variable is time, and so the rates of change we’re dealing with are rates of change for an electronic signal (voltage or current) with respect to time.
Suppose we were to measure the position of a car, traveling in a direct path (no turns), from its starting point. Let us call this measurement, x. If the car moves at a rate such that its distance from “start” increases steadily over time, its position will plot on a graph as a linear function (straight line):
If we were to calculate the derivative of the car’s position with respect to time (that is, determine the rate-of-change of the car’s position with respect to time), we would arrive at a quantity representing the car’s velocity. The differentiation function is represented by the fractional notation d/d, so when differentiating position (x) with respect to time (t), we denote the result (the derivative) as dx/dt:
For a linear graph of x over time, the derivative of position (dx/dt), otherwise and more commonly known as velocity, will be a flat line, unchanging in value. The derivative of a mathematical function may be graphically understood as its slope when plotted on a graph, and here we can see that the position (x) graph has a constant slope, which means that its derivative (dx/dt) must be constant over time.
Now, suppose the distance traveled by the car increased exponentially over time: that is, it began its travel in slow movements, but covered more additional distance with each passing period in time. We would then see that the derivative of position (dx/dt), otherwise known as velocity (v), would not be constant over time, but would increase:
The height of points on the velocity graph correspond to the rates-of-change, or slope, of points at corresponding times on the position graph:
What does this have to do with analog electronic circuits? Well, if we were to have an analog voltage signal represent the car’s position (think of a huge potentiometer whose wiper was attached to the car, generating a voltage proportional to the car’s position), we could connect a differentiator circuit to this signal and have the circuit continuously calculate the car’s velocity, displaying the result via a voltmeter connected to the differentiator circuit’s output:
Recall from the last chapter that a differentiator circuit outputs a voltage proportional to the input voltage’s rate-of-change over time (d/dt). Thus, if the input voltage is changing over time at a constant rate, the output voltage will be at a constant value. If the car moves in such a way that its elapsed distance over time builds up at a steady rate, then that means the car is traveling at a constant velocity, and the differentiator circuit will output a constant voltage proportional to that velocity. If the car’s elapsed distance over time changes in a non-steady manner, the differentiator circuit’s output will likewise be non-steady, but always at a level representative of the input’s rate-of-change over time.
Note that the voltmeter registering velocity (at the output of the differentiator circuit) is connected in “reverse” polarity to the output of the op-amp. This is because the differentiator circuit shown is inverting: outputting a negative voltage for a positive input voltage rate-of-change. If we wish to have the voltmeter register a positive value for velocity, it will have to be connected to the op-amp as shown. As impractical as it may be to connect a giant potentiometer to a moving object such as an automobile, the concept should be clear: by electronically performing the calculus function of differentiation on a signal representing position, we obtain a signal representing velocity.
Beginning calculus students learn symbolic techniques for differentiation. However, this requires that the equation describing the original graph be known. For example, calculus students learn how to take a function such as y = 3x and find its derivative with respect to x (d/dx), 3, simply by manipulating the equation. We may verify the accuracy of this manipulation by comparing the graphs of the two functions:
Nonlinear functions such as y = 3x² may also be differentiated by symbolic means. In this case, the derivative of y = 3x² with respect to x is 6x:
In real life, though, we often cannot describe the behavior of any physical event by a simple equation like y = 3x, and so symbolic differentiation of the type learned by calculus students may be impossible to apply to a physical measurement. If someone wished to determine the derivative of our hypothetical car’s position (dx/dt = velocity) by symbolic means, they would first have to obtain an equation describing the car’s position over time, based on position measurements taken from a real experiment—a nearly impossible task unless the car is operated under carefully controlled conditions leading to a very simple position graph. However, an analog differentiator circuit, by exploiting the behavior of a capacitor with respect to voltage, current, and time i = C(dv/dt), naturally differentiates any real signal in relation to time, and would be able to output a signal corresponding to instantaneous velocity (dx/dt) at any moment. By logging the car’s position signal along with the differentiator’s output signal using a chart recorder or other data acquisition device, both graphs would naturally present themselves for inspection and analysis.
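A digital counterpart to what the analog differentiator does continuously is a simple finite difference on sampled data. The position samples below are invented for illustration only.

    # Approximate velocity from sampled position: dx/dt is roughly (x[n+1] - x[n]) / dt.
    dt = 1.0                                  # sampling interval in seconds (assumed)
    x = [0.0, 3.0, 6.0, 9.0, 13.0, 18.0]      # position samples, arbitrary units
    v = [(x[i + 1] - x[i]) / dt for i in range(len(x) - 1)]
    print(v)    # [3.0, 3.0, 3.0, 4.0, 5.0] -- steady velocity at first, then increasing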
We may take the principle of differentiation one step further by applying it to the velocity signal using another differentiator circuit. In other words, use it to calculate the rate-of-change of velocity, which we know is the rate-of-change of position. What practical measure would we arrive at if we did this? Think of this in terms of the units we use to measure position and velocity. If we were to measure the car’s position from its starting point in miles, then we would probably express its velocity in units of miles per hour (dx/dt). If we were to differentiate the velocity (measured in miles per hour) with respect to time, we would end up with a unit of miles per hour per hour. Introductory physics classes teach students about the behavior of falling objects, measuring position in meters, velocity in meters per second, and change in velocity over time in meters per second, per second. This final measure is called acceleration: the rate of change of velocity over time:
The expression d²x/dt² is called the second derivative of position (x) with regard to time (t). If we were to connect a second differentiator circuit to the output of the first, the last voltmeter would register acceleration:
Deriving velocity from position, and acceleration from velocity, we see the principle of differentiation very clearly illustrated. These are not the only physical measurements related to each other in this way, but they are, perhaps, the most common. Another example of calculus in action is the relationship between liquid flow (q) and liquid volume (v) accumulated in a vessel over time:
A “Level Transmitter” device mounted on a water storage tank provides a signal directly proportional to water level in the tank, which—if the tank is of constant cross-sectional area throughout its height—directly equates water volume stored. If we were to take this volume signal and differentiate it with respect to time (dv/dt), we would obtain a signal proportional to the water flow rate through the pipe carrying water to the tank. A differentiator circuit connected in such a way as to receive this volume signal would produce an output signal proportional to flow, possibly substituting for a flow-measurement device (“Flow Transmitter”) installed in the pipe.
Returning to the car experiment, suppose that our hypothetical car were equipped with a tachogenerator on one of the wheels, producing a voltage signal directly proportional to velocity. We could differentiate the signal to obtain acceleration with one circuit, like this:
By its very nature, the tachogenerator differentiates the car’s position with respect to time, generating a voltage proportional to how rapidly the wheel’s angular position changes over time. This provides us with a raw signal already representative of velocity, with only a single step of differentiation needed to obtain an acceleration signal. A tachogenerator measuring velocity, of course, is a far more practical example of automobile instrumentation than a giant potentiometer measuring its physical position, but what we gain in practicality we lose in position measurement. No matter how many times we differentiate, we can never infer the car’s position from a velocity signal. If the process of differentiation brought us from position to velocity to acceleration, then somehow we need to perform the “reverse” process of differentiation to go from velocity to position. Such a mathematical process does exist, and it is called integration. The “integrator” circuit may be used to perform this function of integration with respect to time:
Recall from the last chapter that an integrator circuit outputs a voltage whose rate-of-change over time is proportional to the input voltage’s magnitude. Thus, given a constant input voltage, the output voltage will change at a constant rate. If the car travels at a constant velocity (constant voltage input to the integrator circuit from the tachogenerator), then its distance traveled will increase steadily as time progresses, and the integrator will output a steadily changing voltage proportional to that distance. If the car’s velocity is not constant, then neither will the rate-of-change over time be of the integrator circuit’s output, but the output voltage will faithfully represent the amount of distance traveled by the car at any given point in time.
The symbol for integration looks something like a very narrow, cursive letter “S” (∫). The equation utilizing this symbol (∫v dt = x) tells us that we are integrating velocity (v) with respect to time (dt), and obtaining position (x) as a result.
So, we may express three measures of the car’s motion (position, velocity, and acceleration) in terms of velocity (v) just as easily as we could in terms of position (x):
If we had an accelerometer attached to the car, generating a signal proportional to the rate of acceleration or deceleration, we could (hypothetically) obtain a velocity signal with one step of integration, and a position signal with a second step of integration:
Thus, all three measures of the car’s motion (position, velocity, and acceleration) may be expressed in terms of acceleration:
As you might have suspected, the process of integration may be illustrated in, and applied to, other physical systems as well. Take for example the water storage tank and flow example shown earlier. If flow rate is the derivative of tank volume with respect to time (q = dv/dt), then we could also say that volume is the integral of flow rate with respect to time:
If we were to use a “Flow Transmitter” device to measure water flow, then by time-integration we could calculate the volume of water accumulated in the tank over time. Although it is theoretically possible to use a capacitive op-amp integrator circuit to derive a volume signal from a flow signal, mechanical and digital electronic “integrator” devices are more suitable for integration over long periods of time, and find frequent use in the water treatment and distribution industries.
Just as there are symbolic techniques for differentiation, there are also symbolic techniques for integration, although they tend to be more complex and varied. Applying symbolic integration to a real-world problem like the acceleration of a car, though, is still contingent on the availability of an equation precisely describing the measured signal—often a difficult or impossible thing to derive from measured data. However, electronic integrator circuits perform this mathematical function continuously, in real time, and for any input signal profile, thus providing a powerful tool for scientists and engineers.
Having said this, there are caveats to the using calculus techniques to derive one type of measurement from another. Differentiation has the undesirable tendency of amplifying “noise” found in the measured variable, since the noise will typically appear as frequencies much higher than the measured variable, and high frequencies by their very nature possess high rates-of-change over time.
To illustrate this problem, suppose we were deriving a measurement of car acceleration from the velocity signal obtained from a tachogenerator with worn brushes or commutator bars. Points of poor contact between brush and commutator will produce momentary “dips” in the tachogenerator’s output voltage, and the differentiator circuit connected to it will interpret these dips as very rapid changes in velocity. For a car moving at constant speed—neither accelerating nor decelerating—the acceleration signal should be 0 volts, but “noise” in the velocity signal caused by a faulty tachogenerator will cause the differentiated (acceleration) signal to contain “spikes,” falsely indicating brief periods of high acceleration and deceleration:
Noise voltage present in a signal to be differentiated need not be of significant amplitude to cause trouble: all that is required is that the noise profile have fast rise or fall times. In other words, any electrical noise with a high dv/dt component will be problematic when differentiated, even if it is of low amplitude.
It should be noted that this problem is not an artifact (an idiosyncratic error of the measuring/computing instrument) of the analog circuitry; rather, it is inherent to the process of differentiation. No matter how we might perform the differentiation, “noise” in the velocity signal will invariably corrupt the output signal. Of course, if we were differentiating a signal twice, as we did to obtain both velocity and acceleration from a position signal, the amplified noise signal output by the first differentiator circuit will be amplified again by the next differentiator, thus compounding the problem:
Integration does not suffer from this problem, because integrators act as low-pass filters, attenuating high-frequency input signals. In effect, all the high and low peaks resulting from noise on the signal become averaged together over time, for a diminished net result. One might suppose, then, that we could avoid all trouble by measuring acceleration directly and integrating that signal to obtain velocity; in effect, calculating in “reverse” from the way shown previously:
Unfortunately, following this methodology might lead us into other difficulties, one being a common artifact of analog integrator circuits known as drift. All op-amps have some amount of input bias current, and this current will tend to cause a charge to accumulate on the capacitor in addition to whatever charge accumulates as a result of the input voltage signal. In other words, all analog integrator circuits suffer from the tendency of having their output voltage “drift” or “creep” even when there is absolutely no voltage input, accumulating error over time as a result. Also, imperfect capacitors will tend to lose their stored charge over time due to internal resistance, resulting in “drift” toward zero output voltage. These problems are artifacts of the analog circuitry, and may be eliminated through the use of digital computation.
Circuit artifacts notwithstanding, possible errors may result from the integration of one measurement (such as acceleration) to obtain another (such as velocity) simply because of the way integration works. If the “zero” calibration point of the raw signal sensor is not perfect, it will output a slight positive or negative signal even in conditions when it should output nothing. Consider a car with an imperfectly calibrated accelerometer, or one that is influenced by gravity to detect a slight acceleration unrelated to car motion. Even with a perfect integrating computer, this sensor error will cause the integrator to accumulate error, resulting in an output signal indicating a change of velocity when the car is neither accelerating nor decelerating.
As with differentiation, this error will also compound itself if the integrated signal is passed on to another integrator circuit, since the “drifting” output of the first integrator will very soon present a significant positive or negative signal for the next integrator to integrate. Therefore, care should be taken when integrating sensor signals: if the “zero” adjustment of the sensor is not perfect, the integrated result will drift, even if the integrator circuit itself is perfect.
So far, the only integration errors discussed have been artificial in nature: originating from imperfections in the circuitry and sensors. There also exists a source of error inherent to the process of integration itself, and that is the unknown constant problem. Beginning calculus students learn that whenever a function is integrated, there exists an unknown constant (usually represented as the variable C) added to the result. This uncertainty is easiest to understand by comparing the derivatives of several functions differing only by the addition of a constant value:
Note how each of the parabolic curves (y = 3x² + C) shares the exact same shape, differing from each other only in regard to their vertical offset. However, they all share the exact same derivative function: y’ = (d/dx)(3x² + C) = 6x, because they all share identical rates of change (slopes) at corresponding points along the x axis. While this seems quite natural and expected from the perspective of differentiation (different equations sharing a common derivative), it usually strikes beginning students as odd from the perspective of integration, because there are multiple correct answers for the integral of a function. Going from an equation to its derivative, there is only one answer, but going from that derivative back to the original equation leads us to a range of correct solutions. In honor of this uncertainty, the symbolic function of integration is called the indefinite integral.
When an integrator performs live signal integration with respect to time, the output is the sum of the integrated input signal over time and an initial value of arbitrary magnitude, representing the integrator’s pre-existing output at the time integration began. For example, if I integrate the velocity of a car driving in a straight line away from a city, calculating that a constant velocity of 50 miles per hour over a time of 2 hours will produce a distance (∫v dt) of 100 miles, that does not necessarily mean the car will be 100 miles away from the city after 2 hours. All it tells us is that the car will be 100 miles further away from the city after 2 hours of driving. The actual distance from the city after 2 hours of driving depends on how far the car was from the city when integration began. If we do not know this initial value for distance, we cannot determine the car’s exact distance from the city after 2 hours of driving.
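The same point can be made numerically: integration yields only the change in position, which must be added to a known starting position. The figures below simply restate the 50 mile-per-hour example with a few possible, hypothetical starting distances.

    # Distance traveled is the integral of velocity; absolute position also needs x0.
    v = 50.0           # miles per hour, constant
    t = 2.0            # hours
    delta_x = v * t    # 100 miles traveled
    for x0 in (0.0, 30.0, 250.0):      # possible starting distances (the unknown constant)
        print(x0 + delta_x)            # 100, 130, 350 -- same integral, different positions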
This same problem appears when we integrate acceleration with respect to time to obtain velocity:
In this integrator system, the calculated velocity of the car will only be valid if the integrator circuit is initialized to an output value of zero when the car is stationary (v = 0). Otherwise, the integrator could very well be outputting a non-zero signal for velocity (v0) when the car is stationary, for the accelerometer cannot tell the difference between a stationary state (0 miles per hour) and a state of constant velocity (say, 60 miles per hour, unchanging). This uncertainty in integrator output is inherent to the process of integration, and not an artifact of the circuitry or of the sensor.
In summary, if maximum accuracy is desired for any physical measurement, it is best to measure that variable directly rather than compute it from other measurements. This is not to say that computation is worthless. Quite to the contrary, often it is the only practical means of obtaining a desired measurement. However, the limits of computation must be understood and respected in order that precise measurements be obtained.
9.08: Measurement Circuits
Figure below shows a photodiode amplifier for measuring low levels of light. Best sensitivity and bandwidth are obtained with a trans-impedance amplifier, a current-to-voltage amplifier, instead of a conventional operational amplifier. The photodiode remains reverse biased for lowest diode capacitance, hence wider bandwidth, and lower noise. The feedback resistor sets the “gain”, the current to voltage amplification factor. Typical values are 1 to 10 Meg Ω. Higher values yield higher gain. A capacitor of a few pF may be required to compensate for photodiode capacitance and prevent instability at the high gain. The wiring at the summing node must be as compact as possible. This point is sensitive to circuit board contaminants and must be thoroughly cleaned. The most sensitive amplifiers contain the photodiode and amplifier within a hybrid microcircuit package or single die.
Photodiode amplifier. | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_III_-_Semiconductors_(Kuphaldt)/09%3A_Practical_Analog_Semiconductor_Circuits/9.07%3A_Computational_Circuits.txt |
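The circuit’s transfer function is simply Vout = Iphoto × Rf for an ideal op-amp. The photocurrent below is an assumed example value, not a specification.

    # Output of an ideal transimpedance amplifier: Vout = I_photo * R_feedback.
    I_photo = 0.5e-6        # amperes, an assumed photocurrent for a dim light level
    R_f = 10e6              # ohms, feedback resistor at the high end of the range given above
    print(I_photo * R_f)    # 5.0 volts out for half a microamp of photocurrent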
Pulse Width Modulation (PWM) uses digital signals to control power applications, and is also fairly easy to convert back to an analog signal with a minimum of hardware.
Analog systems, such as linear power supplies, tend to generate a lot of heat since they are basically variable resistors carrying a lot of current. Digital systems don’t generally generate as much heat. Almost all the heat generated by a switching device is during the transition (which is done quickly), while the device is neither on nor off, but in between. This is because power follows the following formula:
P = E I, or Watts = Voltage X Current
If either voltage or current is near zero then power will be near zero. PWM takes full advantage of this fact.
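A quick numeric sketch, using entirely hypothetical values, shows why only the transition is costly for the switching device.

    # Power dissipated in the switching device, P = E * I, at three points in its cycle.
    # All numbers are hypothetical, chosen only to illustrate the idea.
    supply, load = 12.0, 2.0                   # volts, ohms
    for v_device in (0.05, 6.0, 12.0):         # fully on, mid-transition, fully off
        i = (supply - v_device) / load         # current through the device and load
        print(v_device * i)                    # about 0.3 W on, 18 W mid-transition, 0 W off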
PWM can have many of the characteristics of an analog control system, in that the digital signal can be free wheeling. PWM does not have to capture data, although there are exceptions to this with higher end controllers.
Duty Cycle
One of the parameters of any square wave is duty cycle. Most square waves have a 50% duty cycle; this is the norm when discussing them, but they don't have to be symmetrical. The ON time can be varied anywhere from the signal being fully off to fully on: 0% to 100%, and all ranges in between.
Shown below are examples of a 10%, 50%, and 90% duty cycle. While the frequency is the same for each, this is not a requirement.
The reason PWM is popular is simple. Many loads, such as resistors, integrate the power into a number matching the percentage, so conversion into an analog equivalent value is straightforward. LEDs are very nonlinear in their response to current; give an LED half its rated current and you still get more than half the light the LED can produce. With PWM, however, the light level produced by the LED is very linear. Motors, which will be covered later, are also very responsive to PWM.
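For a load that averages the waveform, the delivered value is simply the duty cycle multiplied by the supply level. The numbers below are hypothetical.

    # Average voltage seen by an averaging load: V_avg = duty_cycle * V_supply.
    V_supply = 12.0
    for duty in (0.10, 0.50, 0.90):
        print(duty * V_supply)     # 1.2 V, 6.0 V, and 10.8 V average, respectively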
One of several ways PWM can be produced is by using a sawtooth waveform and a comparator. As shown below, the sawtooth (or triangle wave) need not be symmetrical, but the linearity of the waveform is important. The frequency of the sawtooth waveform is the sampling rate for the signal.
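The comparator method is easy to mimic in a few lines of code: output high whenever the control level is above the ramp. The sample count and control level below are arbitrary illustrative choices.

    # PWM by comparing a control level against a linear sawtooth (one sawtooth period shown).
    samples = 20                                  # resolution of one sawtooth period (arbitrary)
    level = 0.30                                  # control signal from 0.0 to 1.0, here 30%
    sawtooth = [i / samples for i in range(samples)]
    pwm = [1 if level > ramp else 0 for ramp in sawtooth]
    print(pwm)                  # six 1s followed by fourteen 0s
    print(sum(pwm) / samples)   # 0.3, matching the 30% duty cycle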
If there isn't any computation involved, PWM can be fast. The limiting factor is the comparator's frequency response. This may not be an issue since quite a few of the uses are fairly low speed. Some microcontrollers have PWM built in and can record or create signals on demand.
Uses for PWM vary widely. It is the heart of Class D audio amplifiers: by increasing the voltages you increase the maximum output, and by selecting a switching frequency beyond human hearing (typically 44 kHz) PWM can be used. The speakers do not respond to the high frequency, but duplicate the low frequency, which is the audio signal. Higher sampling rates can be used for even better fidelity, and 100 kHz or much higher is not unheard of.
Another popular application is motor speed control. Motors as a class require very high currents to operate. Being able to vary their speed with PWM increases the efficiency of the total system by quite a bit. PWM is more effective at controlling motor speeds at low RPM than linear methods.
H-Bridges
PWM is often used in conjunction with an H-Bridge. This configuration is so named because it resembles the letter H, and allows the effective voltage across the load to be doubled since the power supply can be switched across both sides of the load. In the case of inductive loads, such as motors, diodes are used to suppress inductive spikes, which may damage the transistors. The inductance in a motor also tends to reject the high-frequency component of the waveform. This configuration can also be used with speakers for Class D audio amps.
While basically accurate, this schematic of an H-Bridge has one serious flaw: it is possible, while transitioning between the MOSFETs, that both the top and bottom transistors will be on simultaneously, and will take the full brunt of what the power supply can provide. This condition is referred to as shoot through and can happen with any type of transistor used in an H-Bridge. If the power supply is powerful enough, the transistors will not survive. It is handled by using drivers in front of the transistors that allow one to turn off before allowing the other to turn on.
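One common way gate drivers avoid shoot-through is dead time: a short interval after one transistor is commanded off before the opposite one is allowed on. The sketch below is a simplified illustration with arbitrary tick counts, not a production gate-driver algorithm.

    # Complementary gate signals with dead time to prevent shoot-through.
    period, on_ticks, dead = 20, 8, 2             # arbitrary tick counts for illustration
    high = [1 if t < on_ticks else 0 for t in range(period)]
    low = [1 if on_ticks + dead <= t < period - dead else 0 for t in range(period)]
    for h, l in zip(high, low):
        assert not (h and l)                      # the two switches are never on together
    print(high)
    print(low)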
Switching Mode Power Supplies
Switching Mode Power Supplies (SMPS) can also use PWM, although other methods also exist. Topologies that use the energy stored in inductors and capacitors after the main switching components can boost the efficiency of these devices quite high, exceeding 90% in some cases. Below is an example of such a configuration.
Efficiency, in this case, is measured as wattage. If you have an SMPS with 90% efficiency, and it converts 12VDC to 5VDC at 10 Amps, the 12V side will be pulling approximately 4.6 Amps. The roughly 10% not accounted for (5 to 6 watts) will show up as waste heat. While being slightly noisier, this type of regulator will run much cooler than its linear counterpart.
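The arithmetic behind that example, for reference:

    # Input current and waste heat for the 12 V to 5 V, 10 A example, assuming 90% efficiency.
    P_out = 5.0 * 10.0        # 50 W delivered to the load
    eff = 0.90
    P_in = P_out / eff        # about 55.6 W drawn from the 12 V supply
    I_in = P_in / 12.0        # about 4.6 A of input current
    P_loss = P_in - P_out     # about 5.6 W dissipated as heat in the converter
    print(round(I_in, 2), round(P_loss, 1))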
An often neglected area of study in modern electronics is that of tubes, more precisely known as vacuum tubes or electron tubes. Almost completely overshadowed by semiconductor, or “solid-state” components in most modern applications, tube technology once dominated electronic circuit design.
In fact, the historical transition from “electric” to “electronic” circuits really began with tubes, for it was with tubes that we entered into a whole new realm of circuit function: a way of controlling the flow of electrons (current) in a circuit by means of another electric signal (in the case of most tubes, the controlling signal is a small voltage). The semiconductor counterpart to the tube, of course, is the transistor. Transistors perform much the same function as tubes: controlling the flow of electrons in a circuit by means of another flow of electrons in the case of the bipolar transistor, and controlling the flow of electrons by means of a voltage in the case of the field-effect transistor. In either case, a relatively small electric signal controls a relatively large electric current. This is the essence of the word “electronic,” so as to distinguish it from “electric,” which has more to do with how electron flow is regulated by Ohm’s Law and the physical attributes of wire and components.
Though tubes are now obsolete for all but a few specialized applications, they are still a worthy area of study. If nothing else, it is fascinating to explore “the way things used to be done” in order to better appreciate modern technology.
13.02: Early Tube History
Thomas Edison, that prolific American inventor, is often credited with the invention of the incandescent lamp. More accurately, it could be said that Edison was the man who perfected the incandescent lamp. Edison’s successful design of 1879 was actually preceded by 77 years by the British scientist Sir Humphry Davy, who first demonstrated the principle of using electric current to heat a thin strip of metal (called a “filament”) to the point of incandescence (glowing white hot).
Edison was able to achieve his success by placing his filament (made of carbonized sewing thread) inside of a clear glass bulb from which the air had been forcibly removed. In this vacuum, the filament could glow at white-hot temperatures without being consumed by combustion:
In the course of his experimentation (sometime around 1883), Edison placed a strip of metal inside of an evacuated (vacuum) glass bulb along with the filament. Between this metal strip and one of the filament connections, he attached a sensitive ammeter. What he found was that electrons would flow through the meter whenever the filament was hot but ceased when the filament cooled down:
The white-hot filament in Edison’s lamp was liberating free electrons into the vacuum of the lamp, those electrons finding their way to the metal strip, through the galvanometer, and back to the filament. His curiosity piqued, Edison then connected a fairly high-voltage battery in the galvanometer circuit to aid the small current:
Sure enough, the presence of the battery created a much larger current from the filament to the metal strip. However, when the battery was turned around, there was little to no current at all!
In effect, what Edison had stumbled upon was a diode! Unfortunately, he saw no practical use for such a device and proceeded with further refinements in his lamp design.
The one-way electron flow of this device (known as the Edison Effect) remained a curiosity until J. A. Fleming experimented with its use in 1895. Fleming marketed his device as a “valve,” initiating a whole new area of study in electric circuits. Vacuum tube diodes—Fleming’s “valves” being no exception—are not able to handle large amounts of current, and so Fleming’s invention was impractical for any application in AC power, only for small electric signals.
Then in 1906, another inventor by the name of Lee De Forest started playing around with the “Edison Effect,” seeing what more could be gained from the phenomenon. In doing so, he made a startling discovery: by placing a metal screen between the glowing filament and the metal strip (which by now had taken the form of a plate for greater surface area), the stream of electrons flowing from filament to plate could be regulated by the application of a small voltage between the metal screen and the filament:
De Forest called this metal screen between filament and plate a grid. It wasn’t just the amount of voltage between grid and filament that controlled current from filament to plate, it was the polarity as well. A negative voltage applied to the grid with respect to the filament would tend to choke off the natural flow of electrons, whereas a positive voltage would tend to enhance the flow. Although there was some amount of current through the grid, it was very small; much smaller than the current through the plate.
Perhaps most importantly was his discovery that the small amounts of grid voltage and grid current were having large effects on the amount of plate voltage (with respect to the filament) and plate current. In adding the grid to Fleming’s “valve,” De Forest had made the valve adjustable: it now functioned as an amplifying device, whereby a small electrical signal could take control over a larger electrical quantity.
The closest semiconductor equivalent to the Audion tube, and to all of its more modern tube equivalents, is an n-channel D-type MOSFET. It is a voltage-controlled device with a large current gain.
Calling his invention the “Audion,” he vigorously applied it to the development of communications technology. In 1912 he sold the rights to his Audion tube as a telephone signal amplifier to the American Telephone and Telegraph Company (AT&T), which made long-distance telephone communication practical. In the following year, he demonstrated the use of an Audion tube for generating radio-frequency AC signals. In 1915 he achieved the remarkable feat of broadcasting voice signals via radio from Arlington, Virginia to Paris, and in 1916 inaugurated the first radio news broadcast. Such accomplishments earned De Forest the title “Father of Radio” in America.
De Forest’s Audion tube came to be known as the triode tube because it had three elements: filament, grid, and plate (just as the “di” in the name diode refers to two elements: filament and plate). Later developments in diode tube technology led to the refinement of the electron emitter: instead of using the filament directly as the emissive element, another metal strip called the cathode could be heated by the filament.
This refinement was necessary in order to avoid some undesired effects of an incandescent filament as an electron emitter. First, a filament experiences a voltage drop along its length, as current overcomes the resistance of the filament material and dissipates heat energy. This meant that the voltage potential between different points along the length of the filament wire and other elements in the tube would not be constant. For this and similar reasons, alternating current used as a power source for heating the filament wire would tend to introduce unwanted AC “noise” in the rest of the tube circuit. Furthermore, the surface area of a thin filament was limited at best, and limited surface area on the electron emitting element tends to place a corresponding limit on the tube’s current-carrying capacity.
The cathode was a thin metal cylinder fitting snugly over the twisted wire of the filament. The cathode cylinder would be heated by the filament wire enough to freely emit electrons, without the undesirable side effects of actually carrying the heating current as the filament wire had to. The tube symbol for a triode with an indirectly-heated cathode looks like this:
Since the filament is necessary for all but a few types of vacuum tubes, it is often omitted in the symbol for simplicity, or it may be included in the drawing but with no power connections drawn to it:
A simple triode circuit is shown to illustrate its basic operation as an amplifier:
The low-voltage AC signal connected between the grid and cathode alternately suppresses, then enhances the electron flow between cathode and plate. This causes a change in voltage on the output of the circuit (between plate and cathode). The AC voltage and current magnitudes on the tube’s grid are generally quite small compared to the variation of voltage and current in the plate circuit. Thus, the triode functions as an amplifier of the incoming AC signal (taking high-voltage, high-current DC power supplied from the large DC source on the right and “throttling” it by means of the tube’s controlled conductivity).
In the triode, the amount of current from cathode to plate (the “controlled” current) is a function both of the grid-to-cathode voltage (the controlling signal) and the plate-to-cathode voltage (the electromotive force available to push electrons through the vacuum). Unfortunately, neither of these independent variables has a purely linear effect on the amount of current through the device (often referred to simply as the “plate current”). That is, triode current does not necessarily respond in a direct, proportional manner to the voltages applied.
In this particular amplifier circuit, the nonlinearities are compounded, as plate voltage (with respect to cathode) changes along with the grid voltage (also with respect to cathode) as plate current is throttled by the tube. The result will be an output voltage waveform that doesn’t precisely resemble the waveform of the input voltage. In other words, the quirkiness of the triode tube and the dynamics of this particular circuit will distort the wave shape. If we really wanted to get complex about how we stated this, we could say that the tube introduces harmonics by failing to exactly reproduce the input waveform.
Another problem with triode behavior is that of stray capacitance. Remember that any time we have two conductive surfaces separated by an insulating medium, a capacitor will be formed. Any voltage between those two conductive surfaces will generate an electric field within that insulating region, potentially storing energy and introducing reactance into a circuit. Such is the case with the triode, most problematically between the grid and the plate. It is as if there were tiny capacitors connected between the pairs of elements in the tube:
Now, this stray capacitance is quite small, and the reactive impedances usually high. Usually, that is, unless radio frequencies are being dealt with. As we saw with De Forest’s Audion tube, radio was probably the prime application for this new technology, so these “tiny” capacitances became more than just a potential problem. Another refinement in tube technology was necessary to overcome the limitations of the triode. | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_III_-_Semiconductors_(Kuphaldt)/13%3A_Electron_Tubes/13.03%3A_The_Triode.txt |
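A rough reactance calculation shows why these few picofarads matter only at radio frequencies. The sketch below assumes an illustrative 5 pF of grid-to-plate capacitance (an invented round number, not a figure from the text) and evaluates Xc = 1/(2πfC) at audio and radio frequencies:

```python
import math

def capacitive_reactance(frequency_hz, capacitance_farads):
    """Capacitive reactance Xc = 1 / (2 * pi * f * C), in ohms."""
    return 1.0 / (2 * math.pi * frequency_hz * capacitance_farads)

C_stray = 5e-12  # 5 pF: an assumed, illustrative grid-to-plate capacitance

for f in (1e3, 1e6, 100e6):  # 1 kHz (audio), 1 MHz (AM radio), 100 MHz (FM radio)
    print(f"f = {f:>12,.0f} Hz  ->  Xc = {capacitive_reactance(f, C_stray):>12,.0f} ohms")
```

At 1 kHz the reactance works out to roughly 32 MΩ, effectively an open circuit, while at 100 MHz it falls to a few hundred ohms, low enough to couple the plate signal back to the grid.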
As the name suggests, the tetrode tube contained four elements: cathode (with the implicit filament, or “heater”), grid, plate, and a new element called the screen. Similar in construction to the grid, the screen was a wire mesh or coil positioned between the grid and plate, connected to a source of positive DC potential (with respect to the cathode, as usual) equal to a fraction of the plate voltage. When connected to ground through an external capacitor, the screen had the effect of electrostatically shielding the grid from the plate. Without the screen, the capacitive linking between the plate and the grid could cause significant signal feedback at high frequencies, resulting in unwanted oscillations.
The screen, being of less surface area and lower positive potential than the plate, didn’t attract many of the electrons passing through the grid from the cathode, so the vast majority of electrons in the tube still flew by the screen to be collected by the plate:
With a constant DC screen voltage, electron flow from cathode to plate became almost exclusively dependent upon grid voltage, meaning the plate voltage could vary over a wide range with little effect on plate current. This made for more stable gains in amplifier circuits, and better linearity for more accurate reproduction of the input signal waveform.
Despite the advantages realized by the addition of a screen, there were some disadvantages as well. The most significant disadvantage was related to something known as secondary emission. When electrons from the cathode strike the plate at high velocity, they can cause free electrons to be jarred loose from atoms in the metal of the plate. These electrons, knocked off the plate by the impact of the cathode electrons, are said to be “secondarily emitted.” In a triode tube, secondary emission is not that great a problem, but in a tetrode with a positively-charged screen grid in close proximity, these secondary electrons will be attracted to the screen rather than the plate from which they came, resulting in a loss of plate current. Less plate current means less gain for the amplifier, which is not good.
Two different strategies were developed to address this problem of the tetrode tube: beam power tubes and pentodes. Both solutions resulted in new tube designs with approximately the same electrical characteristics.
13.05: Beam Power Tubes
In the beam power tube, the basic four-element structure of the tetrode was maintained, but the grid and screen wires were carefully arranged along with a pair of auxiliary plates to create an interesting effect: focused beams or “sheets” of electrons traveling from cathode to plate. These electron beams formed a stationary “cloud” of electrons between the screen and plate (called a “space charge”) which acted to repel secondary electrons emitted from the plate back to the plate. A set of “beam-forming” plates, each connected to the cathode, were added to help maintain proper electron beam focus. Grid and screen wire coils were arranged in such a way that each turn or wrap of the screen fell directly behind a wrap of the grid, which placed the screen wires in the “shadow” formed by the grid. This precise alignment enabled the screen to still perform its shielding function with minimal interference to the passage of electrons from cathode to plate.
This resulted in lower screen current (and more plate current!) than an ordinary tetrode tube, with little added expense to the construction of the tube.
Beam power tetrodes were often distinguished from their non-beam counterparts by a different schematic symbol, showing the beam-forming plates:
13.06: The Pentode
Another strategy for addressing the problem of secondary electrons being attracted by the screen was the addition of a fifth wire element to the tube structure: a suppressor. These five-element tubes were naturally called pentodes.
The suppressor was another wire coil or mesh situated between the screen and the plate, usually connected directly to ground potential. In some pentode tube designs, the suppressor was internally connected to the cathode so as to minimize the number of connection pins having to penetrate the tube envelope:
The suppressor’s job was to repel any secondarily emitted electrons back to the plate: a structural equivalent of the beam power tube’s space charge. This, of course, increased plate current and decreased screen current, resulting in better gain and overall performance. In some instances, it allowed for greater operating plate voltage as well. | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_III_-_Semiconductors_(Kuphaldt)/13%3A_Electron_Tubes/13.04%3A_The_Tetrode.txt |
Similar in thought to the idea of the integrated circuit, tube designers tried integrating different tube functions into single tube envelopes to reduce space requirements in more modern tube-type electronic equipment. A common combination seen within a single glass shell was either two diodes or two triodes. The idea of fitting pairs of diodes inside a single envelope makes a lot of sense in light of power supply full-wave rectifier designs, always requiring multiple diodes.
Of course, it would have been quite impossible to combine thousands of tube elements into a single tube envelope the way that thousands of transistors can be etched onto a single piece of silicon, but engineers still did their best to push the limits of tube miniaturization and consolidation. Some of these tubes, whimsically called compactrons, held four or more complete tube elements within a single envelope.
Sometimes the functions of two different tubes could be integrated into a single, combination tube in a way that simply worked more elegantly than two tubes ever could. An example of this was the pentagrid converter, more generally called a heptode, used in some superheterodyne radio designs. These tubes contained seven elements: five grids plus a cathode and a plate. Two of the grids were normally reserved for signal input, the other three relegated to screening and suppression (performance-enhancing) functions. With the superheterodyne functions of oscillator and signal mixer combined in one tube, the signal coupling between these two stages was intrinsic. Rather than having separate oscillator and mixer circuits, the oscillator creating an AC voltage and the mixer “mixing” that voltage with another signal, the pentagrid converter’s oscillator section created an electron stream that oscillated in intensity, which then passed directly through another grid for “mixing” with another signal.
This same tube was sometimes used in a different way: by applying a DC voltage to one of the control grids, the gain of the tube could be changed for a signal impressed on the other control grid. This was known as variable-mu operation because the “mu” (µ) of the tube (its amplification factor, measured as a ratio of plate-to-cathode voltage change over grid-to-cathode voltage change with a constant plate current) could be altered at will by a DC control voltage signal.
Enterprising electronics engineers also discovered ways to exploit such multi-variable capabilities of “lesser” tubes such as tetrodes and pentodes. One such way was the so-called ultralinear audio power amplifier, invented by a pair of engineers named Hafler and Keroes, utilizing a tetrode tube in combination with a “tapped” output transformer to provide substantial improvements in amplifier linearity (decreases in distortion levels). Consider a “single-ended” triode tube amplifier with an output transformer coupling power to the speaker:
If we substitute a tetrode for a triode in this circuit, we will see improvements in circuit gain resulting from the electrostatic shielding offered by the screen, preventing unwanted feedback between the plate and control grid:
However, the tetrode’s screen may be used for functions other than merely shielding the grid from the plate. It can also be used as another control element, like the grid itself. If a “tap” is made on the transformer’s primary winding, and this tap connected to the screen, the screen will receive a voltage that varies with the signal being amplified (feedback). More specifically, the feedback signal is proportional to the rate-of-change of magnetic flux in the transformer core (dΦ/dt), thus improving the amplifier’s ability to reproduce the input signal waveform at the speaker terminals and not just in the primary winding of the transformer:
This signal feedback results in significant improvements in amplifier linearity (and, consequently, reduced distortion), so long as precautions are taken against “overpowering” the screen with too great a positive voltage with respect to the cathode. As a concept, the ultralinear (screen-feedback) design demonstrates the flexibility of operation granted by multiple grid-elements inside a single tube: a capability rarely matched by semiconductor components.
Some tube designs combined multiple tube functions in a most economic way: dual plates with a single cathode, the currents for each of the plates controlled by separate sets of control grids. Common examples of these tubes were triode-heptode and triode-hexode tubes (a hexode tube is a tube with four grids, one cathode, and one plate).
Other tube designs simply incorporated separate tube structures inside a single glass envelope for greater economy. Dual diode (rectifier) tubes were quite common, as were dual triode tubes, especially when the power dissipation of each tube was relatively low.
The 12AX7 and 12AU7 models are common examples of dual-triode tubes, both of low-power rating. The 12AX7 is especially common as a preamplifier tube in electric guitar amplifier circuits.
13.08: Tube Parameters
For bipolar junction transistors, the fundamental measure of amplification is the Beta ratio (β), defined as the ratio of collector current to base current (IC/IB). Other transistor characteristics such as junction resistance, which in some amplifier circuits may impact performance as much as β, are quantified for the benefit of circuit analysis. Electron tubes are no different, their performance characteristics having been explored and quantified long ago by electrical engineers.
Before we can speak meaningfully on these characteristics, we must define several mathematical variables used for expressing common voltage, current, and resistance measurements as well as some of the more complex quantities:
The two most basic measures of an amplifying tube’s characteristics are its amplification factor (µ) and its mutual conductance (gm), also known as transconductance. Transconductance is defined here just the same as it is for field-effect transistors, another category of voltage-controlled devices. Here are the two equations defining each of these performance characteristics:
Another important, though more abstract, measure of tube performance is its plate resistance. This is the measurement of plate voltage change over plate current change for a constant value of grid voltage. In other words, this is an expression of how much the tube acts like a resistor for any given amount of grid voltage, analogous to the operation of a JFET in its ohmic mode:
The astute reader will notice that plate resistance may be determined by dividing the amplification factor by the transconductance:
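As a numerical illustration, the short sketch below starts from a pair of hypothetical small-signal measurements (the voltage and current figures are invented for illustration, not taken from any particular tube’s datasheet) and confirms that dividing the amplification factor by the transconductance yields the plate resistance:

```python
# Hypothetical small-signal measurements around a single operating point
# (E = voltage, I = current, subscript p = plate, g = grid; all are changes, not totals):
delta_Eg = 2.5       # grid-to-cathode voltage change, volts
delta_Ep = 50.0      # resulting plate-to-cathode voltage change (plate current held constant), volts
delta_Ip = 4.0e-3    # resulting plate current change (plate voltage held constant), amps

mu = delta_Ep / delta_Eg     # amplification factor (unitless)
gm = delta_Ip / delta_Eg     # transconductance, in siemens (amps per volt of grid swing)
rp = mu / gm                 # plate resistance, in ohms

print(f"amplification factor mu = {mu:.1f}")        # 20.0
print(f"transconductance gm     = {gm * 1e3:.2f} mS")  # 1.60 mS
print(f"plate resistance rp     = {rp:,.0f} ohms")     # 12,500 ohms
```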
These three performance measures of tubes are subject to change from tube to tube (just as β ratios between two “identical” bipolar transistors are never precisely the same) and between different operating conditions. This variability is due partly to the unavoidable nonlinearities of electron tubes and partly due to how they are defined. Even supposing the existence of a perfectly linear tube, it will be impossible for all three of these measures to be constant over the allowable ranges of operation. Consider a tube that perfectly regulates current at any given amount of grid voltage (like a bipolar transistor with an absolutely constant β): that tube’s plate resistance must vary with plate voltage because plate current will not change even though plate voltage does.
Nevertheless, tubes were (and are) rated by these values at given operating conditions, and may have their characteristic curves published just like transistors. | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_III_-_Semiconductors_(Kuphaldt)/13%3A_Electron_Tubes/13.07%3A_Combination_Tubes.txt |
So far, we’ve explored tubes which are totally “evacuated” of all gas and vapor inside their glass envelopes, properly known as vacuum tubes. With the addition of certain gases or vapors, however, tubes take on significantly different characteristics, and are able to fulfill certain special roles in electronic circuits.
When a high enough voltage is applied across a distance occupied by a gas or vapor, or when that gas or vapor is heated sufficiently, the electrons of those gas molecules will be stripped away from their respective nuclei, creating a condition of ionization. Having freed the electrons from their electrostatic bonds to the atoms’ nuclei, they are free to migrate in the form of a current, making the ionized gas a relatively good conductor of electricity. In this state, the gas is more properly referred to as a plasma.
Ionized gas is not a perfect conductor. As such, the flow of electrons through ionized gas will tend to dissipate energy in the form of heat, thereby helping to keep the gas in a state of ionization. The result of this is a tube that will begin to conduct under certain conditions, then tend to stay in a state of conduction until the applied voltage across the gas and/or the heat-generating current drops to a minimum level.
The astute observer will note that this is precisely the kind of behavior exhibited by a class of semiconductor devices called “thyristors,” which tend to stay “on” once turned “on” and tend to stay “off” once turned “off.” Gas-filled tubes, it can be said, manifest this same property of hysteresis.
Unlike their vacuum counterparts, ionization tubes were often manufactured with no filament (heater) at all. These were called cold-cathode tubes, with the heated versions designated as hot-cathode tubes. Whether or not the tube contained a source of heat obviously impacted the characteristics of a gas-filled tube, but not to the extent that lack of heat would impact the performance of a hard-vacuum tube.
The simplest type of ionization device is not necessarily a tube at all; rather, it is constructed of two electrodes separated by a gas-filled gap. Simply called a spark gap, such a device may have the gap between its electrodes occupied by ambient air or, in other cases, by a special gas, in which case the device must have a sealed envelope of some kind.
A prime application for spark gaps is in overvoltage protection. Engineered not to ionize, or “break down” (begin conducting), with normal system voltage applied across the electrodes, the spark gap’s function is to conduct in the event of a significant increase in voltage. Once conducting, it will act as a heavy load, holding the system voltage down through its large current draw and subsequent voltage drop along conductors and other series impedances. In a properly engineered system, the spark gap will stop conducting (“extinguish”) when the system voltage decreases to a normal level, well below the voltage required to initiate conduction.
One major caveat of spark gaps is their decidedly finite life. The discharge generated by such a device can be quite violent, and as such will tend to deteriorate the surfaces of the electrodes through pitting and/or melting.
Spark gaps can be made to conduct on command by placing a third electrode (usually with a sharp edge or point) between the other two and applying a high voltage pulse between that electrode and one of the other electrodes. The pulse will create a small spark between the two electrodes, ionizing part of the pathway between the two large electrodes, and enabling conduction between them if the applied voltage is high enough:
Spark gaps of both the triggered and untriggered variety can be built to handle huge amounts of current, some even into the range of mega-amps (millions of amps)! Physical size is the primary limiting factor to the amount of current a spark gap can safely and reliably handle.
When the two main electrodes are placed in a sealed tube filled with a special gas, a discharge tube is formed. The most common type of discharge tube is the neon light, used popularly as a source of colorful illumination, the color of the light emitted being dependent on the type of gas filling the tube.
Construction of neon lamps closely resembles that of spark gaps, but the operational characteristics are quite different:
By controlling the spacing of the electrodes and the type of gas in the tube, neon lights can be made to conduct without drawing the excessive currents that spark gaps do. They still exhibit hysteresis in that it takes a higher voltage to initiate conduction than it does to make them “extinguish,” and their resistance is definitely nonlinear (the more voltage applied across the tube, the more current, thus more heat, thus lower resistance). Given this nonlinear tendency, the voltage across a neon tube must not be allowed to exceed a certain limit, lest the tube be damaged by excessive temperatures.
This nonlinear tendency gives the neon tube an application other than colorful illumination: it can act somewhat like a zener diode, “clamping” the voltage across it by drawing more and more current if the voltage increases. When used in this fashion, the tube is known as a glow tube, or voltage-regulator tube, and was a popular means of voltage regulation in the days of electron tube circuit design.
Please take note of the black dot found in the tube symbol shown above (and in the neon lamp symbol shown before that). That marker indicates the tube is gas-filled. It is a common marker used in all gas-filled tube symbols.
One example of a glow tube designed for voltage regulation was the VR-150, with a nominal regulating voltage of 150 volts. Its resistance throughout the allowable limits of current could vary from 5 kΩ to 30 kΩ, a 6:1 span. Like zener diode regulator circuits of today, glow tube regulators could be coupled to amplifying tubes for better voltage regulation and higher load current ranges.
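The quoted 5 kΩ to 30 kΩ span implies an allowable tube current of roughly 5 mA to 30 mA at the 150 volt operating point. As a sketch of how such a regulator might be sized, much like a zener-diode shunt regulator calculation, the following assumes a 250 volt raw supply and a 10 mA load (both invented values, used only for illustration):

```python
V_supply = 250.0        # assumed raw DC supply voltage, volts
V_reg    = 150.0        # nominal regulating voltage of the VR-150
R_vals   = (5e3, 30e3)  # tube resistance limits quoted in the text, ohms

# Tube current limits implied by the quoted resistance range at 150 V:
I_min = V_reg / max(R_vals)   # 150 V / 30 kOhm = 5 mA
I_max = V_reg / min(R_vals)   # 150 V /  5 kOhm = 30 mA
print(f"allowable tube current: {I_min * 1e3:.0f} mA to {I_max * 1e3:.0f} mA")

# Shunt-regulator style sizing, with an assumed 10 mA load and the tube
# biased comfortably inside its allowable current window:
I_load   = 10e-3
I_tube   = 15e-3
R_series = (V_supply - V_reg) / (I_load + I_tube)
print(f"series dropping resistor ~ {R_series:,.0f} ohms")   # 4,000 ohms
```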
If a regular triode was filled with gas instead of a hard vacuum, it would manifest all the hysteresis and nonlinearity of other gas tubes with one major advantage: the amount of voltage applied between grid and cathode would determine the minimum plate-to-cathode voltage necessary to initiate conduction. In essence, this tube was the equivalent of the semiconductor SCR (Silicon-Controlled Rectifier), and was called the thyratron.
It should be noted that the schematic shown above is greatly simplified compared to most practical thyratron tube designs. Some thyratrons, for instance, required that the grid voltage switch polarity between their “on” and “off” states in order to properly work. Also, some thyratrons had more than one grid!
Thyratrons found use in much the same way as SCR’s find use today: controlling rectified AC to large loads such as motors. Thyratron tubes have been manufactured with different types of gas fillings for different characteristics: inert (chemically non-reactive) gas, hydrogen gas, and mercury (vaporized into a gas form when activated). Deuterium, a rare isotope of hydrogen, was used in some special applications requiring the switching of high voltages. | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_III_-_Semiconductors_(Kuphaldt)/13%3A_Electron_Tubes/13.09%3A_Ionization_%28gas-filled%29_Tubes.txt |
In addition to performing tasks of amplification and switching, tubes can be designed to serve as display devices.
Perhaps the best-known display tube is the cathode ray tube, or CRT. Originally invented as an instrument to study the behavior of “cathode rays” (electrons) in a vacuum, these tubes developed into instruments useful in detecting voltage, then later as video projection devices with the advent of television. The main difference between CRTs used in oscilloscopes and CRTs used in televisions is that the oscilloscope variety exclusively use electrostatic (plate) deflection, while televisions use electromagnetic (coil) deflection. Plates function much better than coils over a wider range of signal frequencies, which is great for oscilloscopes but irrelevant for televisions, since a television electron beam sweeps vertically and horizontally at fixed frequencies. Electromagnetic deflection coils are much preferred in television CRT construction because they do not have to penetrate the glass envelope of the tube, thus decreasing the production costs and increasing tube reliability.
An interesting “cousin” to the CRT is the Cat-Eye or Magic-Eye indicator tube. Essentially, this tube is a voltage-measuring device with a display resembling a glowing green ring. Electrons emitted by the cathode of this tube impinge on a fluorescent screen, causing the green-colored light to be emitted. The shape of the glow produced by the fluorescent screen varies as the amount of voltage applied to a grid changes:
The width of the shadow is directly determined by the potential difference between the control electrode and the fluorescent screen. The control electrode is a narrow rod placed between the cathode and the fluorescent screen. If that control electrode (rod) is significantly more negative than the fluorescent screen, it will deflect some electrons away from that area of the screen. The area of the screen “shadowed” by the control electrode will appear darker when there is a significant voltage difference between the two. When the control electrode and fluorescent screen are at equal potential (zero voltage between them), the shadowing effect will be minimal and the screen will be equally illuminated.
The schematic symbol for a “cat-eye” tube looks something like this:
Here is a photograph of a cat-eye tube, showing the circular display region as well as the glass envelope, socket (black, at far end of tube), and some of its internal structure:
Normally, only the end of the tube would protrude from a hole in an instrument panel, so the user could view the circular, fluorescent screen.
In its simplest usage, a “cat-eye” tube could be operated without the use of the amplifier grid. However, in order to make it more sensitive, the amplifier grid is used, and it is used like this:
The cathode, amplifier grid, and plate act as a triode to create large changes in plate-to-cathode voltage for small changes in grid-to-cathode voltage. Because the control electrode is internally connected to the plate, it is electrically common to it and therefore possesses the same amount of voltage with respect to the cathode that the plate does. Thus, the large voltage changes induced on the plate due to small voltage changes on the amplifier grid end up causing large changes in the width of the shadow seen by whoever is viewing the tube.
“Cat-eye” tubes were never accurate enough to be equipped with a graduated scale as is the case with CRT’s and electromechanical meter movements, but they served well as null detectors in bridge circuits, and as signal strength indicators in radio tuning circuits. An unfortunate limitation to the “cat-eye” tube as a null detector was the fact that it was not directly capable of voltage indication in both polarities. | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_III_-_Semiconductors_(Kuphaldt)/13%3A_Electron_Tubes/13.10%3A_Display_Tubes.txt |
For extremely high-frequency applications (above 1 GHz), the interelectrode capacitances and transit-time delays of standard electron tube construction become prohibitive. However, there seems to be no end to the creative ways in which tubes may be constructed, and several high-frequency electron tube designs have been made to overcome these challenges.
It was discovered in 1939 that a toroidal cavity made of conductive material called a cavity resonator surrounding an electron beam of oscillating intensity could extract power from the beam without actually intercepting the beam itself. The oscillating electric and magnetic fields associated with the beam “echoed” inside the cavity, in a manner similar to the sounds of traveling automobiles echoing in a roadside canyon, allowing radio-frequency energy to be transferred from the beam to a waveguide or coaxial cable connected to the resonator with a coupling loop. The tube was called an inductive output tube, or IOT:
Two of the researchers instrumental in the initial development of the IOT, a pair of brothers named Sigurd and Russell Varian, added a second cavity resonator for signal input to the inductive output tube. This input resonator acted as a pair of inductive grids to alternately “bunch” and release packets of electrons down the drift space of the tube, so the electron beam would be composed of electrons traveling at different velocities. This “velocity modulation” of the beam translated into the same sort of amplitude variation at the output resonator, where energy was extracted from the beam. The Varian brothers called their invention a klystron.
Another invention of the Varian brothers was the reflex klystron tube. In this tube, electrons emitted from the heated cathode travel through the cavity grids toward the repeller plate, then are repelled and returned back the way they came (hence the name reflex) through the cavity grids. Self-sustaining oscillations would develop in this tube, the frequency of which could be changed by adjusting the repeller voltage. Hence, this tube operated as a voltage-controlled oscillator.
As voltage-controlled oscillators, reflex klystron tubes commonly served as “local oscillators” for radar equipment and microwave receivers:
Initially developed as low-power devices whose output required further amplification for radio transmitter use, reflex klystron design was refined to the point where the tubes could serve as power devices in their own right. Reflex klystrons have since been superseded by semiconductor devices in the application of local oscillators, but amplification klystrons continue to find use in high-power, high-frequency radio transmitters and in scientific research applications.
One microwave tube performs its task so well and so cost-effectively that it continues to reign supreme in the competitive realm of consumer electronics: the magnetron tube. This device forms the heart of every microwave oven, generating several hundred watts of microwave RF energy used to heat food and beverages, and doing so under the most grueling conditions for a tube: powered on and off at random times and for random durations.
Magnetron tubes are representative of an entirely different kind of tube than the IOT and klystron. Whereas the latter tubes use a linear electron beam, the magnetron directs its electron beam in a circular pattern by means of a strong magnetic field:
Once again, cavity resonators are used as microwave-frequency “tank circuits,” extracting energy from the passing electron beam inductively. Like all microwave-frequency devices using a cavity resonator, at least one of the resonator cavities is tapped with a coupling loop: a loop of wire magnetically coupling the coaxial cable to the resonant structure of the cavity, allowing RF power to be directed out of the tube to a load. In the case of the microwave oven, the output power is directed through a waveguide to the food or drink to be heated, the water molecules within acting as tiny load resistors, dissipating the electrical energy in the form of heat.
The magnet required for magnetron operation is not shown in the diagram. Magnetic flux runs perpendicular to the plane of the circular electron path. In other words, from the view of the tube shown in the diagram, you are looking straight at one of the magnetic poles. | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_III_-_Semiconductors_(Kuphaldt)/13%3A_Electron_Tubes/13.11%3A_Microwave_Tubes.txt |
Devoting a whole chapter in a modern electronics text to the design and function of electron tubes may seem a bit strange, seeing as how semiconductor technology has all but obsoleted tubes in almost every application. However, there is merit in exploring tubes not just for historical purposes, but also for those niche applications that necessitate the qualifying phrase “almost every application” in regard to semiconductor supremacy.
In some applications, electron tubes not only continue to see practical use, but perform their respective tasks better than any solid-state device yet invented. In some cases the performance and reliability of electron tube technology is far superior.
In the fields of high-power, high-speed circuit switching, specialized tubes such as hydrogen thyratrons and krytrons are able to switch far larger amounts of current, far faster than any semiconductor device designed to date. The thermal and temporal limits of semiconductor physics place limitations on switching ability that tubes—which do not operate on the same principles—are exempt from.
In high-power microwave transmitter applications, the excellent thermal tolerance of tubes alone secures their dominance over semiconductors. Electron conduction through semiconducting materials is greatly impacted by temperature. Electron conduction through a vacuum is not. As a consequence, the practical thermal limits of semiconductor devices are rather low compared to that of tubes. Being able to operate tubes at far greater temperatures than equivalent semiconductor devices allows tubes to dissipate more thermal energy for a given amount of dissipation area, which makes them smaller and lighter in continuous high power applications.
Another decided advantage of tubes over semiconductor components in high-power applications is their rebuildability. When a large tube fails, it may be disassembled and repaired at far lower cost than the purchase price of a new tube. When a semiconductor component fails, large or small, there is generally no means of repair.
The following photograph shows the front panel of a 1960s-vintage 5 kW AM radio transmitter. One of two “Eimac” brand power tubes can be seen in a recessed area, behind the glass door. According to the station engineer who gave the facility tour, the rebuild cost for such a tube is only $800: quite inexpensive compared to the cost of a new tube, and still quite reasonable in contrast to the price of a new, comparable semiconductor component!
Tubes, being less complex in their manufacture than semiconductor components, are potentially cheaper to produce as well, although the huge volume of semiconductor device production in the world greatly offsets this theoretical advantage. Semiconductor manufacture is quite complex, involving many dangerous chemical substances and necessitating super-clean assembly environments. Tubes are essentially nothing more than glass and metal, with a vacuum seal. Physical tolerances are “loose” enough to permit hand-assembly of vacuum tubes, and the assembly work need not be done in a “clean room” environment as is necessary for semiconductor manufacture.
One modern area where electron tubes enjoy supremacy over semiconductor components is in the professional and high-end audio amplifier markets, although this is partially due to musical culture. Many professional guitar players, for example, prefer tube amplifiers over transistor amplifiers because of the specific distortion produced by tube circuits. An electric guitar amplifier is designed to produce distortion rather than avoid distortion as is the case with audio-reproduction amplifiers (this is why an electric guitar sounds so much different than an acoustical guitar), and the type of distortion produced by an amplifier is as much a matter of personal taste as it is technical measurement. Since rock music in particular was born with guitarists playing tube-amplifier equipment, there is a significant level of “tube appeal” inherent to the genre itself, and this appeal shows itself in the continuing demand for “tubed” guitar amplifiers among rock guitarists.
As an illustration of the attitude among some guitarists, consider the following quote taken from the technical glossary page of a tube-amplifier website which will remain nameless:
Solid State: A component that has been specifically designed to make a guitar amplifier sound bad. Compared to tubes, these devices can have a very long lifespan, which guarantees that your amplifier will retain its thin, lifeless, and buzzy sound for a long time to come.
In the area of audio reproduction amplifiers (music studio amplifiers and home entertainment amplifiers), it is best for an amplifier to reproduce the musical signal with as little distortion as possible. Paradoxically, in contrast to the guitar amplifier market where distortion is a design goal, high-end audio is another area where tube amplifiers enjoy continuing consumer demand. Though one might suppose the objective, technical requirement of low distortion would eliminate any subjective bias on the part of audiophiles, one would be very wrong. The market for high-end “tubed” amplifier equipment is quite volatile, changing rapidly with trends and fads, driven by highly subjective claims of “magical” sound from audio system reviewers and salespeople. As in the electric guitar world, there is no small measure of cult-like devotion to tube amplifiers among some quarters of the audiophile world. As an example of this irrationality, consider the design of many ultra-high-end amplifiers, with chassis built to display the working tubes openly, even though this physical exposure of the tubes obviously enhances the undesirable effect of microphonics (changes in tube performance as a result of sound waves vibrating the tube structure).
Having said this, though, there is a wealth of technical literature contrasting tubes against semiconductors for audio power amplifier use, especially in the area of distortion analysis. More than a few competent electrical engineers prefer tube amplifier designs over transistors, and are able to produce experimental evidence in support of their choice. The primary difficulty in quantifying audio system performance is the uncertain response of human hearing. All amplifiers distort their input signal to some degree, especially when overloaded, so the question is which type of amplifier design distorts the least. However, since human hearing is very nonlinear, people do not interpret all types of acoustic distortion equally, and so some amplifiers will sound “better” than others even if a quantitative distortion analysis with electronic instruments indicates similar distortion levels. To determine what type of audio amplifier will distort a musical signal “the least,” we must regard the human ear and brain as part of the whole acoustical system. Since no complete model yet exists for human auditory response, objective assessment is difficult at best. However, some research indicates that the characteristic distortion of tube amplifier circuits (especially when overloaded) is less objectionable than distortion produced by transistors.
Tubes also possess the distinct advantage of low “drift” over a wide range of operating conditions. Unlike semiconductor components, whose barrier voltages, β ratios, bulk resistances, and junction capacitances may change substantially with changes in device temperature and/or other operating conditions, the fundamental characteristics of a vacuum tube remain nearly constant over a wide range in operating conditions, because those characteristics are determined primarily by the physical dimensions of the tube’s structural elements (cathode, grid(s), and plate) rather than the interactions of subatomic particles in a crystalline lattice.
This is one of the major reasons solid-state amplifier designers typically engineer their circuits to maximize power-efficiency even when it compromises distortion performance, because a power-inefficient amplifier dissipates a lot of energy in the form of waste heat, and transistor characteristics tend to change substantially with temperature. Temperature-induced “drift” makes it difficult to stabilize “Q” points and other important performance-related measures in an amplifier circuit. Unfortunately, power efficiency and low distortion seem to be mutually exclusive design goals.
For example, class A audio amplifier circuits typically exhibit very low distortion levels, but are very wasteful of power, meaning that it would be difficult to engineer a solid-state class A amplifier of any substantial power rating due to the consequent drift of transistor characteristics. Thus, most solid-state audio amplifier designers choose class B circuit configurations for greater efficiency, even though class B designs are notorious for producing a type of distortion known as crossover distortion. However, with tubes it is easy to design a stable class A audio amplifier circuit because tubes are not as adversely affected by the changes in temperature experienced in such a power-inefficient circuit configuration.
Tube performance parameters, though, tend to “drift” more than semiconductor devices when measured over long periods of time (years). One major mechanism of tube “aging” appears to be vacuum leaks: when air enters the inside of a vacuum tube, its electrical characteristics become irreversibly altered. This same phenomenon is a major cause of tube mortality, or why tubes typically do not last as long as their respective solid-state counterparts. When tube vacuum is maintained at a high level, though, excellent performance and life is possible. An example of this is a klystron tube (used to produce the high-frequency radio waves used in a radar system) that lasted for 240,000 hours of operation (cited by Robert S. Symons of Litton Electron Devices Division in his informative paper, “Tubes: Still vital after all these years,” printed in the April 1998 issue of IEEE Spectrum magazine).
If nothing else, the tension between audiophiles over tubes versus semiconductors has spurred a remarkable degree of experimentation and technical innovation, serving as an excellent resource for those wishing to educate themselves on amplifier theory. Taking a wider view, the versatility of electron tube technology (different physical configurations, multiple control grids) hints at the potential for circuit designs of far greater variety than is possible using semiconductors. For this and other reasons, electron tubes will never be “obsolete,” but will continue to serve in niche roles, and to foster innovation for those electronics engineers, inventors, and hobbyists who are unwilling to let their minds be stifled by convention. | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_III_-_Semiconductors_(Kuphaldt)/13%3A_Electron_Tubes/13.12%3A_Tubes_versus_Semiconductors.txt |
Most students of electricity begin their study with what is known as direct current (DC), which is electricity flowing in a constant direction, and/or possessing a voltage with constant polarity. DC is the kind of electricity made by a battery (with definite positive and negative terminals), or the kind of charge generated by rubbing certain types of materials against each other.
Alternating Current vs Direct Current
As useful and as easy to understand as DC is, it is not the only “kind” of electricity in use. Certain sources of electricity (most notably, rotary electro-mechanical generators) naturally produce voltages alternating in polarity, reversing positive and negative over time. Either as a voltage switching polarity or as a current switching direction back and forth, this “kind” of electricity is known as Alternating Current (AC): Figure below
Direct vs alternating current
Whereas the familiar battery symbol is used as a generic symbol for any DC voltage source, the circle with the wavy line inside is the generic symbol for any AC voltage source.
One might wonder why anyone would bother with such a thing as AC. It is true that in some cases AC holds no practical advantage over DC. In applications where electricity is used to dissipate energy in the form of heat, the polarity or direction of current is irrelevant, so long as there is enough voltage and current to the load to produce the desired heat (power dissipation). However, with AC it is possible to build electric generators, motors, and power distribution systems that are far more efficient than DC, and so we find AC used predominantly across the world in high power applications. To explain the details of why this is so, a bit of background knowledge about AC is necessary.
AC Alternators
If a machine is constructed to rotate a magnetic field around a set of stationary wire coils with the turning of a shaft, AC voltage will be produced across the wire coils as that shaft is rotated, in accordance with Faraday’s Law of electromagnetic induction. This is the basic operating principle of an AC generator, also known as an alternator: Figure below
Alternator operation
Notice how the polarity of the voltage across the wire coils reverses as the opposite poles of the rotating magnet pass by. Connected to a load, this reversing voltage polarity will create a reversing current direction in the circuit. The faster the alternator’s shaft is turned, the faster the magnet will spin, resulting in an alternating voltage and current that switches directions more often in a given amount of time.
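For the simple two-pole machine illustrated here, each full shaft revolution produces exactly one cycle of AC, so the output frequency follows directly from the shaft speed. The brief sketch below makes that relationship concrete; the shaft speeds are chosen only for illustration, and the pole-count generalization is the standard relationship for multi-pole machines rather than something drawn from the figure:

```python
def alternator_frequency(rpm, poles=2):
    """Output frequency in Hz: (pole pairs) * (revolutions per second)."""
    return (poles / 2) * (rpm / 60.0)

# A two-pole alternator must spin at 3600 RPM for 60 Hz, or 3000 RPM for 50 Hz:
for rpm in (3600, 3000, 1800):
    print(f"{rpm} RPM, 2 poles  -> {alternator_frequency(rpm):.0f} Hz")

# Adding poles lowers the shaft speed required for the same output frequency:
print(f"1800 RPM, 4 poles -> {alternator_frequency(1800, poles=4):.0f} Hz")
```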
While DC generators work on the same general principle of electromagnetic induction, their construction is not as simple as their AC counterparts. With a DC generator, the coil of wire is mounted in the shaft where the magnet is on the AC alternator, and electrical connections are made to this spinning coil via stationary carbon “brushes” contacting copper strips on the rotating shaft. All this is necessary to switch the coil’s changing output polarity to the external circuit so the external circuit sees a constant polarity: Figure below
DC generator operation
The generator shown above will produce two pulses of voltage per revolution of the shaft, both pulses in the same direction (polarity). In order for a DC generator to produce constant voltage, rather than brief pulses of voltage once every 1/2 revolution, there are multiple sets of coils making intermittent contact with the brushes. The diagram shown above is a bit more simplified than what you would see in real life.
The problems involved with making and breaking electrical contact with a moving coil should be obvious (sparking and heat), especially if the shaft of the generator is revolving at high speed. If the atmosphere surrounding the machine contains flammable or explosive vapors, the practical problems of spark-producing brush contacts are even greater. An AC generator (alternator) does not require brushes and commutators to work, and so is immune to these problems experienced by DC generators.
AC Motors
The benefits of AC over DC with regard to generator design is also reflected in electric motors. While DC motors require the use of brushes to make electrical contact with moving coils of wire, AC motors do not. In fact, AC and DC motor designs are very similar to their generator counterparts (identical for the sake of this tutorial), the AC motor being dependent upon the reversing magnetic field produced by alternating current through its stationary coils of wire to rotate the rotating magnet around on its shaft, and the DC motor being dependent on the brush contacts making and breaking connections to reverse current through the rotating coil every 1/2 rotation (180 degrees).
Transformers
So we know that AC generators and AC motors tend to be simpler than DC generators and DC motors. This relative simplicity translates into greater reliability and lower cost of manufacture. But what else is AC good for? Surely there must be more to it than design details of generators and motors! Indeed there is. There is an effect of electromagnetism known as mutual induction, whereby two or more coils of wire are placed so that the changing magnetic field created by one induces a voltage in the other. If we have two mutually inductive coils and we energize one coil with AC, we will create an AC voltage in the other coil. When used as such, this device is known as a transformer: Figure below
Transformer “transforms” AC voltage and current.
The fundamental significance of a transformer is its ability to step voltage up or down from the powered coil to the unpowered coil. The AC voltage induced in the unpowered (“secondary”) coil is equal to the AC voltage across the powered (“primary”) coil multiplied by the ratio of secondary coil turns to primary coil turns. If the secondary coil is powering a load, the current through the secondary coil is just the opposite: primary coil current multiplied by the ratio of primary to secondary turns. This relationship has a very close mechanical analogy, using torque and speed to represent voltage and current, respectively: Figure below
Speed multiplication gear train steps torque down and speed up. Step-down transformer steps voltage down and current up.
If the winding ratio is reversed so that the primary coil has less turns than the secondary coil, the transformer “steps up” the voltage from the source level to a higher level at the load: Figure below
Speed reduction gear train steps torque up and speed down. Step-up transformer steps voltage up and current down.
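A short numerical sketch of these two relationships may help. The turn counts and source values below are invented for illustration; the calculation simply applies the ideal-transformer ratios described above:

```python
def transformer_output(v_primary, i_primary, n_primary, n_secondary):
    """Ideal transformer: voltage scales by Ns/Np, current by Np/Ns."""
    v_secondary = v_primary * (n_secondary / n_primary)
    i_secondary = i_primary * (n_primary / n_secondary)
    return v_secondary, i_secondary

# Step-down example: 10:1 turns ratio
v, i = transformer_output(v_primary=120.0, i_primary=1.0, n_primary=1000, n_secondary=100)
print(f"step-down: {v:.0f} V, {i:.0f} A")   # 12 V, 10 A

# Step-up example: 1:10 turns ratio (windings reversed)
v, i = transformer_output(v_primary=120.0, i_primary=10.0, n_primary=100, n_secondary=1000)
print(f"step-up:   {v:.0f} V, {i:.0f} A")   # 1200 V, 1 A
```

In both cases the product of voltage and current is the same on each side of the transformer, as it must be for an ideal (lossless) device.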
The transformer’s ability to step AC voltage up or down with ease gives AC an advantage unmatched by DC in the realm of power distribution (Figure below). When transmitting electrical power over long distances, it is far more efficient to do so with stepped-up voltages and stepped-down currents (smaller-diameter wire with less resistive power losses), then step the voltage back down and the current back up for industry, business, or consumer use.
Transformers enable efficient long distance high voltage transmission of electric energy.
Transformer technology has made long-range electric power distribution practical. Without the ability to efficiently step voltage up and down, it would be cost-prohibitive to construct power systems for anything but close-range (within a few miles at most) use.
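The benefit of stepped-up transmission voltage can be made concrete with a rough I²R estimate. In the sketch below, the 10 kW load and the 2 Ω of total line resistance are assumed round numbers, not figures from the text:

```python
P_load = 10_000.0   # watts delivered to the far end of the line (assumed)
R_line = 2.0        # total resistance of the transmission wires, ohms (assumed)

for v_transmit in (240.0, 24_000.0):
    i_line = P_load / v_transmit       # line current needed to carry the power
    p_loss = i_line ** 2 * R_line      # power wasted heating the wires
    print(f"{v_transmit:>8,.0f} V -> {i_line:>6.2f} A line current, "
          f"{p_loss:>8.1f} W lost in the wires")
```

Sending the same 10 kW at 240 volts wastes several kilowatts heating the wires, while at 24,000 volts the loss drops to a fraction of a watt. This is the whole motivation for stepping voltage up before transmission and back down at the point of use.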
As useful as transformers are, they only work with AC, not DC. Because the phenomenon of mutual inductance relies on changing magnetic fields, and direct current (DC) can only produce steady magnetic fields, transformers simply will not work with direct current. Of course, direct current may be interrupted (pulsed) through the primary winding of a transformer to create a changing magnetic field (as is done in automotive ignition systems to produce high-voltage spark plug power from a low-voltage DC battery), but pulsed DC is not that different from AC. Perhaps more than any other reason, this is why AC finds such widespread application in power systems.
• REVIEW:
• DC stands for “Direct Current,” meaning voltage or current that maintains constant polarity or direction, respectively, over time.
• AC stands for “Alternating Current,” meaning voltage or current that changes polarity or direction, respectively, over time.
• AC electromechanical generators, known as alternators, are of simpler construction than DC electromechanical generators.
• AC and DC motor design follows respective generator design principles very closely.
• A transformer is a pair of mutually-inductive coils used to convey AC power from one coil to the other. Often, the number of turns in each coil is set to create a voltage increase or decrease from the powered (primary) coil to the unpowered (secondary) coil.
• Secondary voltage = Primary voltage × (secondary turns / primary turns)
• Secondary current = Primary current × (primary turns / secondary turns) | textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_II_-_Alternating_Current_(Kuphaldt)/01%3A_Basic_AC_Theory/1.01%3A_What_is_Alternating_Current_%28AC%29.txt |
When an alternator produces AC voltage, the voltage switches polarity over time, but does so in a very particular manner. When graphed over time, the “wave” traced by this voltage of alternating polarity from an alternator takes on a distinct shape, known as a sine wave: Figure below
Graph of AC voltage over time (the sine wave).
In the voltage plot from an electromechanical alternator, the change from one polarity to the other is a smooth one, the voltage level changing most rapidly at the zero (“crossover”) point and most slowly at its peak. If we were to graph the trigonometric function of “sine” over a horizontal range of 0 to 360 degrees, we would find the exact same pattern as in Table below.
Trigonometric “sine” function.
The reason why an electromechanical alternator outputs sine-wave AC is due to the physics of its operation. The voltage produced by the stationary coils by the motion of the rotating magnet is proportional to the rate at which the magnetic flux is changing perpendicular to the coils (Faraday’s Law of Electromagnetic Induction). That rate is greatest when the magnet poles are closest to the coils, and least when the magnet poles are furthest away from the coils. Mathematically, the rate of magnetic flux change due to a rotating magnet follows that of a sine function, so the voltage produced by the coils follows that same function.
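Expressed as a formula, the instantaneous coil voltage is simply the peak voltage multiplied by the sine of the shaft angle. A brief sketch, where the 10 volt peak is an arbitrary illustrative value:

```python
import math

V_peak = 10.0   # assumed peak voltage, volts

# Instantaneous voltage versus shaft position for one full revolution:
for angle_deg in range(0, 361, 45):
    v = V_peak * math.sin(math.radians(angle_deg))
    print(f"{angle_deg:>3} degrees -> {v:+6.2f} V")
```

The printed values trace out the same shape as the alternator plot: zero at 0 and 180 degrees, a positive peak at 90 degrees, and a negative peak at 270 degrees.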
If we were to follow the changing voltage produced by a coil in an alternator from any point on the sine wave graph to that point when the wave shape begins to repeat itself, we would have marked exactly one cycle of that wave. This is most easily shown by spanning the distance between identical peaks, but may be measured between any corresponding points on the graph. The degree marks on the horizontal axis of the graph represent the domain of the trigonometric sine function, and also the angular position of our simple two-pole alternator shaft as it rotates: Figure below
Alternator voltage as function of shaft position (time).
Since the horizontal axis of this graph can mark the passage of time as well as shaft position in degrees, the dimension marked for one cycle is often measured in a unit of time, most often seconds or fractions of a second. When expressed as a measurement, this is often called the period of a wave. The period of a wave in degrees is always 360, but the amount of time one period occupies depends on the rate at which the voltage oscillates back and forth.
A more popular measure for describing the alternating rate of an AC voltage or current wave than period is the rate of that back-and-forth oscillation. This is called frequency. The modern unit for frequency is the Hertz (abbreviated Hz), which represents the number of wave cycles completed during one second of time. In the United States of America, the standard power-line frequency is 60 Hz, meaning that the AC voltage oscillates at a rate of 60 complete back-and-forth cycles every second. In Europe, where the power system frequency is 50 Hz, the AC voltage only completes 50 cycles every second. A radio station transmitter broadcasting at a frequency of 100 MHz generates an AC voltage oscillating at a rate of 100 million cycles every second.
Prior to the canonization of the Hertz unit, frequency was simply expressed as “cycles per second.” Older meters and electronic equipment often bore frequency units of “CPS” (Cycles Per Second) instead of Hz. Many people believe the change from self-explanatory units like CPS to Hertz constitutes a step backward in clarity. A similar change occurred when the unit of “Celsius” replaced that of “Centigrade” for metric temperature measurement. The name Centigrade was based on a 100-count (“Centi-”) scale (“-grade”) representing the melting and boiling points of H2O, respectively. The name Celsius, on the other hand, gives no hint as to the unit’s origin or meaning.
Period and frequency are mathematical reciprocals of one another. That is to say, if a wave has a period of 10 seconds, its frequency will be 0.1 Hz, or 1/10 of a cycle per second:
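As a quick worked example of this reciprocal relationship: a wave with a period of 10 seconds has a frequency of 1/(10 s) = 0.1 Hz, while the 60 Hz power-line voltage used in the United States has a period of 1/(60 Hz), about 16.67 milliseconds per cycle.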
An instrument called an oscilloscope, Figure below, is used to display a changing voltage over time on a graphical screen. You may be familiar with the appearance of an ECG or EKG (electrocardiograph) machine, used by physicians to graph the oscillations of a patient’s heart over time. The ECG is a special-purpose oscilloscope expressly designed for medical use. General-purpose oscilloscopes have the ability to display voltage from virtually any voltage source, plotted as a graph with time as the independent variable. The relationship between period and frequency is very useful to know when displaying an AC voltage or current waveform on an oscilloscope screen. By measuring the period of the wave on the horizontal axis of the oscilloscope screen and reciprocating that time value (in seconds), you can determine the frequency in Hertz.
Time period of sinewave is shown on oscilloscope.
Voltage and current are by no means the only physical variables subject to variation over time. Much more common to our everyday experience is sound, which is nothing more than the alternating compression and decompression (pressure waves) of air molecules, interpreted by our ears as a physical sensation. Because alternating current is a wave phenomenon, it shares many of the properties of other wave phenomena, like sound. For this reason, sound (especially structured music) provides an excellent analogy for relating AC concepts.
In musical terms, frequency is equivalent to pitch. Low-pitch notes such as those produced by a tuba or bassoon consist of air molecule vibrations that are relatively slow (low frequency). High-pitch notes such as those produced by a flute or whistle consist of the same type of vibrations in the air, only vibrating at a much faster rate (higher frequency). Figure below is a table showing the actual frequencies for a range of common musical notes.
The frequency in Hertz (Hz) is shown for various musical notes.
Astute observers will notice that all notes on the table bearing the same letter designation are related by a frequency ratio of 2:1. For example, the first frequency shown (designated with the letter “A”) is 220 Hz. The next highest “A” note has a frequency of 440 Hz—exactly twice as many sound wave cycles per second. The same 2:1 ratio holds true for the first A sharp (233.08 Hz) and the next A sharp (466.16 Hz), and for all note pairs found in the table.
Audibly, two notes whose frequencies are exactly double each other sound remarkably similar. This similarity in sound is musically recognized, the shortest span on a musical scale separating such note pairs being called an octave. Following this rule, the next highest “A” note (one octave above 440 Hz) will be 880 Hz, the next lowest “A” (one octave below 220 Hz) will be 110 Hz. A view of a piano keyboard helps to put this scale into perspective: Figure below
An octave is shown on a musical keyboard.
As you can see, one octave is equal to seven white keys’ worth of distance on a piano keyboard. The familiar musical mnemonic (doe-ray-mee-fah-so-lah-tee)—yes, the same pattern immortalized in the whimsical Rodgers and Hammerstein song sung in The Sound of Music—covers one octave from C to C.
While electromechanical alternators and many other physical phenomena naturally produce sine waves, this is not the only kind of alternating wave in existence. Other “waveforms” of AC are commonly produced within electronic circuitry. Here are but a few sample waveforms and their common designations in Figure below.
Some common waveshapes (waveforms).
These waveforms are by no means the only kinds of waveforms in existence. They’re simply a few that are common enough to have been given distinct names. Even in circuits that are supposed to manifest “pure” sine, square, triangle, or sawtooth voltage/current waveforms, the real-life result is often a distorted version of the intended waveshape. Some waveforms are so complex that they defy classification as a particular “type” (including waveforms associated with many kinds of musical instruments). Generally speaking, any waveshape bearing close resemblance to a perfect sine wave is termed sinusoidal, anything different being labeled as non-sinusoidal. Being that the waveform of an AC voltage or current is crucial to its impact in a circuit, we need to be aware of the fact that AC waves come in a variety of shapes.
• REVIEW:
AC produced by an electromechanical alternator follows the graphical shape of a sine wave.
One cycle of a wave is one complete evolution of its shape until the point that it is ready to repeat itself.
The period of a wave is the amount of time it takes to complete one cycle.
Frequency is the number of complete cycles that a wave completes in a given amount of time. Usually measured in Hertz (Hz), 1 Hz being equal to one complete wave cycle per second.
Frequency = 1/(period in seconds)
So far we know that AC voltage alternates in polarity and AC current alternates in direction. We also know that AC can alternate in a variety of different ways, and by tracing the alternation over time we can plot it as a “waveform.” We can measure the rate of alternation by measuring the time it takes for a wave to evolve before it repeats itself (the “period”), and express this as cycles per unit time, or “frequency.” In music, frequency is the same as pitch, which is the essential property distinguishing one note from another.
However, we encounter a measurement problem if we try to express how large or small an AC quantity is. With DC, where quantities of voltage and current are generally stable, we have little trouble expressing how much voltage or current we have in any part of a circuit. But how do you grant a single measurement of magnitude to something that is constantly changing?
One way to express the intensity, or magnitude (also called the amplitude), of an AC quantity is to measure its peak height on a waveform graph. This is known as the peak or crest value of an AC waveform: Figure below
Peak voltage of a waveform.
Another way is to measure the total height between opposite peaks. This is known as the peak-to-peak (P-P) value of an AC waveform: Figure below
Peak-to-peak voltage of a waveform.
Unfortunately, either one of these expressions of waveform amplitude can be misleading when comparing two different types of waves. For example, a square wave peaking at 10 volts is obviously a greater amount of voltage for a greater amount of time than a triangle wave peaking at 10 volts. The effects of these two AC voltages powering a load would be quite different: Figure below
A square wave produces a greater heating effect than the same peak voltage triangle wave.
One way of expressing the amplitude of different waveshapes in a more equivalent fashion is to mathematically average the values of all the points on a waveform’s graph to a single, aggregate number. This amplitude measure is known simply as the average value of the waveform. If we average all the points on the waveform algebraically (that is, to consider their sign, either positive or negative), the average value for most waveforms is technically zero, because all the positive points cancel out all the negative points over a full cycle: Figure below
The average value of a sinewave is zero.
This, of course, will be true for any waveform having equal-area portions above and below the “zero” line of a plot. However, as a practical measure of a waveform’s aggregate value, “average” is usually defined as the mathematical mean of all the points’ absolute values over a cycle. In other words, we calculate the practical average value of the waveform by considering all points on the wave as positive quantities, as if the waveform looked like this: Figure below
Waveform seen by AC “average responding” meter.
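If you have a numerical tool handy, both definitions of “average” are easy to check. The following Python sketch (the 10 volt peak and the sample count are arbitrary choices for illustration) samples one cycle of a sine wave and computes the algebraic and practical averages:

import numpy as np

t = np.linspace(0, 1, 10000, endpoint=False)   # one full cycle, normalized time
v = 10 * np.sin(2 * np.pi * t)                 # 10-volt-peak sine wave

print(np.mean(v))           # algebraic average: essentially zero
print(np.mean(np.abs(v)))   # practical (rectified) average: about 6.37 V, or 0.637 of peak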
Polarity-insensitive mechanical meter movements (meters designed to respond equally to the positive and negative half-cycles of an alternating voltage or current) register in proportion to the waveform’s (practical) average value, because the inertia of the pointer against the tension of the spring naturally averages the force produced by the varying voltage/current values over time. Conversely, polarity-sensitive meter movements vibrate uselessly if exposed to AC voltage or current, their needles oscillating rapidly about the zero mark, indicating the true (algebraic) average value of zero for a symmetrical waveform. When the “average” value of a waveform is referenced in this text, it will be assumed that the “practical” definition of average is intended unless otherwise specified.
Another method of deriving an aggregate value for waveform amplitude is based on the waveform’s ability to do useful work when applied to a load resistance. Unfortunately, an AC measurement based on work performed by a waveform is not the same as that waveform’s “average” value, because the power dissipated by a given load (work performed per unit time) is not directly proportional to the magnitude of either the voltage or current impressed upon it. Rather, power is proportional to the square of the voltage or current applied to a resistance (P = E2/R, and P = I2R). Although the mathematics of such an amplitude measurement might not be straightforward, the utility of it is.
Consider a bandsaw and a jigsaw, two pieces of modern woodworking equipment. Both types of saws use a thin, toothed, motor-powered metal blade to cut wood. But while the bandsaw uses a continuous motion of the blade to cut, the jigsaw uses a back-and-forth motion. The comparison of alternating current (AC) to direct current (DC) may be likened to the comparison of these two saw types: Figure below
Bandsaw-jigsaw analogy of DC vs AC.
The problem of trying to describe the changing quantities of AC voltage or current in a single, aggregate measurement is also present in this saw analogy: how might we express the speed of a jigsaw blade? A bandsaw blade moves with a constant speed, similar to the way DC voltage pushes or DC current moves with a constant magnitude. A jigsaw blade, on the other hand, moves back and forth, its blade speed constantly changing. What is more, the back-and-forth motion of any two jigsaws may not be of the same type, depending on the mechanical design of the saws. One jigsaw might move its blade with a sine-wave motion, while another with a triangle-wave motion. To rate a jigsaw based on its peak blade speed would be quite misleading when comparing one jigsaw to another (or a jigsaw with a bandsaw!). Despite the fact that these different saws move their blades in different manners, they are equal in one respect: they all cut wood, and a quantitative comparison of this common function can serve as a common basis for which to rate blade speed.
Picture a jigsaw and bandsaw side-by-side, equipped with identical blades (same tooth pitch, angle, etc.), equally capable of cutting the same thickness of the same type of wood at the same rate. We might say that the two saws were equivalent or equal in their cutting capacity. Might this comparison be used to assign a “bandsaw equivalent” blade speed to the jigsaw’s back-and-forth blade motion; to relate the wood-cutting effectiveness of one to the other? This is the general idea used to assign a “DC equivalent” measurement to any AC voltage or current: whatever magnitude of DC voltage or current would produce the same amount of heat energy dissipation through an equal resistance: Figure below
An RMS voltage produces the same heating effect as a DC voltage of the same value
In the two circuits above, we have the same amount of load resistance (2 Ω) dissipating the same amount of power in the form of heat (50 watts), one powered by AC and the other by DC. Because the AC voltage source pictured above is equivalent (in terms of power delivered to a load) to a 10 volt DC battery, we would call this a “10 volt” AC source. More specifically, we would denote its voltage value as being 10 volts RMS. The qualifier “RMS” stands for Root Mean Square, the algorithm used to obtain the DC equivalent value from points on a graph (essentially, the procedure consists of squaring all the positive and negative points on a waveform graph, averaging those squared values, then taking the square root of that average to obtain the final answer). Sometimes the alternative terms equivalent or DC equivalent are used instead of “RMS,” but the quantity and principle are both the same.
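The squaring, averaging, and square-root procedure just described is straightforward to verify numerically. This Python sketch (the peak value is chosen arbitrarily so that the result works out to 10 volts RMS, matching the circuit above) applies the algorithm to sampled points of a sine wave and then checks the average power delivered to the 2 Ω load:

import numpy as np

t = np.linspace(0, 1, 10000, endpoint=False)
v = 14.142 * np.sin(2 * np.pi * t)    # sine wave with a peak of about 14.14 V

v_rms = np.sqrt(np.mean(v ** 2))      # root of the mean of the squares
print(v_rms)                          # approximately 10 V RMS

print(np.mean(v ** 2 / 2))            # average power into the 2-ohm load: about 50 W,
                                      # the same as a 10 V DC source would deliver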
RMS amplitude measurement is the best way to relate AC quantities to DC quantities, or other AC quantities of differing waveform shapes, when dealing with measurements of electric power. For other considerations, peak or peak-to-peak measurements may be the best to employ. For instance, when determining the proper size of wire (ampacity) to conduct electric power from a source to a load, RMS current measurement is the best to use, because the principal concern with current is overheating of the wire, which is a function of power dissipation caused by current through the resistance of the wire. However, when rating insulators for service in high-voltage AC applications, peak voltage measurements are the most appropriate, because the principal concern here is insulator “flashover” caused by brief spikes of voltage, irrespective of time.
Peak and peak-to-peak measurements are best performed with an oscilloscope, which can capture the crests of the waveform with a high degree of accuracy due to the fast action of the cathode-ray-tube in response to changes in voltage. For RMS measurements, analog meter movements (D’Arsonval, Weston, iron vane, electrodynamometer) will work so long as they have been calibrated in RMS figures. Because the mechanical inertia and dampening effects of an electromechanical meter movement make the deflection of the needle naturally proportional to the average value of the AC, not the true RMS value, analog meters must be specifically calibrated (or mis-calibrated, depending on how you look at it) to indicate voltage or current in RMS units. The accuracy of this calibration depends on an assumed waveshape, usually a sine wave.
Electronic meters specifically designed for RMS measurement are best for the task. Some instrument manufacturers have designed ingenious methods for determining the RMS value of any waveform. One such manufacturer produces “True-RMS” meters with a tiny resistive heating element powered by a voltage proportional to that being measured. The heating effect of that resistance element is measured thermally to give a true RMS value with no mathematical calculations whatsoever, just the laws of physics in action in fulfillment of the definition of RMS. The accuracy of this type of RMS measurement is independent of waveshape.
For “pure” waveforms, simple conversion coefficients exist for equating Peak, Peak-to-Peak, Average (practical, not algebraic), and RMS measurements to one another: Figure below
Conversion factors for common waveforms.
In addition to RMS, average, peak (crest), and peak-to-peak measures of an AC waveform, there are ratios expressing the proportionality between some of these fundamental measurements. The crest factor of an AC waveform, for instance, is the ratio of its peak (crest) value divided by its RMS value. The form factor of an AC waveform is the ratio of its RMS value divided by its average value. Square-shaped waveforms always have crest and form factors equal to 1, since the peak is the same as the RMS and average values. Sinusoidal waveforms have an RMS value of 0.707 times peak (the reciprocal of the square root of 2) and a form factor of 1.11 (0.707/0.636). Triangle- and sawtooth-shaped waveforms have RMS values of 0.577 times peak (the reciprocal of the square root of 3) and form factors of 1.15 (0.577/0.5).
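As a quick check of these ratios for waveforms with a peak value of 1: a sine wave has an RMS value of 0.707 and an average value of about 0.637, giving a crest factor of 1/0.707, approximately 1.414, and a form factor of approximately 1.11; a triangle wave has an RMS value of 0.577 and an average value of 0.5, giving a crest factor of about 1.73 and a form factor of about 1.15; a square wave has RMS and average values both equal to 1, so its crest and form factors are both exactly 1.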
Bear in mind that the conversion constants shown here for peak, RMS, and average amplitudes of sine waves, square waves, and triangle waves hold true only for pure forms of these waveshapes. The RMS and average values of distorted waveshapes are not related by the same ratios: Figure below
Arbitrary waveforms have no simple conversions.
This is a very important concept to understand when using an analog D’Arsonval meter movement to measure AC voltage or current. An analog D’Arsonval movement, calibrated to indicate sine-wave RMS amplitude, will only be accurate when measuring pure sine waves. If the waveform of the voltage or current being measured is anything but a pure sine wave, the indication given by the meter will not be the true RMS value of the waveform, because the degree of needle deflection in an analog D’Arsonval meter movement is proportional to the average value of the waveform, not the RMS. RMS meter calibration is obtained by “skewing” the span of the meter so that it displays a small multiple of the average value, which will be equal to the RMS value for a particular waveshape and a particular waveshape only.
Since the sine-wave shape is most common in electrical measurements, it is the waveshape assumed for analog meter calibration, and the small multiple used in the calibration of the meter is 1.1107 (the form factor, 0.707/0.636, the ratio of RMS divided by average for a sinusoidal waveform). Any waveshape other than a pure sine wave will have a different ratio of RMS and average values, and thus a meter calibrated for sine-wave voltage or current will not indicate true RMS when reading a non-sinusoidal wave. Bear in mind that this limitation applies only to simple, analog AC meters not employing “True-RMS” technology.
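To put a number on the error this can produce, consider a hypothetical 10-volt-peak square wave measured with a sine-calibrated, average-responding meter: the true RMS and average values of a square wave are both 10 volts, so the meter would indicate roughly 10 V × 1.11, or about 11.1 volts, overstating the true RMS value by about 11 percent.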
• REVIEW:
The amplitude of an AC waveform is its height as depicted on a graph over time. An amplitude measurement can take the form of peak, peak-to-peak, average, or RMS quantity.
Peak amplitude is the height of an AC waveform as measured from the zero mark to the highest positive or lowest negative point on a graph. Also known as the crest amplitude of a wave.
Peak-to-peak amplitude is the total height of an AC waveform as measured from maximum positive to maximum negative peaks on a graph. Often abbreviated as “P-P”.
Average amplitude is the mathematical “mean” of all a waveform’s points over the period of one cycle. Technically, the average amplitude of any waveform with equal-area portions above and below the “zero” line on a graph is zero. However, as a practical measure of amplitude, a waveform’s average value is often calculated as the mathematical mean of all the points’ absolute values (taking all the negative values and considering them as positive). For a sine wave, the average value so calculated is approximately 0.637 of its peak value.
“RMS” stands for Root Mean Square, and is a way of expressing an AC quantity of voltage or current in terms functionally equivalent to DC. For example, 10 volts AC RMS is the amount of voltage that would produce the same amount of heat dissipation across a resistor of given value as a 10 volt DC power supply. Also known as the “equivalent” or “DC equivalent” value of an AC voltage or current. For a sine wave, the RMS value is approximately 0.707 of its peak value.
The crest factor of an AC waveform is the ratio of its peak (crest) to its RMS value.
The form factor of an AC waveform is the ratio of its RMS value to its average value.
Analog, electromechanical meter movements respond proportionally to the average value of an AC voltage or current. When RMS indication is desired, the meter’s calibration must be “skewed” accordingly. This means that the accuracy of an electromechanical meter’s RMS indication is dependent on the purity of the waveform: whether it is the exact same waveshape as the waveform used in calibrating.
Over the course of the next few chapters, you will learn that AC circuit measurements and calculations can get very complicated due to the complex nature of alternating current in circuits with inductance and capacitance. However, with simple circuits (figure below) involving nothing more than an AC power source and resistance, the same laws and rules of DC apply simply and directly.
AC circuit calculations for resistive circuits are the same as for DC.
Series resistances still add, parallel resistances still diminish, and the Laws of Kirchhoff and Ohm still hold true. Actually, as we will discover later on, these rules and laws always hold true, it’s just that we have to express the quantities of voltage, current, and opposition to current in more advanced mathematical forms. With purely resistive circuits, however, these complexities of AC are of no practical consequence, and so we can treat the numbers as though we were dealing with simple DC quantities.
Because all these mathematical relationships still hold true, we can make use of our familiar “table” method of organizing circuit values just as with DC:
One major caveat needs to be given here: all measurements of AC voltage and current must be expressed in the same terms (peak, peak-to-peak, average, or RMS). If the source voltage is given in peak AC volts, then all currents and voltages subsequently calculated are cast in terms of peak units. If the source voltage is given in AC RMS volts, then all calculated currents and voltages are cast in AC RMS units as well. This holds true for any calculation based on Ohm’s Laws, Kirchhoff’s Laws, etc. Unless otherwise stated, all values of voltage and current in AC circuits are generally assumed to be RMS rather than peak, average, or peak-to-peak. In some areas of electronics, peak measurements are assumed, but in most applications (especially industrial electronics) the assumption is RMS.
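For instance, a purely resistive circuit consisting of a 10 volt RMS source and a 5 kΩ resistor (values chosen arbitrarily for illustration) is analyzed exactly as its DC counterpart would be: I = E/R = 10 V / 5 kΩ = 2 mA, and since the source was specified in RMS volts, that 2 mA figure is an RMS current.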
• REVIEW:
All the old rules and laws of DC (Kirchhoff’s Voltage and Current Laws, Ohm’s Law) still hold true for AC. However, with more complex circuits, we may need to represent the AC quantities in more complex form. More on this later, I promise!
The “table” method of organizing circuit values is still a valid analysis tool for AC circuits.
1.05: AC Phase
Things start to get complicated when we need to relate two or more AC voltages or currents that are out of step with each other. By “out of step,” I mean that the two waveforms are not synchronized: that their peaks and zero points do not match up at the same points in time. The graph in figure below illustrates an example of this.
Out of phase waveforms
The two waves shown above (A versus B) are of the same amplitude and frequency, but they are out of step with each other. In technical terms, this is called a phase shift. Earlier we saw how we could plot a “sine wave” by calculating the trigonometric sine function for angles ranging from 0 to 360 degrees, a full circle. The starting point of a sine wave was zero amplitude at zero degrees, progressing to full positive amplitude at 90 degrees, zero at 180 degrees, full negative at 270 degrees, and back to the starting point of zero at 360 degrees. We can use this angle scale along the horizontal axis of our waveform plot to express just how far out of step one wave is with another: Figure below
Wave A leads wave B by 45o
The shift between these two waveforms is about 45 degrees, the “A” wave being ahead of the “B” wave. A sampling of different phase shifts is given in the following graphs to better illustrate this concept: Figure below
Examples of phase shifts.
Because the waveforms in the above examples are at the same frequency, they will be out of step by the same angular amount at every point in time. For this reason, we can express phase shift for two or more waveforms of the same frequency as a constant quantity for the entire wave, and not just an expression of shift between any two particular points along the waves. That is, it is safe to say something like, “voltage ‘A’ is 45 degrees out of phase with voltage ‘B’.” Whichever waveform is ahead in its evolution is said to be leading and the one behind is said to be lagging.
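Phase expressed in degrees translates directly to a fraction of the period, which is how a phase figure can also be read as a time displacement. For example, at the 60 Hz power-line frequency (period of about 16.67 ms), a 45 degree phase shift corresponds to (45/360) × 16.67 ms, or roughly 2.08 milliseconds between corresponding points on the two waveforms.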
Phase shift, like voltage, is always a measurement relative between two things. There’s really no such thing as a waveform with an absolute phase measurement because there’s no known universal reference for phase. Typically in the analysis of AC circuits, the voltage waveform of the power supply is used as a reference for phase, that voltage stated as “xxx volts at 0 degrees.” Any other AC voltage or current in that circuit will have its phase shift expressed in terms relative to that source voltage.
This is what makes AC circuit calculations more complicated than DC. When applying Ohm’s Law and Kirchhoff’s Laws, quantities of AC voltage and current must reflect phase shift as well as amplitude. Mathematical operations of addition, subtraction, multiplication, and division must operate on these quantities of phase shift as well as amplitude. Fortunately, there is a mathematical system of quantities called complex numbers ideally suited for this task of representing amplitude and phase.
Because the subject of complex numbers is so essential to the understanding of AC circuits, the next chapter will be devoted to that subject alone.
• REVIEW:
Phase shift is where two or more waveforms are out of step with each other.
The amount of phase shift between two waves can be expressed in terms of degrees, as defined by the degree units on the horizontal axis of the waveform graph used in plotting the trigonometric sine function.
A leading waveform is defined as one waveform that is ahead of another in its evolution. A lagging waveform is one that is behind another. Example:
Calculations for AC circuit analysis must take into consideration both amplitude and phase shift of voltage and current waveforms to be completely accurate. This requires the use of a mathematical system called complex numbers.
One of the more fascinating applications of electricity is in the generation of invisible ripples of energy called radio waves. The limited scope of this lesson on alternating current does not permit full exploration of the concept, but some of the basic principles will be covered.
With Oersted’s accidental discovery of electromagnetism, it was realized that electricity and magnetism were related to each other. When an electric current was passed through a conductor, a magnetic field was generated perpendicular to the axis of flow. Likewise, if a conductor was exposed to a change in magnetic flux perpendicular to the conductor, a voltage was produced along the length of that conductor. So far, scientists knew that electricity and magnetism always seemed to affect each other at right angles. However, a major discovery lay hidden just beneath this seemingly simple concept of related perpendicularity, and its unveiling was one of the pivotal moments in modern science.
This breakthrough in physics is hard to overstate. The man responsible for this conceptual revolution was the Scottish physicist James Clerk Maxwell (1831-1879), who “unified” the study of electricity and magnetism in four relatively tidy equations. In essence, what he discovered was that electric and magnetic fields were intrinsically related to one another, with or without the presence of a conductive path for electrons to flow. Stated more formally, Maxwell’s discovery was this:
A changing electric field produces a perpendicular magnetic field, and A changing magnetic field produces a perpendicular electric field.
All of this can take place in open space, the alternating electric and magnetic fields supporting each other as they travel through space at the speed of light. This dynamic structure of electric and magnetic fields propagating through space is better known as an electromagnetic wave.
There are many kinds of natural radiative energy composed of electromagnetic waves. Even light is electromagnetic in nature. So are X-rays and “gamma” ray radiation. The only difference between these kinds of electromagnetic radiation is the frequency of their oscillation (alternation of the electric and magnetic fields back and forth in polarity). By using a source of AC voltage and a special device called an antenna, we can create electromagnetic waves (of a much lower frequency than that of light) with ease.
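To get a rough sense of scale (assuming free-space propagation at the speed of light, about 300,000,000 meters per second), the wavelength of an electromagnetic wave is the speed of light divided by its frequency: a 100 MHz radio signal has a wavelength of about 3 meters, while a 60 Hz wave would span roughly 5,000 kilometers.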
An antenna is nothing more than a device built to produce a dispersing electric or magnetic field. Two fundamental types of antennae are the dipole and the loop: Figure below
Dipole and loop antennae
While the dipole looks like nothing more than an open circuit, and the loop a short circuit, these pieces of wire are effective radiators of electromagnetic fields when connected to AC sources of the proper frequency. The two open wires of the dipole act as a sort of capacitor (two conductors separated by a dielectric), with the electric field open to dispersal instead of being concentrated between two closely-spaced plates. The closed wire path of the loop antenna acts like an inductor with a large air core, again providing ample opportunity for the field to disperse away from the antenna instead of being concentrated and contained as in a normal inductor.
As the powered dipole radiates its changing electric field into space, a changing magnetic field is produced at right angles, thus sustaining the electric field further into space, and so on as the wave propagates at the speed of light. As the powered loop antenna radiates its changing magnetic field into space, a changing electric field is produced at right angles, with the same end-result of a continuous electromagnetic wave sent away from the antenna. Either antenna achieves the same basic task: the controlled production of an electromagnetic field.
When attached to a source of high-frequency AC power, an antenna acts as a transmitting device, converting AC voltage and current into electromagnetic wave energy. Antennas also have the ability to intercept electromagnetic waves and convert their energy into AC voltage and current. In this mode, an antenna acts as a receiving device: Figure below
Basic radio transmitter and receiver
While there is much more that may be said about antenna technology, this brief introduction is enough to give you the general idea of what’s going on (and perhaps enough information to provoke a few experiments).
• REVIEW
James Maxwell discovered that changing electric fields produce perpendicular magnetic fields, and vice versa, even in empty space.
A twin set of electric and magnetic fields, oscillating at right angles to each other and traveling at the speed of light, constitutes an electromagnetic wave.
An antenna is a device made of wire, designed to radiate a changing electric field or changing magnetic field when powered by a high-frequency AC source, or intercept an electromagnetic field and convert it to an AC voltage or current.
The dipole antenna consists of two pieces of wire (not touching), primarily generating an electric field when energized, and secondarily producing a magnetic field in space.
The loop antenna consists of a loop of wire, primarily generating a magnetic field when energized, and secondarily producing an electric field in space.
To successfully analyze AC circuits, we need to work with mathematical objects and techniques capable of representing these multi-dimensional quantities. Here is where we need to abandon scalar numbers for something better suited: complex numbers. Just like the example of giving directions from one city to another, AC quantities in a single-frequency circuit have both amplitude (analogy: distance) and phase shift (analogy: direction). A complex number is a single mathematical quantity able to express these two dimensions of amplitude and phase shift at once.
• 2.1: Introduction to Complex Numbers
When analyzing alternating current circuits, we find that quantities of voltage, current, and even resistance (called impedance in AC) are not the familiar one-dimensional quantities we’re used to measuring in DC circuits. Rather, these quantities, because they’re dynamic (alternating in direction and amplitude), possess other dimensions that must be taken into account. Frequency and phase shift are two of these dimensions that come into play.
• 2.2: Vectors and AC Waveforms
When used to describe an AC quantity, the length of a vector represents the amplitude of the wave while the angle of a vector represents the phase angle of the wave relative to some other (reference) waveform.
• 2.3: Simple Vector Addition
Remember that vectors are mathematical objects just like numbers on a number line: they can be added, subtracted, multiplied, and divided. Addition is perhaps the easiest vector operation to visualize, so we’ll begin with that. If vectors with common angles are added, their magnitudes (lengths) add up just like regular scalar quantities.
• 2.4: Complex Vector Addition
If vectors with uncommon angles are added, their magnitudes (lengths) add up quite differently than that of scalar magnitudes.
• 2.5: Polar Form and Rectangular Form Notation for Complex Numbers
• 2.6: Complex Number Arithmetic
• 2.7: More on AC “polarity”
• 2.8: Some Examples with AC Circuits
02: Complex Numbers
If I needed to describe the distance between two cities, I could provide an answer consisting of a single number in miles, kilometers, or some other unit of linear measurement. However, if I were to describe how to travel from one city to another, I would have to provide more information than just the distance between those two cities; I would also have to provide information about the direction to travel, as well.
The kind of information that expresses a single dimension, such as linear distance, is called a scalar quantity in mathematics. Scalar numbers are the kind of numbers you’ve used in most all of your mathematical applications so far. The voltage produced by a battery, for example, is a scalar quantity. So is the resistance of a piece of wire (ohms), or the current through it (amps).
However, when we begin to analyze alternating current circuits, we find that quantities of voltage, current, and even resistance (called impedance in AC) are not the familiar one-dimensional quantities we’re used to measuring in DC circuits. Rather, these quantities, because they’re dynamic (alternating in direction and amplitude), possess other dimensions that must be taken into account. Frequency and phase shift are two of these dimensions that come into play. Even with relatively simple AC circuits, where we’re only dealing with a single frequency, we still have the dimension of phase shift to contend with in addition to the amplitude.
In order to successfully analyze AC circuits, we need to work with mathematical objects and techniques capable of representing these multi-dimensional quantities. Here is where we need to abandon scalar numbers for something better suited: complex numbers. Just like the example of giving directions from one city to another, AC quantities in a single-frequency circuit have both amplitude (analogy: distance) and phase shift (analogy: direction). A complex number is a single mathematical quantity able to express these two dimensions of amplitude and phase shift at once.
Complex numbers are easier to grasp when they’re represented graphically. If I draw a line with a certain length (magnitude) and angle (direction), I have a graphic representation of a complex number which is commonly known in physics as a vector: (Figure below)
A vector has both magnitude and direction.
Like distances and directions on a map, there must be some common frame of reference for angle figures to have any meaning. In this case, directly right is considered to be 0o, and angles are counted in a positive direction going counter-clockwise: (Figure below)
The vector compass
The idea of representing a number in graphical form is nothing new. We all learned this in grade school with the “number line:” (Figure below)
Number line.
We even learned how addition and subtraction works by seeing how lengths (magnitudes) stacked up to give a final answer: (Figure below)
Addition on a “number line”.
Later, we learned that there were ways to designate the values between the whole numbers marked on the line. These were fractional or decimal quantities: (Figure below)
Locating a fraction on the “number line”
Later yet we learned that the number line could extend to the left of zero as well: (Figure below)
“Number line” shows both positive and negative numbers.
These fields of numbers (whole, integer, rational, irrational, real, etc.) learned in grade school share a common trait: they’re all one-dimensional. The straightness of the number line illustrates this graphically. You can move up or down the number line, but all “motion” along that line is restricted to a single axis (horizontal). One-dimensional, scalar numbers are perfectly adequate for counting beads, representing weight, or measuring DC battery voltage, but they fall short of being able to represent something more complex like the distance and direction between two cities, or the amplitude and phase of an AC waveform. To represent these kinds of quantities, we need multidimensional representations. In other words, we need a number line that can point in different directions, and that’s exactly what a vector is.
• REVIEW:
A scalar number is the type of mathematical object that people are used to using in everyday life: a one-dimensional quantity like temperature, length, weight, etc.
A complex number is a mathematical quantity representing two dimensions of magnitude and direction.
A vector is a graphical representation of a complex number. It looks like an arrow, with a starting point, a tip, a definite length, and a definite direction. Sometimes the word phasor is used in electrical applications where the angle of the vector represents phase shift between waveforms.
OK, so how exactly can we represent AC quantities of voltage or current in the form of a vector? The length of the vector represents the magnitude (or amplitude) of the waveform, like this: (Figure below)
Vector length represents AC voltage magnitude.
The greater the amplitude of the waveform, the greater the length of its corresponding vector. The angle of the vector, however, represents the phase shift in degrees between the waveform in question and another waveform acting as a “reference” in time. Usually, when the phase of a waveform in a circuit is expressed, it is referenced to the power supply voltage waveform (arbitrarily stated to be “at” 0o). Remember that phase is always a relative measurement between two waveforms rather than an absolute property. (Figure below) (Figure below)
Vector angle is the phase with respect to another waveform.
Phase shift between waves and vector phase angle
The greater the phase shift in degrees between two waveforms, the greater the angle difference between the corresponding vectors. Being a relative measurement, like voltage, phase shift (vector angle) only has meaning in reference to some standard waveform. Generally this “reference” waveform is the main AC power supply voltage in the circuit. If there is more than one AC voltage source, then one of those sources is arbitrarily chosen to be the phase reference for all other measurements in the circuit.
This concept of a reference point is not unlike that of the “ground” point in a circuit for the benefit of voltage reference. With a clearly defined point in the circuit declared to be “ground,” it becomes possible to talk about voltage “on” or “at” single points in a circuit, being understood that those voltages (always relative between two points) are referenced to “ground.” Correspondingly, with a clearly defined point of reference for phase it becomes possible to speak of voltages and currents in an AC circuit having definite phase angles. For example, if the current in an AC circuit is described as “24.3 milliamps at -64 degrees,” it means that the current waveform has an amplitude of 24.3 mA, and it lags 64o behind the reference waveform, usually assumed to be the main source voltage waveform.
• REVIEW:
When used to describe an AC quantity, the length of a vector represents the amplitude of the wave while the angle of a vector represents the phase angle of the wave relative to some other (reference) waveform.
2.03: Simple Vector Addition
Remember that vectors are mathematical objects just like numbers on a number line: they can be added, subtracted, multiplied, and divided. Addition is perhaps the easiest vector operation to visualize, so we’ll begin with that. If vectors with common angles are added, their magnitudes (lengths) add up just like regular scalar quantities: (Figure below)
Vector magnitudes add like scalars for a common angle.
Similarly, if AC voltage sources with the same phase angle are connected together in series, their voltages add just as you might expect with DC batteries: (Figure below)
“In phase” AC voltages add like DC battery voltages.
Please note the (+) and (-) polarity marks next to the leads of the two AC sources. Even though we know AC doesn’t have “polarity” in the same sense that DC does, these marks are essential to knowing how to reference the given phase angles of the voltages. This will become more apparent in the next example.
If vectors directly opposing each other (180o out of phase) are added together, their magnitudes (lengths) subtract just like positive and negative scalar quantities subtract when added: (Figure below)
Directly opposing vector magnitudes subtract.
Similarly, if opposing AC voltage sources are connected in series, their voltages subtract as you might expect with DC batteries connected in an opposing fashion: (Figure below)
Opposing AC voltages subtract like opposing battery voltages.
Determining whether or not these voltage sources are opposing each other requires an examination of their polarity markings and their phase angles. Notice how the polarity markings in the above diagram seem to indicate additive voltages (from left to right, we see - and + on the 6 volt source, - and + on the 8 volt source). Even though these polarity markings would normally indicate an additive effect in a DC circuit (the two voltages working together to produce a greater total voltage), in this AC circuit they’re actually pushing in opposite directions because one of those voltages has a phase angle of 0o and the other a phase angle of 180o. The result, of course, is a total voltage of 2 volts.
We could have just as well shown the opposing voltages subtracting in series like this: (Figure below)
Opposing voltages in spite of equal phase angles.
Note how the polarities appear to be opposed to each other now, due to the reversal of wire connections on the 8 volt source. Since both sources are described as having equal phase angles (0o), they truly are opposed to one another, and the overall effect is the same as the former scenario with “additive” polarities and differing phase angles: a total voltage of only 2 volts. (Figure below)
Just as there are two ways to express the phase of the sources, there are two ways to express their resultant sum.
The resultant voltage can be expressed in two different ways: 2 volts at 180o with the (-) symbol on the left and the (+) symbol on the right, or 2 volts at 0o with the (+) symbol on the left and the (-) symbol on the right. A reversal of wires from an AC voltage source is the same as phase-shifting that source by 180o. (Figure below)
Example of equivalent voltage sources.
2.04: Complex Vector Addition
If vectors with uncommon angles are added, their magnitudes (lengths) add up quite differently than that of scalar magnitudes: (Figure below)
Vector magnitudes do not directly add for unequal angles.
If two AC voltages—90o out of phase—are added together by being connected in series, their voltage magnitudes do not directly add or subtract as with scalar voltages in DC. Instead, these voltage quantities are complex quantities, and just like the above vectors, which add up in a trigonometric fashion, a 6 volt source at 0o added to an 8 volt source at 90o results in 10 volts at a phase angle of 53.13o: (Figure below)
The 6V and 8V sources add to 10V with the help of trigonometry.
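The arithmetic behind that result is simply the right-triangle relationship between the two vectors: the total magnitude is the hypotenuse of a 6-8-10 right triangle (the square root of 36 + 64, which is 10 volts), and the phase angle is the arctangent of 8/6, approximately 53.13o, measured with respect to the 6 volt (0o) source.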
Compared to DC circuit analysis, this is very strange indeed. Note that it is possible to obtain voltmeter indications of 6 and 8 volts, respectively, across the two AC voltage sources, yet only read 10 volts for a total voltage!
There is no suitable DC analogy for what we’re seeing here with two AC voltages slightly out of phase. DC voltages can only directly aid or directly oppose, with nothing in between. With AC, two voltages can be aiding or opposing one another to any degree between fully-aiding and fully-opposing, inclusive. Without the use of vector (complex number) notation to describe AC quantities, it would be very difficult to perform mathematical calculations for AC circuit analysis.
In the next section, we’ll learn how to represent vector quantities in symbolic rather than graphical form. Vector and triangle diagrams suffice to illustrate the general concept, but more precise methods of symbolism must be used if any serious calculations are to be performed on these quantities.
Review
DC voltages can only either directly aid or directly oppose each other when connected in series. AC voltages may aid or oppose to any degree depending on the phase shift between them.
In order to work with complex numbers without drawing vectors, we first need some kind of standard mathematical notation. There are two basic forms of complex number notation: polar and rectangular.
Polar Form of a Complex Number
Polar form is where a complex number is denoted by the length (otherwise known as the magnitude, absolute value, or modulus) and the angle of its vector (usually denoted by an angle symbol that looks like this: ∠).
To use the map analogy, polar notation for the vector from New York City to San Diego would be something like “2400 miles, southwest.” Here are two examples of vectors and their polar notations:
Vectors with polar notations.
Standard orientation for vector angles in AC circuit calculations defines 0o as being to the right (horizontal), making 90o straight up, 180o to the left, and 270o straight down. Please note that vectors angled “down” can have angles represented in polar form as positive numbers in excess of 180, or as equivalent negative angles (the same position measured clockwise from 0o). For example, a vector angled ∠ 270o (straight down) can also be said to have an angle of -90o. (Figure below) The above vector on the right (7.81 ∠ 230.19o) can also be denoted as 7.81 ∠-129.81o.
The vector compass
Rectangular Form of a Complex Number
Rectangular form, on the other hand, is where a complex number is denoted by its respective horizontal and vertical components. In essence, the angled vector is taken to be the hypotenuse of a right triangle, described by the lengths of the adjacent and opposite sides. Rather than describing a vector’s length and direction by denoting magnitude and angle, it is described in terms of “how far left/right” and “how far up/down.”
These two-dimensional figures (horizontal and vertical) are symbolized by two numerical figures. In order to distinguish the horizontal and vertical dimensions from each other, the vertical is prefixed with a lower-case “i” (in pure mathematics) or “j” (in electronics). These lower-case letters do not represent a physical variable (such as instantaneous current, also symbolized by a lower-case letter “i”), but rather are mathematical operators used to distinguish the vector’s vertical component from its horizontal component. As a complete complex number, the horizontal and vertical quantities are written as a sum: (Figure below)
In “rectangular” form the vector’s length and direction are denoted in terms of its horizontal and vertical span, the first number representing the horizontal (“real”) and the second number (with the “j” prefix) representing the vertical (“imaginary”) dimensions.
The horizontal component is referred to as the real component, since that dimension is compatible with normal, scalar (“real”) numbers. The vertical component is referred to as the imaginary component, since that dimension lies in a different direction, totally alien to the scale of the real numbers. (Figure below)
Vector compass showing real and imaginary axes
The “real” axis of the graph corresponds to the familiar number line we saw earlier: the one with both positive and negative values on it. The “imaginary” axis of the graph corresponds to another number line situated at 90o to the “real” one. Vectors being two-dimensional things, we must have a two-dimensional “map” upon which to express them, thus the two number lines perpendicular to each other: (Figure below)
Vector compass with real and imaginary (“j”) number lines.
Converting from Polar Form to Rectangular Form
Either method of notation is valid for complex numbers. The primary reason for having two methods of notation is for ease of longhand calculation, rectangular form lending itself to addition and subtraction, and polar form lending itself to multiplication and division. Conversion between the two notational forms involves simple trigonometry. To convert from polar to rectangular, find the real component by multiplying the polar magnitude by the cosine of the angle, and the imaginary component by multiplying the polar magnitude by the sine of the angle. This may be understood more readily by drawing the quantities as sides of a right triangle, the hypotenuse of the triangle representing the vector itself (its length and angle with respect to the horizontal constituting the polar form), the horizontal and vertical sides representing the “real” and “imaginary” rectangular components, respectively: (Figure below)
Magnitude vector in terms of real (4) and imaginary (j3) components.
Converting from Rectangular Form to Polar Form
To convert from rectangular to polar, find the polar magnitude through the use of the Pythagorean Theorem (the polar magnitude is the hypotenuse of a right triangle, and the real and imaginary components are the adjacent and opposite sides, respectively), and the angle by taking the arctangent of the imaginary component divided by the real component:
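Stated as formulas, the polar magnitude is the square root of the sum of the squares of the real and imaginary components, and the polar angle is the arctangent of the imaginary component divided by the real component. If you would rather let software do the trigonometry, the following Python sketch (using the standard cmath and math modules, with the 4 + j3 value from the earlier figure as the example) performs both conversions:

import cmath, math

z = complex(4, 3)                          # rectangular form: 4 + j3

magnitude, angle = cmath.polar(z)          # rectangular to polar
print(magnitude, math.degrees(angle))      # 5.0 and about 36.87 degrees

z2 = cmath.rect(10, math.radians(53.13))   # polar to rectangular: 10 at 53.13 degrees
print(z2)                                  # approximately (6 + 8j)

Note that Python writes the imaginary component with a trailing “j”, matching the electronics convention described above.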
Review
• Polar notation denotes a complex number in terms of its vector’s length and angular direction from the starting point. Example: fly 45 miles ∠ 203o (West by Southwest).
• Rectangular notation denotes a complex number in terms of its horizontal and vertical dimensions. Example: drive 41 miles West, then turn and drive 18 miles South.
• In rectangular notation, the first quantity is the “real” component (horizontal dimension of vector) and the second quantity is the “imaginary” component (vertical dimension of vector). The imaginary component is preceded by a lower-case “j,” sometimes called the j operator.
• Both polar and rectangular forms of notation for a complex number can be related graphically in the form of a right triangle, with the hypotenuse representing the vector itself (polar form: hypotenuse length = magnitude; angle with respect to horizontal side = angle), the horizontal side representing the rectangular “real” component, and the vertical side representing the rectangular “imaginary” component.
2.06: Complex Number Arithmetic
Since complex numbers are legitimate mathematical entities, just like scalar numbers, they can be added, subtracted, multiplied, divided, squared, inverted, and such, just like any other kind of number. Some scientific calculators are programmed to directly perform these operations on two or more complex numbers, but these operations can also be done “by hand.” This section will show you how the basic operations are performed. It is highly recommended that you equip yourself with a scientific calculator capable of performing arithmetic functions easily on complex numbers. It will make your study of AC circuit much more pleasant than if you’re forced to do all calculations the longer way.
Addition and subtraction with complex numbers in rectangular form is easy. For addition, simply add up the real components of the complex numbers to determine the real component of the sum, and add up the imaginary components of the complex numbers to determine the imaginary component of the sum:
When subtracting complex numbers in rectangular form, simply subtract the real component of the second complex number from the real component of the first to arrive at the real component of the difference, and subtract the imaginary component of the second complex number from the imaginary component of the first to arrive the imaginary component of the difference:
For longhand multiplication and division, polar is the favored notation to work with. When multiplying complex numbers in polar form, simply multiply the polar magnitudes of the complex numbers to determine the polar magnitude of the product, and add the angles of the complex numbers to determine the angle of the product:
To obtain the reciprocal of a complex number, or “invert” it (1/x), simply divide the number (in polar form) into a scalar value of 1, which is nothing more than a complex number with no imaginary component (angle = 0):
These are the basic operations you will need to know in order to manipulate complex numbers in the analysis of AC circuits. Operations with complex numbers are by no means limited just to addition, subtraction, multiplication, division, and inversion, however. Virtually any arithmetic operation that can be done with scalar numbers can be done with complex numbers, including powers, roots, solving simultaneous equations with complex coefficients, and even trigonometric functions (although this involves a whole new perspective in trigonometry called hyperbolic functions which is well beyond the scope of this discussion). Be sure that you’re familiar with the basic arithmetic operations of addition, subtraction, multiplication, division, and inversion, and you’ll have little trouble with AC circuit analysis.
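For readers who want to check their longhand work, Python’s built-in complex number type and cmath module follow exactly these rules. The two values below are arbitrary examples written in rectangular form:

import cmath, math

a = complex(4, 3)     # 4 + j3
b = complex(1, -2)    # 1 - j2

print(a + b)          # rectangular addition: (5+1j)
print(a - b)          # rectangular subtraction: (3+5j)

mag, ang = cmath.polar(a * b)                      # multiply, then express in polar
print(mag, math.degrees(ang))                      # about 11.18 at about -26.57 degrees

mag_a, ang_a = cmath.polar(a)
mag_b, ang_b = cmath.polar(b)
print(mag_a * mag_b, math.degrees(ang_a + ang_b))  # same result: magnitudes multiply, angles add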
Review
• To add complex numbers in rectangular form, add the real components and add the imaginary components. Subtraction is similar.
• To multiply complex numbers in polar form, multiply the magnitudes and add the angles. To divide, divide the magnitudes and subtract one angle from the other.
Complex numbers are useful for AC circuit analysis because they provide a convenient method of symbolically denoting phase shift between AC quantities like voltage and current. However, for most people the equivalence between abstract vectors and real circuit quantities is not an easy one to grasp. Earlier in this chapter, we saw how AC voltage sources are given voltage figures in complex form (magnitude and phase angle), as well as polarity markings. Being that alternating current has no set “polarity” as direct current does, these polarity markings and their relationship to phase angle tends to be confusing. This section is written in the attempt to clarify some of these issues.
Voltage is an inherently relative quantity. When we measure a voltage, we have a choice in how we connect a voltmeter or other voltage-measuring instrument to the source of voltage, as there are two points between which the voltage exists, and two test leads on the instrument with which to make connection. In DC circuits, we denote the polarity of voltage sources and voltage drops explicitly, using “+” and “-” symbols, and use color-coded meter test leads (red and black). If a digital voltmeter indicates a negative DC voltage, we know that its test leads are connected “backward” to the voltage (red lead connected to the “-” and black lead to the “+”).
Batteries have their polarity designated by way of intrinsic symbology: the short-line side of a battery is always the negative (-) side and the long-line side always the positive (+): (Figure below)
Conventional battery polarity.
Although it would be mathematically correct to represent a battery’s voltage as a negative figure with reversed polarity markings, it would be decidedly unconventional: (Figure below)
Decidedly unconventional polarity marking.
Interpreting such notation might be easier if the “+” and “-” polarity markings were viewed as reference points for voltmeter test leads, the “+” meaning “red” and the “-” meaning “black.” A voltmeter connected to the above battery with red lead to the bottom terminal and black lead to the top terminal would indeed indicate a negative voltage (-6 volts). Actually, this form of notation and interpretation is not as unusual as you might think: it is commonly encountered in problems of DC network analysis where “+” and “-” polarity marks are initially drawn according to educated guess, and later interpreted as correct or “backward” according to the mathematical sign of the figure calculated.
In AC circuits, though, we don’t deal with “negative” quantities of voltage. Instead, we describe to what degree one voltage aids or opposes another by phase: the time-shift between two waveforms. We never describe an AC voltage as being negative in sign, because the facility of polar notation allows for vectors pointing in an opposite direction. If one AC voltage directly opposes another AC voltage, we simply say that one is 180o out of phase with the other.
Still, voltage is relative between two points, and we have a choice in how we might connect a voltage-measuring instrument between those two points. The mathematical sign of a DC voltmeter’s reading has meaning only in the context of its test lead connections: which terminal the red lead is touching, and which terminal the black lead is touching. Likewise, the phase angle of an AC voltage has meaning only in the context of knowing which of the two points is considered the “reference” point. Because of this fact, “+” and “-” polarity marks are often placed by the terminals of an AC voltage in schematic diagrams to give the stated phase angle a frame of reference.
Let’s review these principles with some graphical aids. First, the principle of relating test lead connections to the mathematical sign of a DC voltmeter indication: (Figure below)
Test lead colors provide a frame of reference for interpreting the sign (+ or -) of the meter’s indication.
The mathematical sign of a digital DC voltmeter’s display has meaning only in the context of its test lead connections. Consider the use of a DC voltmeter in determining whether or not two DC voltage sources are aiding or opposing each other, assuming that both sources are unlabeled as to their polarities. Using the voltmeter to measure across the first source: (Figure below)
(+) Reading indicates black is (-), red is (+).
This first measurement of +24 volts across the left-hand voltage source tells us that the black lead of the meter really is touching the negative side of voltage source #1, and the red lead of the meter really is touching the positive. Thus, we know source #1 is a battery facing in this orientation: (Figure below)
24V source is polarized (-) to (+).
Measuring the other unknown voltage source: (Figure below)
(-) Reading indicates black is (+), red is (-).
This second voltmeter reading, however, is a negative (-) 17 volts, which tells us that the black test lead is actually touching the positive side of voltage source #2, while the red test lead is actually touching the negative. Thus, we know that source #2 is a battery facing in the opposite direction: (Figure below)
17V source is polarized (+) to (-).
It should be obvious to any experienced student of DC electricity that these two batteries are opposing one another. By definition, opposing voltages subtract from one another, so we subtract 17 volts from 24 volts to obtain the total voltage across the two: 7 volts.
We could, however, draw the two sources as nondescript boxes, labeled with the exact voltage figures obtained by the voltmeter, the polarity marks indicating voltmeter test lead placement: (Figure below)
Voltmeter readings as read from meters.
According to this diagram, the polarity marks (which indicate meter test lead placement) indicate the sources aiding each other. By definition, aiding voltage sources add with one another to form the total voltage, so we add 24 volts to -17 volts to obtain 7 volts: still the correct answer. If we let the polarity markings guide our decision to either add or subtract voltage figures—whether those polarity markings represent the true polarity or just the meter test lead orientation—and include the mathematical signs of those voltage figures in our calculations, the result will always be correct. Again, the polarity markings serve as frames of reference to place the voltage figures’ mathematical signs in proper context.
The same is true for AC voltages, except that phase angle substitutes for mathematical sign. In order to relate multiple AC voltages at different phase angles to each other, we need polarity markings to provide frames of reference for those voltages’ phase angles. (Figure below)
Take for example the following circuit:
Phase angle substitutes for ± sign.

The polarity markings show these two voltage sources aiding each other, so to determine the total voltage across the resistor we must add the voltage figures of 10 V ∠ 0o and 6 V ∠ 45o together to obtain 14.861 V ∠ 16.59o. However, it would be perfectly acceptable to represent the 6 volt source as 6 V ∠ 225o, with a reversed set of polarity markings, and still arrive at the same total voltage: (Figure below)
Reversing the voltmeter leads on the 6V source changes the phase angle by 180o.
6 V ∠ 45o with negative on the left and positive on the right is exactly the same as 6 V ∠ 225o with positive on the left and negative on the right: the reversal of polarity markings perfectly complements the addition of 180o to the phase angle designation: (Figure below)
Reversing polarity adds 180o to phase angle
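The equivalence between reversing a source’s polarity marks and adding 180o to its phase angle is easy to confirm numerically. A minimal Python sketch (illustrative only), using the 10 V ∠ 0o and 6 V ∠ 45o figures from the circuit above:

```python
import cmath, math

def polar(mag, deg):
    """Build a complex number from a magnitude and an angle in degrees."""
    return cmath.rect(mag, math.radians(deg))

def show(z):
    mag, ang = cmath.polar(z)
    return f"{mag:.3f} V at {math.degrees(ang):.2f} degrees"

e1 = polar(10, 0)    # 10 V at 0 degrees
e2 = polar(6, 45)    # 6 V at 45 degrees, polarity marks aiding e1

print(show(e1 + e2))    # 14.861 V at 16.59 degrees

# Reversing the polarity marks on the 6 V source is the same as
# adding 180 degrees to its phase angle:
print(show(polar(6, 225)), show(-e2))
# both print: 6.000 V at -135.00 degrees (the same angle as 225 degrees)
```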
Unlike DC voltage sources, whose symbols intrinsically define polarity by means of short and long lines, AC voltage symbols have no intrinsic polarity marking. Therefore, any polarity marks must be included as additional symbols on the diagram, and there is no one “correct” way in which to place them. They must, however, correlate with the given phase angle to represent the true phase relationship of that voltage with other voltages in the circuit.
REVIEW
• Polarity markings are sometimes given to AC voltages in circuit schematics in order to provide a frame of reference for their phase angles.
Let’s connect three AC voltage sources in series and use complex numbers to determine additive voltages. All the rules and laws learned in the study of DC circuits apply to AC circuits as well (Ohm’s Law, Kirchhoff’s Laws, network analysis methods), with the exception of power calculations (Joule’s Law). The only qualification is that all variables must be expressed in complex form, taking into account phase as well as magnitude, and all voltages and currents must be of the same frequency (in order that their phase relationships remain constant). (Figure below)
KVL allows addition of complex voltages.
The polarity marks for all three voltage sources are oriented in such a way that their stated voltages should add to make the total voltage across the load resistor. Notice that although magnitude and phase angle is given for each AC voltage source, no frequency value is specified. If this is the case, it is assumed that all frequencies are equal, thus meeting our qualifications for applying DC rules to an AC circuit (all figures given in complex form, all of the same frequency). The setup of our equation to find total voltage appears as such:
Graphically, the vectors add up as shown in Figure below.
Graphic addition of vector voltages.
The sum of these vectors will be a resultant vector originating at the starting point for the 22 volt vector (dot at upper-left of diagram) and terminating at the ending point for the 15 volt vector (arrow tip at the middle-right of the diagram): (Figure below)
Resultant is equivalent to the vector sum of the three original voltages.
In order to determine what the resultant vector’s magnitude and angle are without resorting to graphic images, we can convert each one of these polar-form complex numbers into rectangular form and add. Remember, we’re adding these figures together because the polarity marks for the three voltage sources are oriented in an additive manner:
In polar form, this equates to 36.8052 volts ∠ -20.5018o. What this means in real terms is that the voltage measured across these three voltage sources will be 36.8052 volts, lagging the 15 volt (0o phase reference) by 20.5018o. A voltmeter connected across these points in a real circuit would only indicate the polar magnitude of the voltage (36.8052 volts), not the angle. An oscilloscope could be used to display two voltage waveforms and thus provide a phase shift measurement, but not a voltmeter. The same principle holds true for AC ammeters: they indicate the polar magnitude of the current, not the phase angle.
This is extremely important in relating calculated figures of voltage and current to real circuits. Although rectangular notation is convenient for addition and subtraction, and was indeed the final step in our sample problem here, it is not very applicable to practical measurements. Rectangular figures must be converted to polar figures (specifically polar magnitude) before they can be related to actual circuit measurements.
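This conversion-and-addition step can be reproduced with any tool that handles complex numbers. The sketch below (illustrative, not from the original) uses Python’s cmath; the 22 volt source’s phase angle is not restated in this text, so -64o is assumed here, a value consistent with the stated total of 36.8052 V ∠ -20.5018o:

```python
import cmath, math

def polar(mag, deg):
    return cmath.rect(mag, math.radians(deg))

# Three series-aiding sources (the -64 degree angle is an assumption,
# chosen to agree with the stated 36.8052 V total)
e1 = polar(22, -64)
e2 = polar(12, 35)
e3 = polar(15, 0)

total = e1 + e2 + e3                       # Kirchhoff's Voltage Law
mag, ang = cmath.polar(total)
print(f"{mag:.4f} V at {math.degrees(ang):.4f} degrees")
# prints approximately 36.8052 V at -20.5018 degrees
```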
We can use SPICE to verify the accuracy of our results. In this test circuit, the 10 kΩ resistor value is quite arbitrary. It’s there so that SPICE does not declare an open-circuit error and abort analysis. Also, the choice of frequency for the simulation (60 Hz) is quite arbitrary, because resistors respond uniformly to all frequencies of AC voltage and current. There are other components (notably capacitors and inductors) which do not respond uniformly to different frequencies, but that is another subject! (Figure below)
Spice circuit schematic.
Sure enough, we get a total voltage of 36.81 volts ∠ -20.5o (with reference to the 15 volt source, whose phase angle was arbitrarily stated at zero degrees so as to be the “reference” waveform).
At first glance, this is counter-intuitive. How is it possible to obtain a total voltage of just over 36 volts with 15 volt, 12 volt, and 22 volt supplies connected in series? With DC, this would be impossible, as voltage figures will either directly add or subtract, depending on polarity. But with AC, our “polarity” (phase shift) can vary anywhere in between full-aiding and full-opposing, and this allows for such paradoxical summing.
What if we took the same circuit and reversed one of the supply’s connections? Its contribution to the total voltage would then be the opposite of what it was before: (Figure below)
Polarity of E2 (12V) is reversed.
Note how the 12 volt supply’s phase angle is still referred to as 35o, even though the leads have been reversed. Remember that the phase angle of any voltage drop is stated in reference to its noted polarity. Even though the angle is still written as 35o, the vector will be drawn 180o opposite of what it was before: (Figure below)
Direction of E2 is reversed.
The resultant (sum) vector should begin at the upper-left point (origin of the 22 volt vector) and terminate at the right arrow tip of the 15 volt vector: (Figure below)
Resultant is vector sum of voltage sources.
The connection reversal on the 12 volt supply can be represented in two different ways in polar form: by an addition of 180o to its vector angle (making it 12 volts ∠ 215o), or a reversal of sign on the magnitude (making it -12 volts ∠ 35o). Either way, conversion to rectangular form yields the same result:
The resulting addition of voltages in rectangular form, then:
In polar form, this equates to 30.4964 V ∠ -60.9368o. Once again, we will use SPICE to verify the results of our calculations:
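The same sketch, re-run with the 12 volt source reversed (and the same assumed -64o angle on the 22 volt source as before), confirms both the equivalence of the two notations and the new total:

```python
import cmath, math

def polar(mag, deg):
    return cmath.rect(mag, math.radians(deg))

# The reversed 12 V source can be written two ways; they are identical:
assert abs(polar(12, 215) - (-polar(12, 35))) < 1e-9

total = polar(22, -64) + polar(12, 215) + polar(15, 0)
mag, ang = cmath.polar(total)
print(f"{mag:.4f} V at {math.degrees(ang):.4f} degrees")
# prints approximately 30.4964 V at -60.9368 degrees
```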
REVIEW:
• All the laws and rules of DC circuits apply to AC circuits, with the exception of power calculations (Joule’s Law), so long as all values are expressed and manipulated in complex form, and all voltages and currents are at the same frequency.
• When reversing the direction of a vector (equivalent to reversing the polarity of an AC voltage source in relation to other voltage sources), it can be expressed in either of two different ways: adding 180o to the angle, or reversing the sign of the magnitude.
• Meter measurements in an AC circuit correspond to the polar magnitudes of calculated values. Rectangular expressions of complex quantities in an AC circuit have no direct, empirical equivalent, although they are convenient for performing addition and subtraction, as Kirchhoff’s Voltage and Current Laws require.
• 3.1: AC Resistor Circuits (Inductive)
• 3.2: AC Inductor Circuits
Inductors do not behave the same way resistors do. Whereas resistors simply oppose the flow of electrons through them (by dropping a voltage directly proportional to the current), inductors oppose changes in current through them, by dropping a voltage directly proportional to the rate of change of current. In accordance with Lenz’s Law, this induced voltage is always of such a polarity as to try to maintain current at its present value.
• 3.3: Series Resistor-Inductor Circuits
In the previous section, we explored what would happen in simple resistor-only and inductor-only AC circuits. Now we will mix the two components together in series form and investigate the effects.
• 3.4: Parallel Resistor-Inductor Circuits
• 3.5: Inductor Quirks
• 3.6: What Is the Skin Effect? The Skin Depth of Copper in Electrical Engineering
The skin effect is where alternating current tends to avoid travel through the center of a solid conductor, limiting itself to conduction near the surface. This effectively limits the cross-sectional conductor area available to carry alternating electron flow, increasing the resistance of that conductor above what it would normally be for direct current.
03: Reactance and Impedance - Inductive
Pure resistive AC circuit: resistor voltage and current are in phase.
If we were to plot the current and voltage for a very simple AC circuit consisting of a source and a resistor (Figure above), it would look something like this: (Figure below)
Voltage and current “in phase” for resistive circuit.
Because the resistor simply and directly resists the flow of electrons at all periods of time, the waveform for the voltage drop across the resistor is exactly in phase with the waveform for the current through it. We can look at any point in time along the horizontal axis of the plot and compare those values of current and voltage with each other (any “snapshot” look at the values of a wave are referred to as instantaneous values, meaning the values at that instant in time). When the instantaneous value for current is zero, the instantaneous voltage across the resistor is also zero. Likewise, at the moment in time where the current through the resistor is at its positive peak, the voltage across the resistor is also at its positive peak, and so on. At any given point in time along the waves, Ohm’s Law holds true for the instantaneous values of voltage and current.
We can also calculate the power dissipated by this resistor, and plot those values on the same graph: (Figure below)
Instantaneous AC power in a pure resistive circuit is always positive.
3.02: AC Inductor Circuits
Resistors vs. Inductors
Inductors do not behave the same way resistors do. Whereas resistors simply oppose the flow of electrons through them (by dropping a voltage directly proportional to the current), inductors oppose changes in current through them, by dropping a voltage directly proportional to the rate of change of current. In accordance with Lenz’s Law, this induced voltage is always of such a polarity as to try to maintain current at its present value. That is, if current is increasing in magnitude, the induced voltage will “push against” the electron flow; if current is decreasing, the polarity will reverse and “push with” the electron flow to oppose the decrease. This opposition to current change is called reactance, rather than resistance.
Expressed mathematically, the relationship between the voltage dropped across the inductor and rate of current change through the inductor is as such:
\[e = L \dfrac{di}{dt}\]
Alternating Current in a Simple Inductive Circuit
The expression di/dt is one from calculus, meaning the rate of change of instantaneous current (i) over time, in amps per second. The inductance (L) is in Henrys, and the instantaneous voltage (e), of course, is in volts. Sometimes you will find the instantaneous voltage expressed as “v” instead of “e” (v = L di/dt), but it means the exact same thing. To show what happens with alternating current, let’s analyze a simple inductor circuit: (Figure below)
Pure inductive circuit: Inductor current lags inductor voltage by 90o.
If we were to plot the current and voltage for this very simple circuit, it would look something like this: (Figure below)
Pure inductive circuit, waveforms.
Remember, the voltage dropped across an inductor is a reaction against the change in current through it. Therefore, the instantaneous voltage is zero whenever the instantaneous current is at a peak (zero change, or level slope, on the current sine wave), and the instantaneous voltage is at a peak wherever the instantaneous current is at maximum change (the points of steepest slope on the current wave, where it crosses the zero line). This results in a voltage wave that is 90o out of phase with the current wave. Looking at the graph, the voltage wave seems to have a “head start” on the current wave; the voltage “leads” the current, and the current “lags” behind the voltage. (Figure below)
Current lags voltage by 90o in a pure inductive circuit.
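This 90o relationship can be checked directly against the defining equation e = L di/dt: differentiating a sine wave produces a cosine wave, which is the same waveform shifted a quarter-cycle ahead.

\[i(t) = I_{peak}\sin(2\pi f t) \quad\Rightarrow\quad e(t) = L\dfrac{di}{dt} = 2\pi f L \, I_{peak}\cos(2\pi f t) = 2\pi f L \, I_{peak}\sin(2\pi f t + 90^o)\]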
Things get even more interesting when we plot the power for this circuit: (Figure below)
In a pure inductive circuit, instantaneous power may be positive or negative
Because instantaneous power is the product of the instantaneous voltage and the instantaneous current (p=ie), the power equals zero whenever the instantaneous current or voltage is zero. Whenever the instantaneous current and voltage are both positive (above the line), the power is positive. As with the resistor example, the power is also positive when the instantaneous current and voltage are both negative (below the line). However, because the current and voltage waves are 90o out of phase, there are times when one is positive while the other is negative, resulting in equally frequent occurrences of negative instantaneous power.
What is Negative Power?
But what does negative power mean? It means that the inductor is releasing power back to the circuit, while a positive power means that it is absorbing power from the circuit. Since the positive and negative power cycles are equal in magnitude and duration over time, the inductor releases just as much power back to the circuit as it absorbs over the span of a complete cycle. What this means in a practical sense is that the reactance of an inductor dissipates a net energy of zero, quite unlike the resistance of a resistor, which dissipates energy in the form of heat. Mind you, this is for perfect inductors only, which have no wire resistance.
Reactance vs. Resistance
An inductor’s opposition to change in current translates to an opposition to alternating current in general, which is by definition always changing in instantaneous magnitude and direction. This opposition to alternating current is similar to resistance but different in that it always results in a phase shift between current and voltage, and it dissipates zero power. Because of the differences, it has a different name: reactance. Reactance to AC is expressed in ohms, just like resistance is, except that its mathematical symbol is X instead of R. To be specific, reactance associated with an inductor is usually symbolized by the capital letter X with a letter L as a subscript, like this: XL.
Since inductors drop voltage in proportion to the rate of current change, they will drop more voltage for faster-changing currents, and less voltage for slower-changing currents. What this means is that reactance in ohms for any inductor is directly proportional to the frequency of the alternating current. The exact formula for determining reactance is as follows:
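\[X_L = 2\pi f L\]

where f is the frequency in Hertz and L is the inductance in Henrys.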
If we expose a 10 mH inductor to frequencies of 60, 120, and 2500 Hz, it will manifest the reactances in the table below.
Reactance of a 10 mH inductor:
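The table values themselves are straightforward to reproduce. A short Python sketch (illustrative, not from the original text):

```python
import math

L = 10e-3                                 # 10 mH inductor
for f in (60, 120, 2500):                 # frequencies in hertz
    XL = 2 * math.pi * f * L              # inductive reactance in ohms
    print(f"{f:>5} Hz : {XL:9.4f} ohms")

# prints: 60 Hz, 3.7699 ohms; 120 Hz, 7.5398 ohms; 2500 Hz, 157.0796 ohms
```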
In the reactance equation, the term “2πf” (everything on the right-hand side except the L) has a special meaning unto itself. It is the number of radians per second that the alternating current is “rotating” at, if you imagine one cycle of AC to represent a full circle’s rotation. A radian is a unit of angular measurement: there are 2π radians in one full circle, just as there are 360o in a full circle. If the alternator producing the AC is a double-pole unit, it will produce one cycle for every full turn of shaft rotation, which is every 2π radians, or 360o. If this constant of 2π is multiplied by frequency in Hertz (cycles per second), the result will be a figure in radians per second, known as the angular velocity of the AC system.
Angular Velocity in AC Systems
Angular velocity may be represented by the expression 2πf, or it may be represented by its own symbol, the lower-case Greek letter Omega, which appears similar to our Roman lower-case “w”: ω. Thus, the reactance formula XL = 2πfL could also be written as XL = ωL.
It must be understood that this “angular velocity” is an expression of how rapidly the AC waveforms are cycling, a full cycle being equal to 2π radians. It is not necessarily representative of the actual shaft speed of the alternator producing the AC. If the alternator has more than two poles, the angular velocity will be a multiple of the shaft speed. For this reason, ω is sometimes expressed in units of electrical radians per second rather than (plain) radians per second, so as to distinguish it from mechanical motion.
Any way we express the angular velocity of the system, it is apparent that it is directly proportional to reactance in an inductor. As the frequency (or alternator shaft speed) is increased in an AC system, an inductor will offer greater opposition to the passage of current, and vice versa. Alternating current in a simple inductive circuit is equal to the voltage (in volts) divided by the inductive reactance (in ohms), just as either alternating or direct current in a simple resistive circuit is equal to the voltage (in volts) divided by the resistance (in ohms). An example circuit is shown here: (Figure below)
Inductive reactance
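The figure’s component values are not reproduced in this text, so purely as an illustration assume the same 10 mH inductor driven by a 10 V, 60 Hz source:

```python
import math

E = 10                      # assumed source voltage, volts
L = 10e-3                   # 10 mH inductor
f = 60                      # hertz
XL = 2 * math.pi * f * L    # 3.7699 ohms of inductive reactance
I = E / XL                  # 2.6526 amps of current
print(f"XL = {XL:.4f} ohms, I = {I:.4f} A")
```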
Phase Angles
However, we need to keep in mind that voltage and current are not in phase here. As was shown earlier, the voltage has a phase shift of +90o with respect to the current. (Figure below) If we represent these phase angles of voltage and current mathematically in the form of complex numbers, we find that an inductor’s opposition to current has a phase angle, too:
Current lags voltage by 90o in an inductor.
Mathematically, we say that the phase angle of an inductor’s opposition to current is 90o, meaning that an inductor’s opposition to current is a positive imaginary quantity. This phase angle of reactive opposition to current becomes critically important in circuit analysis, especially for complex AC circuits where reactance and resistance interact. It will prove beneficial to represent any component’s opposition to current in terms of complex numbers rather than scalar quantities of resistance and reactance.
REVIEW
• Inductive reactance is the opposition that an inductor offers to alternating current due to its phase-shifted storage and release of energy in its magnetic field. Reactance is symbolized by the capital letter “X” and is measured in ohms just like resistance (R).
• Inductive reactance can be calculated using this formula: XL = 2πfL
• The angular velocity of an AC circuit is another way of expressing its frequency, in units of electrical radians per second instead of cycles per second. It is symbolized by the lower-case Greek letter “omega,” or ω.
• Inductive reactance increases with increasing frequency. In other words, the higher the frequency, the more it opposes the AC flow of electrons.
In the previous section, we explored what would happen in simple resistor-only and inductor-only AC circuits. Now we will mix the two components together in series form and investigate the effects.
Series Resistor Inductor Circuit Example
Take this circuit as an example to work with: (Figure below)
Series resistor inductor circuit: Current lags applied voltage by 0o to 90o.
The resistor will offer 5 Ω of resistance to AC current regardless of frequency, while the inductor will offer 3.7699 Ω of reactance to AC current at 60 Hz. Because the resistor’s resistance is a real number (5 Ω ∠ 0o, or 5 + j0 Ω), and the inductor’s reactance is an imaginary number (3.7699 Ω ∠ 90o, or 0 + j3.7699 Ω), the combined effect of the two components will be an opposition to current equal to the complex sum of the two numbers. This combined opposition will be a vector combination of resistance and reactance. In order to express this opposition succinctly, we need a more comprehensive term for opposition to current than either resistance or reactance alone. This term is called impedance, its symbol is Z, and it is also expressed in the unit of ohms, just like resistance and reactance. In the above example, the total circuit impedance is:
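Ztotal = ZR + ZL = (5 Ω ∠ 0o) + (3.7699 Ω ∠ 90o) = 5 + j3.7699 Ω, or 6.2620 Ω ∠ 37.016o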
Resistance in Ohm’s Law
Impedance is related to voltage and current just as you might expect, in a manner similar to resistance in Ohm’s Law:
In fact, this is a far more comprehensive form of Ohm’s Law than what was taught in DC electronics (E=IR), just as impedance is a far more comprehensive expression of opposition to the flow of electrons than resistance is. Any resistance and any reactance, separately or in combination (series/parallel), can be and should be represented as a single impedance in an AC circuit.
To calculate current in the above circuit, we first need to give a phase angle reference for the voltage source, which is generally assumed to be zero. (The phase angles of resistive and inductive impedance are always 0o and +90o, respectively, regardless of the given phase angles for voltage or current).
As with the purely inductive circuit, the current wave lags behind the voltage wave (of the source), although this time the lag is not as great: only 37.016o as opposed to a full 90o as was the case in the purely inductive circuit. (Figure below)
Current lags voltage in a series L-R circuit.

For the resistor and the inductor, the phase relationships between voltage and current haven’t changed. Voltage across the resistor is in phase (0o shift) with the current through it; and the voltage across the inductor is +90o out of phase with the current going through it. We can verify this mathematically:
The voltage across the resistor has the exact same phase angle as the current through it, telling us that E and I are in phase (for the resistor only).
The voltage across the inductor has a phase angle of 52.984o, while the current through the inductor has a phase angle of -37.016o, a difference of exactly 90o between the two. This tells us that E and I are still 90o out of phase (for the inductor only).
Use the Kirchhoff’s Voltage Law
We can also mathematically prove that these complex values add together to make the total voltage, just as Kirchhoff’s Voltage Law would predict:
Let’s check the validity of our calculations with SPICE: (Figure below)
Spice circuit: R-L.
Note that just as with DC circuits, SPICE outputs current figures as though they were negative (180o out of phase) with the supply voltage. Instead of a phase angle of -37.016o, we get a current phase angle of 143o (-37o + 180o). This is merely an idiosyncrasy of SPICE and does not represent anything significant in the circuit simulation itself. Note how both the resistor and inductor voltage phase readings match our calculations (-37.02o and 52.98o, respectively), just as we expected them to.
With all these figures to keep track of for even such a simple circuit as this, it would be beneficial for us to use the “table” method. Applying a table to this simple series resistor-inductor circuit would proceed as such. First, draw up a table for E/I/Z figures and insert all component values in these terms (in other words, don’t insert actual resistance or inductance values in Ohms and Henrys, respectively, into the table; rather, convert them into complex figures of impedance and write those in):
Although it isn’t necessary, I find it helpful to write both the rectangular and polar forms of each quantity in the table. If you are using a calculator that has the ability to perform complex arithmetic without the need for conversion between rectangular and polar forms, then this extra documentation is completely unnecessary. However, if you are forced to perform complex arithmetic “longhand” (addition and subtraction in rectangular form, and multiplication and division in polar form), writing each quantity in both forms will be useful indeed.
Now that our “given” figures are inserted into their respective locations in the table, we can proceed just as with DC: determine the total impedance from the individual impedances. Since this is a series circuit, we know that opposition to electron flow (resistance or impedance) adds to form the total opposition:
Now that we know total voltage and total impedance, we can apply Ohm’s Law (I=E/Z) to determine total current:
Just as with DC, the total current in a series AC circuit is shared equally by all components. This is still true because in a series circuit there is only a single path for electrons to flow, therefore the rate of their flow must be uniform throughout. Consequently, we can transfer the figures for current into the columns for the resistor and inductor alike:
And with that, our table is complete. The exact same rules we applied in the analysis of DC circuits apply to AC circuits as well, with the caveat that all quantities must be represented and calculated in complex rather than scalar form. So long as phase shift is properly represented in our calculations, there is no fundamental difference in how we approach basic AC circuit analysis versus DC.
Now is a good time to review the relationship between these calculated figures and readings given by actual instrument measurements of voltage and current. The figures here that directly relate to real-life measurements are those in polar notation, not rectangular! In other words, if you were to connect a voltmeter across the resistor in this circuit, it would indicate 7.9847 volts, not 6.3756 (real rectangular) or 4.8071 (imaginary rectangular) volts. To describe this in graphical terms, measurement instruments simply tell you how long the vector is for that particular quantity (voltage or current).
Rectangular notation, while convenient for arithmetical addition and subtraction, is a more abstract form of notation than polar in relation to real-world measurements. As I stated before, I will indicate both polar and rectangular forms of each quantity in my AC circuit tables simply for convenience of mathematical calculation. This is not absolutely necessary, but may be helpful for those following along without the benefit of an advanced calculator. If we were to restrict ourselves to the use of only one form of notation, the best choice would be polar, because it is the only one that can be directly correlated to real measurements.
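As a numeric cross-check of the table method, here is a short Python sketch (illustrative only). The 10 V ∠ 0o source value is assumed; it is consistent with the 7.9847 volt resistor drop quoted above:

```python
import cmath, math

E = cmath.rect(10, 0)        # assumed 10 V source at 0 degrees
ZR = complex(5, 0)           # 5 ohm resistor
ZL = complex(0, 3.7699)      # 10 mH inductor at 60 Hz

Z = ZR + ZL                  # series impedances add
I = E / Z                    # Ohm's Law for AC circuits
ER, EL = I * ZR, I * ZL      # individual voltage drops

def show(z, unit):
    mag, ang = cmath.polar(z)
    return f"{mag:.4f} {unit} at {math.degrees(ang):.3f} degrees"

print("Z :", show(Z, "ohms"))   # 6.2620 ohms at 37.016 degrees
print("I :", show(I, "A"))      # 1.5969 A at -37.016 degrees
print("ER:", show(ER, "V"))     # 7.9847 V at -37.016 degrees
print("EL:", show(EL, "V"))     # 6.0203 V at 52.984 degrees
```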
Impedance (\(Z\)) of a series R-L circuit may be calculated, given the resistance (\(R\)) and the inductive reactance (XL). Since E=IR, E=IXL, and E=IZ, resistance, reactance, and impedance are proportional to voltage, respectively. Thus, the voltage phasor diagram can be replaced by a similar impedance diagram. (Figure below)
Series: R-L circuit Impedance phasor diagram.
Example \(1\):
Given: A 40 Ω resistor in series with a 79.58 millihenry inductor. Find the impedance at 60 hertz.
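Solution

\begin{align*} X_L &= 2\pi f L = 2\pi(60\ \mathrm{Hz})(79.58\ \mathrm{mH}) \approx 30\ \Omega \\[4pt] Z &= R + jX_L = 40 + j30\ \Omega = 50\ \Omega \angle 36.87^o \end{align*}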
REVIEW
• Impedance is the total measure of opposition to electric current and is the complex (vector) sum of (“real”) resistance and (“imaginary”) reactance. It is symbolized by the letter “Z” and measured in ohms, just like resistance (R) and reactance (X).
• Impedances (Z) are managed just like resistances (R) in series circuit analysis: series impedances add to form the total impedance. Just be sure to perform all calculations in complex (not scalar) form! ZTotal = Z1 + Z2 + . . . Zn
• A purely resistive impedance will always have a phase angle of exactly 0o (ZR = R Ω ∠ 0o).
• A purely inductive impedance will always have a phase angle of exactly +90o (ZL = XL Ω ∠ 90o).
• Ohm’s Law for AC circuits: E = IZ ; I = E/Z ; Z = E/I
• When resistors and inductors are mixed together in circuits, the total impedance will have a phase angle somewhere between 0o and +90o. The circuit current will have a phase angle somewhere between 0o and -90o.
• Series AC circuits exhibit the same fundamental properties as series DC circuits: current is uniform throughout the circuit, voltage drops add to form the total voltage, and impedances add to form the total impedance.
Let’s take the same components for our series example circuit and connect them in parallel: (Figure below)
Parallel R-L circuit.
Because the power source has the same frequency as the series example circuit, and the resistor and inductor both have the same values of resistance and inductance, respectively, they must also have the same values of impedance. So, we can begin our analysis table with the same “given” values:
The only difference in our analysis technique this time is that we will apply the rules of parallel circuits instead of the rules for series circuits. The approach is fundamentally the same as for DC. We know that voltage is shared uniformly by all components in a parallel circuit, so we can transfer the figure of total voltage (10 volts ∠ 0o) to all components columns:
Now we can apply Ohm’s Law (I=E/Z) vertically to two columns of the table, calculating current through the resistor and current through the inductor:
Just as with DC circuits, branch currents in a parallel AC circuit add to form the total current (Kirchhoff’s Current Law still holds true for AC as it did for DC):
Finally, total impedance can be calculated by using Ohm’s Law (Z=E/I) vertically in the “Total” column. Incidentally, parallel impedance can also be calculated by using a reciprocal formula identical to that used in calculating parallel resistances.
The only problem with using this formula is that it typically involves a lot of calculator keystrokes to carry out. And if you’re determined to run through a formula like this “longhand,” be prepared for a very large amount of work! But, just as with DC circuits, we often have multiple options in calculating the quantities in our analysis tables, and this example is no different. No matter which way you calculate total impedance (Ohm’s Law or the reciprocal formula), you will arrive at the same figure:
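A compact numeric sketch of this parallel analysis (illustrative only), using the given 10 V ∠ 0o source, 5 Ω of resistance, and 3.7699 Ω of inductive reactance, shows that both routes to the total impedance agree:

```python
import cmath, math

E = cmath.rect(10, 0)            # 10 V source at 0 degrees (given)
ZR = complex(5, 0)               # 5 ohm resistor
ZL = complex(0, 3.7699)          # inductive reactance at 60 Hz

IR = E / ZR                      # 2.0000 A at 0 degrees
IL = E / ZL                      # 2.6526 A at -90 degrees
Itotal = IR + IL                 # branch currents add (Kirchhoff's Current Law)

Z_ohms_law = E / Itotal          # Ohm's Law applied to the totals
Z_reciprocal = 1 / (1/ZR + 1/ZL) # reciprocal formula

for name, z in (("Itotal", Itotal), ("Z (Ohm's Law)", Z_ohms_law),
                ("Z (reciprocal)", Z_reciprocal)):
    mag, ang = cmath.polar(z)
    print(f"{name}: {mag:.4f} at {math.degrees(ang):.3f} degrees")

# Itotal:          3.3221 A at -52.984 degrees
# Z (Ohm's Law):   3.0102 ohms at 52.984 degrees
# Z (reciprocal):  3.0102 ohms at 52.984 degrees
```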
Review
• Impedances (Z) are managed just like resistances (R) in parallel circuit analysis: parallel impedances diminish to form the total impedance, using the reciprocal formula. Just be sure to perform all calculations in complex (not scalar) form! ZTotal = 1/(1/Z1 + 1/Z2 + . . . 1/Zn)
• Ohm’s Law for AC circuits: E = IZ ; I = E/Z ; Z = E/I
• When resistors and inductors are mixed together in parallel circuits (just as in series circuits), the total impedance will have a phase angle somewhere between 0o and +90o. The circuit current will have a phase angle somewhere between 0o and -90o.
• Parallel AC circuits exhibit the same fundamental properties as parallel DC circuits: voltage is uniform throughout the circuit, branch currents add to form the total current, and impedances diminish (through the reciprocal formula) to form the total impedance.
3.05: Inductor Quirks
In an ideal case, an inductor acts as a purely reactive device. That is, its opposition to AC current is strictly based on inductive reaction to changes in current, and not electron friction as is the case with resistive components. However, inductors are not quite so pure in their reactive behavior. To begin with, they’re made of wire, and we know that all wire possesses some measurable amount of resistance (unless it’s superconducting wire). This built-in resistance acts as though it were connected in series with the perfect inductance of the coil, like this: (Figure below)
Inductor Equivalent circuit of a real inductor.
Consequently, the impedance of any real inductor will always be a complex combination of resistance and inductive reactance.
Compounding this problem is something called the skin effect, which is AC’s tendency to flow through the outer areas of a conductor’s cross-section rather than through the middle. When electrons flow in a single direction (DC), they use the entire cross-sectional area of the conductor to move. Electrons switching directions of flow, on the other hand, tend to avoid travel through the very middle of a conductor, limiting the effective cross-sectional area available. The skin effect becomes more pronounced as frequency increases.
Also, the alternating magnetic field of an inductor energized with AC may radiate off into space as part of an electromagnetic wave, especially if the AC is of high frequency. This radiated energy does not return to the inductor, and so it manifests itself as resistance (power dissipation) in the circuit.
Added to the resistive losses of wire and radiation, there are other effects at work in iron-core inductors which manifest themselves as additional resistance between the leads. When an inductor is energized with AC, the alternating magnetic fields produced tend to induce circulating currents within the iron core known as eddy currents. These electric currents in the iron core have to overcome the electrical resistance offered by the iron, which is not as good a conductor as copper. Eddy current losses are primarily counteracted by dividing the iron core up into many thin sheets (laminations), each one separated from the other by a thin layer of electrically insulating varnish. With the cross-section of the core divided up into many electrically isolated sections, current cannot circulate within that cross-sectional area and there will be no (or very little) resistive losses from that effect.
As you might have expected, eddy current losses in metallic inductor cores manifest themselves in the form of heat. The effect is more pronounced at higher frequencies, and can be so extreme that it is sometimes exploited in manufacturing processes to heat metal objects! In fact, this process of “inductive heating” is often used in high-purity metal foundry operations, where metallic elements and alloys must be heated in a vacuum environment to avoid contamination by air, and thus where standard combustion heating technology would be useless. It is a “non-contact” technology, the heated substance not having to touch the coil(s) producing the magnetic field.
In high-frequency service, eddy currents can even develop within the cross-section of the wire itself, contributing to additional resistive effects. To counteract this tendency, special wire made of very fine, individually insulated strands called Litz wire (short for Litzendraht) can be used. The insulation separating strands from each other prevents eddy currents from circulating through the whole wire’s cross-sectional area.
Additionally, any magnetic hysteresis that needs to be overcome with every reversal of the inductor’s magnetic field constitutes an expenditure of energy that manifests itself as resistance in the circuit. Some core materials (such as ferrite) are particularly notorious for their hysteretic effect. Counteracting this effect is best done by means of proper core material selection and limits on the peak magnetic field intensity generated with each cycle.
Altogether, the stray resistive properties of a real inductor (wire resistance, radiation losses, eddy currents, and hysteresis losses) are expressed under the single term of “effective resistance:” (Figure below)
Equivalent circuit of a real inductor with skin-effect, radiation, eddy current, and hysteresis losses.
It is worthy to note that the skin effect and radiation losses apply just as well to straight lengths of wire in an AC circuit as they do to a coiled wire. Usually their combined effect is too small to notice, but at radio frequencies they can be quite large. A radio transmitter antenna, for example, is designed with the express purpose of dissipating the greatest amount of energy in the form of electromagnetic radiation.
Effective resistance in an inductor can be a serious consideration for the AC circuit designer. To help quantify the relative amount of effective resistance in an inductor, another value exists called the Q factor, or “quality factor” which is calculated as follows:
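\[Q = \dfrac{X_L}{R_{\mathrm{effective}}}\]

where XL is the inductive reactance of the coil and Reffective is its effective (series) resistance, both taken at the frequency of interest.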
The symbol “Q” has nothing to do with electric charge (coulombs), which tends to be confusing. For some reason, the Powers That Be decided to use the same letter of the alphabet to denote a totally different quantity.
The higher the value for “Q,” the “purer” the inductor is. Because it’s so easy to add additional resistance if needed, a high-Q inductor is better than a low-Q inductor for design purposes. An ideal inductor would have a Q of infinity, with zero effective resistance.
Because inductive reactance (X) varies with frequency, so will Q. However, since the resistive effects of inductors (wire skin effect, radiation losses, eddy current, and hysteresis) also vary with frequency, Q does not vary proportionally with reactance. In order for a Q value to have precise meaning, it must be specified at a particular test frequency.
3.06: What Is the Skin Effect? The Skin Depth of Copper in Electrical Engineering
What Is the Skin Effect?
As previously mentioned, the skin effect is where alternating current tends to avoid travel through the center of a solid conductor, limiting itself to conduction near the surface. This effectively limits the cross-sectional conductor area available to carry alternating electron flow, increasing the resistance of that conductor above what it would normally be for direct current: (Figure below)
Skin effect: skin depth decreases with increasing frequency.
The electrical resistance of the conductor with all its cross-sectional area in use is known as the “DC resistance.” The “AC resistance” of the same conductor refers to a higher figure resulting from the skin effect. As you can see, at high frequencies the AC current avoids travel through most of the conductor’s cross-sectional area. For the purpose of conducting current, the wire might as well be hollow!
Hollow Conductors in RF Applications
In some radio applications (antennas, most notably) this effect is exploited. Since radio-frequency (“RF”) AC currents wouldn’t travel through the middle of a conductor anyway, why not just use hollow metal rods instead of solid metal wires and save both weight and cost? (Figure below) Most antenna structures and RF power conductors are made of hollow metal tubes for this reason.
In the following photograph, you can see some large inductors used in a 50 kW radio transmitting circuit. The inductors are hollow copper tubes coated with silver, for excellent conductivity at the “skin” of the tube:
High power inductors formed from hollow tubes.
How Wire Gauge Affects Frequency and Effective Resistance
The degree to which frequency affects the effective resistance of a solid wire conductor is impacted by the gauge of that wire. As a rule, large-gauge wires exhibit a more pronounced skin effect (change in resistance from DC) than small-gauge wires at any given frequency. The equation for approximating skin effect at high frequencies (greater than 1 MHz) is as follows:
$R_{A C}=\left(R_{D C}\right)(k) \sqrt{f} \label{1}$
where
• $R_{AC}$ is the AC resistance at a given frequency ($f$)
• $R_{DC}$ is the resistance at zero frequency (e.g., DC)
• $k$ is the wire-gauge factor (see table below)
• $f$ is the frequency of the AC in MHz
The table below gives approximate values of “k” factor for various round wire sizes.
Table 1: “k” factor for various AWG wire sizes.
Example $1$
What is the AC (effective) resistance of a length of number 10-gauge wire with a DC end-to-end resistance of 25 Ω at a frequency of 10 MHz?
Solution
This is a direct application of Equation \ref{1}:
\begin{align*} R_{AC} &=\left(R_{DC}\right)(k) \sqrt{f} \\[4pt] &=(25\ \Omega)(27.6) \sqrt{10} \\[4pt] &=2.182\ \mathrm{k}\Omega \end{align*}
Hence, this wire would have an AC (effective) resistance of 2.182 kΩ.
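For repeated use, the same calculation can be packaged as a small Python helper (illustrative only). The k = 27.6 value is the 10-gauge figure quoted in the example; other gauges would need their own k factors from the table:

```python
import math

def ac_resistance(r_dc, k, f_mhz):
    """Approximate AC resistance from DC resistance, the wire-gauge
    factor k, and frequency in MHz (valid above roughly 1 MHz)."""
    return r_dc * k * math.sqrt(f_mhz)

# Example 1 repeated: 10-gauge wire (k = 27.6), 25 ohms DC, 10 MHz
print(ac_resistance(25, 27.6, 10))   # about 2182 ohms, i.e. 2.182 kilo-ohms
```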
Pure resistive AC circuit: voltage and current are in phase.
If we were to plot the current and voltage for a very simple AC circuit consisting of a source and a resistor, (Figure above) it would look something like this: (Figure below)
Voltage and current “in phase” for resistive circuit.
Because the resistor allows an amount of current directly proportional to the voltage across it at all periods of time, the waveform for the current is exactly in phase with the waveform for the voltage. We can look at any point in time along the horizontal axis of the plot and compare those values of current and voltage with each other (any “snapshot” look at the values of a wave are referred to as instantaneous values, meaning the values at that instant in time). When the instantaneous value for voltage is zero, the instantaneous current through the resistor is also zero. Likewise, at the moment in time where the voltage across the resistor is at its positive peak, the current through the resistor is also at its positive peak, and so on. At any given point in time along the waves, Ohm’s Law holds true for the instantaneous values of voltage and current.
We can also calculate the power dissipated by this resistor, and plot those values on the same graph: (Figure below)
Instantaneous AC power in a resistive circuit is always positive.
4.02: AC Capacitor Circuits
Capacitors Vs. Resistors
Capacitors do not behave the same as resistors. Whereas resistors allow a flow of electrons through them directly proportional to the voltage drop, capacitors oppose changes in voltage by drawing or supplying current as they charge or discharge to the new voltage level. The flow of electrons “through” a capacitor is directly proportional to the rate of change of voltage across the capacitor. This opposition to voltage change is another form of reactance, but one that is precisely opposite to the kind exhibited by inductors.
Capacitor Circuit Characteristics
Expressed mathematically, the relationship between the current “through” the capacitor and rate of voltage change across the capacitor is as such:
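\[i = C \dfrac{de}{dt}\]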
The expression de/dt is one from calculus, meaning the rate of change of instantaneous voltage (e) over time, in volts per second. The capacitance (C) is in Farads, and the instantaneous current (i), of course, is in amps. Sometimes you will find the rate of instantaneous voltage change over time expressed as dv/dt instead of de/dt: using the lower-case letter “v” instead of “e” to represent voltage, but it means the exact same thing. To show what happens with alternating current, let’s analyze a simple capacitor circuit: (Figure below)
Pure capacitive circuit: capacitor voltage lags capacitor current by 90o
If we were to plot the current and voltage for this very simple circuit, it would look something like this: (Figure below)
Pure capacitive circuit waveforms.
Remember, the current through a capacitor is a reaction against the change in voltage across it. Therefore, the instantaneous current is zero whenever the instantaneous voltage is at a peak (zero change, or level slope, on the voltage sine wave), and the instantaneous current is at a peak wherever the instantaneous voltage is at maximum change (the points of steepest slope on the voltage wave, where it crosses the zero line). This results in a voltage wave that is -90o out of phase with the current wave. Looking at the graph, the current wave seems to have a “head start” on the voltage wave; the current “leads” the voltage, and the voltage “lags” behind the current. (Figure below)
Voltage lags current by 90o in a pure capacitive circuit.
As you might have guessed, the same unusual power wave that we saw with the simple inductor circuit is present in the simple capacitor circuit, too: (Figure below)
In a pure capacitive circuit, the instantaneous power may be positive or negative.
As with the simple inductor circuit, the 90-degree phase shift between voltage and current results in a power wave that alternates equally between positive and negative. This means that a capacitor does not dissipate power as it reacts against changes in voltage; it merely absorbs and releases power, alternately.
A Capacitor’s Reactance
A capacitor’s opposition to change in voltage translates to an opposition to alternating voltage in general, which is by definition always changing in instantaneous magnitude and direction. For any given magnitude of AC voltage at a given frequency, a capacitor of given size will “conduct” a certain magnitude of AC current. Just as the current through a resistor is a function of the voltage across the resistor and the resistance offered by the resistor, the AC current through a capacitor is a function of the AC voltage across it, and the reactance offered by the capacitor. As with inductors, the reactance of a capacitor is expressed in ohms and symbolized by the letter X (or XC to be more specific).
Since capacitors “conduct” current in proportion to the rate of voltage change, they will pass more current for faster-changing voltages (as they charge and discharge to the same voltage peaks in less time), and less current for slower-changing voltages. What this means is that reactance in ohms for any capacitor is inversely proportional to the frequency of the alternating current. (Table below)
Reactance of a 100 uF capacitor:
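The table values are not reproduced in this text. Assuming the same three test frequencies used earlier for the 10 mH inductor, a short Python sketch (illustrative only) gives:

```python
import math

C = 100e-6                                # 100 uF capacitor
for f in (60, 120, 2500):                 # same frequencies as the inductor table
    XC = 1 / (2 * math.pi * f * C)        # capacitive reactance in ohms
    print(f"{f:>5} Hz : {XC:8.4f} ohms")

# prints: 60 Hz, 26.5258 ohms; 120 Hz, 13.2629 ohms; 2500 Hz, 0.6366 ohms
```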
Please note that the relationship of capacitive reactance to frequency is exactly opposite from that of inductive reactance. Capacitive reactance (in ohms) decreases with increasing AC frequency. Conversely, inductive reactance (in ohms) increases with increasing AC frequency. Inductors oppose faster changing currents by producing greater voltage drops; capacitors oppose faster changing voltage drops by allowing greater currents.
As with inductors, the reactance equation’s 2πf term may be replaced by the lower-case Greek letter Omega (ω), which is referred to as the angular velocity of the AC circuit. Thus, the equation XC = 1/(2πfC) could also be written as XC = 1/(ωC), with ω cast in units of radians per second.
Alternating current in a simple capacitive circuit is equal to the voltage (in volts) divided by the capacitive reactance (in ohms), just as either alternating or direct current in a simple resistive circuit is equal to the voltage (in volts) divided by the resistance (in ohms). The following circuit illustrates this mathematical relationship by example: (Figure below)
Capacitive reactance.
However, we need to keep in mind that voltage and current are not in phase here. As was shown earlier, the current has a phase shift of +90o with respect to the voltage. If we represent these phase angles of voltage and current mathematically, we can calculate the phase angle of the capacitor’s reactive opposition to current.
Voltage lags current by 90o in a capacitor.
Mathematically, we say that the phase angle of a capacitor’s opposition to current is -90o, meaning that a capacitor’s opposition to current is a negative imaginary quantity. (Figure above) This phase angle of reactive opposition to current becomes critically important in circuit analysis, especially for complex AC circuits where reactance and resistance interact. It will prove beneficial to represent any component’s opposition to current in terms of complex numbers, and not just scalar quantities of resistance and reactance.
Review
• Capacitive reactance is the opposition that a capacitor offers to alternating current due to its phase-shifted storage and release of energy in its electric field. Reactance is symbolized by the capital letter “X” and is measured in ohms just like resistance (R).
• Capacitive reactance can be calculated using this formula: XC = 1/(2πfC)
• Capacitive reactance decreases with increasing frequency. In other words, the higher the frequency, the less it opposes (the more it “conducts”) the AC flow of electrons.
In the last section, we learned what would happen in simple resistor-only and capacitor-only AC circuits. Now we will combine the two components together in series form and investigate the effects. (Figure below)
Series capacitor circuit: voltage lags current by 0o to 90o.
The resistor will offer 5 Ω of resistance to AC current regardless of frequency, while the capacitor will offer 26.5258 Ω of reactance to AC current at 60 Hz. Because the resistor’s resistance is a real number (5 Ω ∠ 0o, or 5 + j0 Ω), and the capacitor’s reactance is an imaginary number (26.5258 Ω ∠ -90o, or 0 - j26.5258 Ω), the combined effect of the two components will be an opposition to current equal to the complex sum of the two numbers. The term for this complex opposition to current is impedance, its symbol is Z, and it is also expressed in the unit of ohms, just like resistance and reactance. In the above example, the total circuit impedance is:
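Ztotal = ZR + ZC = (5 Ω ∠ 0o) + (26.5258 Ω ∠ -90o) = 5 - j26.5258 Ω, or 26.993 Ω ∠ -79.325o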
Impedance is related to voltage and current just as you might expect, in a manner similar to resistance in Ohm’s Law:
In fact, this is a far more comprehensive form of Ohm’s Law than what was taught in DC electronics (E=IR), just as impedance is a far more comprehensive expression of opposition to the flow of electrons than simple resistance is. Any resistance and any reactance, separately or in combination (series/parallel), can be and should be represented as a single impedance.
To calculate current in the above circuit, we first need to give a phase angle reference for the voltage source, which is generally assumed to be zero. (The phase angles of resistive and capacitive impedance are always 0o and -90o, respectively, regardless of the given phase angles for voltage or current).
As with the purely capacitive circuit, the current wave is leading the voltage wave (of the source), although this time the difference is 79.325o instead of a full 90o. (Figure below)
Voltage lags current (current leads voltage) in a series R-C circuit.
As we learned in the AC inductance chapter, the “table” method of organizing circuit quantities is a very useful tool for AC analysis just as it is for DC analysis. Let’s place out known figures for this series circuit into a table and continue the analysis using this tool:
Current in a series circuit is shared equally by all components, so the figures placed in the “Total” column for current can be distributed to all other columns as well:
Continuing with our analysis, we can apply Ohm’s Law (E=IR) vertically to determine voltage across the resistor and capacitor:
Notice how the voltage across the resistor has the exact same phase angle as the current through it, telling us that E and I are in phase (for the resistor only). The voltage across the capacitor has a phase angle of -10.675o, exactly 90o less than the phase angle of the circuit current. This tells us that the capacitor’s voltage and current are still 90o out of phase with each other.
Let’s check our calculations with SPICE: (Figure below)
Spice circuit: R-C.
Once again, SPICE confusingly prints the current phase angle at a value equal to the real phase angle plus 180o (or minus 180o). However, it’s a simple matter to correct this figure and check to see if our work is correct. In this case, the -100.7o output by SPICE for current phase angle equates to a positive 79.3o, which does correspond to our previously calculated figure of 79.325o.
Again, it must be emphasized that the calculated figures corresponding to real-life voltage and current measurements are those in polar form, not rectangular form! For example, if we were to actually build this series resistor-capacitor circuit and measure voltage across the resistor, our voltmeter would indicate 1.8523 volts, not 343.11 millivolts (real rectangular) or 1.8203 volts (imaginary rectangular). Real instruments connected to real circuits provide indications corresponding to the vector length (magnitude) of the calculated figures. While the rectangular form of complex number notation is useful for performing addition and subtraction, it is a more abstract form of notation than polar, which alone has direct correspondence to true measurements.
Impedance (Z) of a series R-C circuit may be calculated, given the resistance (R) and the capacitive reactance (XC). Since E=IR, E=IXC, and E=IZ, the resistor, capacitor, and total voltages are proportional to R, XC, and Z, respectively, the series current being common to all three. Thus, the voltage phasor diagram can be replaced by a similar impedance diagram. (Figure below)
Series: R-C circuit Impedance phasor diagram.
Example:
Given: A 40 Ω resistor in series with an 88.42 microfarad capacitor. Find the impedance at 60 hertz.
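One possible worked solution, sketched in Python (the variable names are our own, added for illustration):

```python
import cmath, math

f = 60.0            # hertz
R = 40.0            # ohms
C = 88.42e-6        # farads

X_C = 1 / (2 * math.pi * f * C)      # capacitive reactance, about 30 ohms
Z = complex(R, -X_C)                 # series R-C impedance: R - jX_C

mag, ang = cmath.polar(Z)
print(f"X_C = {X_C:.2f} ohms")
print(f"Z   = {mag:.2f} ohms at {math.degrees(ang):.2f} deg")
# roughly: X_C = 30.00 ohms, Z = 50.00 ohms at -36.87 deg
```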
Review
• Impedance is the total measure of opposition to electric current and is the complex (vector) sum of (“real”) resistance and (“imaginary”) reactance.
• Impedances (Z) are managed just like resistances (R) in series circuit analysis: series impedances add to form the total impedance. Just be sure to perform all calculations in complex (not scalar) form! ZTotal = Z1 + Z2 + . . . Zn
• Please note that impedances always add in series, regardless of what type of components comprise the impedances. That is, resistive impedance, inductive impedance, and capacitive impedance are to be treated the same way mathematically.
• A purely resistive impedance will always have a phase angle of exactly 0o (ZR = R Ω ∠ 0o).
• A purely capacitive impedance will always have a phase angle of exactly -90o (ZC = XC Ω ∠ -90o).
• Ohm’s Law for AC circuits: E = IZ ; I = E/Z ; Z = E/I
• When resistors and capacitors are mixed together in circuits, the total impedance will have a phase angle somewhere between 0o and -90o.
Using the same value components in our series example circuit, we will connect them in parallel and see what happens: (Figure below)
Parallel R-C circuit.
Resistor and Capacitor in Parallel
Because the power source has the same frequency as the series example circuit, and the resistor and capacitor both have the same values of resistance and capacitance, respectively, they must also have the same values of impedance. So, we can begin our analysis table with the same “given” values:
This being a parallel circuit now, we know that voltage is shared equally by all components, so we can place the figure for total voltage (10 volts ∠ 0o) in all the columns:
Calculation Using Ohm’s Law
Now we can apply Ohm’s Law (I=E/Z) vertically to two columns in the table, calculating current through the resistor and current through the capacitor:
Just as with DC circuits, branch currents in a parallel AC circuit add up to form the total current (Kirchhoff’s Current Law again):
Finally, total impedance can be calculated by using Ohm’s Law (Z=E/I) vertically in the “Total” column. As we saw in the AC inductance chapter, parallel impedance can also be calculated by using a reciprocal formula identical to that used in calculating parallel resistances. It is noteworthy to mention that this parallel impedance rule holds true regardless of the kind of impedances placed in parallel. In other words, it doesn’t matter if we’re calculating a circuit composed of parallel resistors, parallel inductors, parallel capacitors, or some combination thereof: in the form of impedances (Z), all the terms are common and can be applied uniformly to the same formula. Once again, the parallel impedance formula looks like this:
The only drawback to using this equation is the significant amount of work required to work it out, especially without the assistance of a calculator capable of manipulating complex quantities. Regardless of how we calculate total impedance for our parallel circuit (either Ohm’s Law or the reciprocal formula), we will arrive at the same figure:
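Here is a brief Python sketch of both methods, using the 10 volt source and the same 5 Ω and 26.5258 Ω figures from the series example; within rounding, the two answers agree:

```python
import cmath, math

E = 10 + 0j
Z_R = 5 + 0j
Z_C = 0 - 26.5258j

# Method 1: branch currents add (Kirchhoff's Current Law), then Z = E / I_total
I_R = E / Z_R
I_C = E / Z_C
I_total = I_R + I_C
Z_from_current = E / I_total

# Method 2: reciprocal ("parallel impedance") formula
Z_reciprocal = 1 / (1 / Z_R + 1 / Z_C)

for name, Z in [("Z = E/I_total", Z_from_current), ("reciprocal   ", Z_reciprocal)]:
    mag, ang = cmath.polar(Z)
    print(f"{name}: {mag:.4f} ohms at {math.degrees(ang):.3f} deg")
# both print roughly 4.9135 ohms at -10.675 deg
```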
Review
• Impedances (Z) are managed just like resistances (R) in parallel circuit analysis: parallel impedances diminish to form the total impedance, using the reciprocal formula. Just be sure to perform all calculations in complex (not scalar) form! ZTotal = 1/(1/Z1 + 1/Z2 + . . . 1/Zn)
• Ohm’s Law for AC circuits: E = IZ ; I = E/Z ; Z = E/I
• When resistors and capacitors are mixed together in parallel circuits (just as in series circuits), the total impedance will have a phase angle somewhere between 0o and -90o. The circuit current will have a phase angle somewhere between 0o and +90o.
• Parallel AC circuits exhibit the same fundamental properties as parallel DC circuits: voltage is uniform throughout the circuit, branch currents add to form the total current, and impedances diminish (through the reciprocal formula) to form the total impedance.
4.05: Capacitor Quirks
As with inductors, the ideal capacitor is a purely reactive device, containing absolutely zero resistive (power dissipative) effects. In the real world, of course, nothing is so perfect. However, capacitors have the virtue of generally being purer reactive components than inductors. It is a lot easier to design and construct a capacitor with low internal series resistance than it is to do the same with an inductor. The practical result of this is that real capacitors typically have impedance phase angles more closely approaching 90o (actually, -90o) than inductors. Consequently, they will tend to dissipate less power than an equivalent inductor.
Capacitors also tend to be smaller and lighter weight than their equivalent inductor counterparts, and since their electric fields are almost totally contained between their plates (unlike inductors, whose magnetic fields naturally tend to extend beyond the dimensions of the core), they are less prone to transmitting or receiving electromagnetic “noise” to/from other components. For these reasons, circuit designers tend to favor capacitors over inductors wherever a design permits either alternative.
Capacitors with significant resistive effects are said to be lossy, in reference to their tendency to dissipate (“lose”) power like a resistor. The source of capacitor loss is usually the dielectric material rather than any wire resistance, as wire length in a capacitor is very minimal.
Dielectric materials tend to react to changing electric fields by producing heat. This heating effect represents a loss in power and is equivalent to resistance in the circuit. The effect is more pronounced at higher frequencies and in fact can be so extreme that it is sometimes exploited in manufacturing processes to heat insulating materials like plastic! The plastic object to be heated is placed between two metal plates, connected to a source of high-frequency AC voltage. Temperature is controlled by varying the voltage or frequency of the source, and the plates never have to contact the object being heated.
This effect is undesirable for capacitors where we expect the component to behave as a purely reactive circuit element. One of the ways to mitigate the effect of dielectric “loss” is to choose a dielectric material less susceptible to the effect. Not all dielectric materials are equally “lossy.” A relative scale of dielectric loss from least to greatest is given in Table below.
Dielectric loss
Dielectric resistivity manifests itself both as a series and a parallel resistance with the pure capacitance: (Figure below)
Real capacitor has both series and parallel resistance.
Fortunately, these stray resistances are usually of modest impact (low series resistance and high parallel resistance), much less significant than the stray resistances present in an average inductor.
Before we begin to explore the effects of resistors, inductors, and capacitors connected together in the same AC circuits, let’s briefly review some basic terms and facts.
Resistance is essentially friction against the motion of electrons. It is present in all conductors to some extent (except superconductors!), most notably in resistors. When alternating current goes through a resistance, a voltage drop is produced that is in-phase with the current. Resistance is mathematically symbolized by the letter “R” and is measured in the unit of ohms (Ω).
Reactance is essentially inertia against the motion of electrons. It is present anywhere electric or magnetic fields are developed in proportion to applied voltage or current, respectively; but most notably in capacitors and inductors. When alternating current goes through a pure reactance, a voltage drop is produced that is 90o out of phase with the current. Reactance is mathematically symbolized by the letter “X” and is measured in the unit of ohms (Ω).
Impedance is a comprehensive expression of any and all forms of opposition to electron flow, including both resistance and reactance. It is present in all circuits, and in all components. When alternating current goes through an impedance, a voltage drop is produced that is somewhere between 0o and 90o out of phase with the current. Impedance is mathematically symbolized by the letter “Z” and is measured in the unit of ohms (Ω), in complex form.
Perfect resistors (Figure below) possess resistance, but not reactance. Perfect inductors and perfect capacitors (Figure below) possess reactance but no resistance. All components possess impedance, and because of this universal quality, it makes sense to translate all component values (resistance, inductance, capacitance) into common terms of impedance as the first step in analyzing an AC circuit.
Perfect resistor, inductor, and capacitor.
The impedance phase angle for any component is the phase shift between voltage across that component and current through that component. For a perfect resistor, the voltage drop and current are always in phase with each other, and so the impedance angle of a resistor is said to be 0o. For a perfect inductor, voltage drop always leads current by 90o, and so an inductor’s impedance phase angle is said to be +90o. For a perfect capacitor, voltage drop always lags current by 90o, and so a capacitor’s impedance phase angle is said to be -90o.
Impedances in AC behave analogously to resistances in DC circuits: they add in series, and they diminish in parallel. A revised version of Ohm’s Law, based on impedance rather than resistance, looks like this:
5.02: Series R, L, and C
Let’s take the following example circuit and analyze it: (Figure below)
Example series R, L, and C circuit. The first step is to determine the reactances (in ohms) for the inductor and the capacitor.
The next step is to express all resistances and reactances in a mathematically common form: impedance. (Figure below) Remember that an inductive reactance translates into a positive imaginary impedance (or an impedance at +90o), while a capacitive reactance translates into a negative imaginary impedance (impedance at -90o). Resistance, of course, is still regarded as a purely “real” impedance (polar angle of 0o):
Example series R, L, and C circuit with component values replaced by impedances.
Now, with all quantities of opposition to electric current expressed in a common, complex number format (as impedances, and not as resistances or reactances), they can be handled in the same way as plain resistances in a DC circuit. This is an ideal time to draw up an analysis table for this circuit and insert all the “given” figures (total voltage, and the impedances of the resistor, inductor, and capacitor).
Unless otherwise specified, the source voltage will be our reference for phase shift, and so will be written at an angle of 0o. Remember that there is no such thing as an “absolute” angle of phase shift for a voltage or current, since it’s always a quantity relative to another waveform. Phase angles for impedance, however (like those of the resistor, inductor, and capacitor), are known absolutely, because the phase relationships between voltage and current at each component are absolutely defined.
Notice that I’m assuming a perfectly reactive inductor and capacitor, with impedance phase angles of exactly +90 and -90o, respectively. Although real components won’t be perfect in this regard, they should be fairly close. For simplicity, I’ll assume perfectly reactive inductors and capacitors from now on in my example calculations except where noted otherwise.
Since the above example circuit is a series circuit, we know that the total circuit impedance is equal to the sum of the individuals, so:
Inserting this figure for total impedance into our table:
We can now apply Ohm’s Law (I=E/R) vertically in the “Total” column to find total current for this series circuit:
Being a series circuit, current must be equal through all components. Thus, we can take the figure obtained for total current and distribute it to each of the other columns:
Now we’re prepared to apply Ohm’s Law (E=IZ) to each of the individual component columns in the table, to determine voltage drops:
Notice something strange here: although our supply voltage is only 120 volts, the voltage across the capacitor is 137.46 volts! How can this be? The answer lies in the interaction between the inductive and capacitive reactances. Expressed as impedances, we can see that the inductor opposes current in a manner precisely opposite that of the capacitor. Expressed in rectangular form, the inductor’s impedance has a positive imaginary term and the capacitor has a negative imaginary term. When these two contrary impedances are added (in series), they tend to cancel each other out! Although they’re still added together to produce a sum, that sum is actually less than either of the individual (capacitive or inductive) impedances alone. It is analogous to adding together a positive and a negative (scalar) number: the sum is a quantity less than either one’s individual absolute value.
If the total impedance in a series circuit with both inductive and capacitive elements is less than the impedance of either element separately, then the total current in that circuit must be greater than what it would be with only the inductive or only the capacitive elements there. With this abnormally high current through each of the components, voltages greater than the source voltage may be obtained across some of the individual components! Further consequences of inductors’ and capacitors’ opposite reactances in the same circuit will be explored in the next chapter.
Once you’ve mastered the technique of reducing all component values to impedances (Z), analyzing any AC circuit is only about as difficult as analyzing any DC circuit, except that the quantities dealt with are vector instead of scalar. With the exception of equations dealing with power (P), equations in AC circuits are the same as those in DC circuits, using impedances (Z) instead of resistances (R). Ohm’s Law (E=IZ) still holds true, and so do Kirchhoff’s Voltage and Current Laws.
To demonstrate Kirchhoff’s Voltage Law in an AC circuit, we can look at the answers we derived for component voltage drops in the last circuit. KVL tells us that the algebraic sum of the voltage drops across the resistor, inductor, and capacitor should equal the applied voltage from the source. Even though this may not look like it is true at first sight, a bit of complex number addition proves otherwise:
Aside from a bit of rounding error, the sum of these voltage drops does equal 120 volts. Performed on a calculator (preserving all digits), the answer you will receive should be exactly 120 + j0 volts.
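The same KVL check can be run numerically. The sketch below assumes component values read from the schematic (they are not restated in this passage): 250 Ω, 650 mH, and 1.5 µF on a 120 volt, 60 Hz source, chosen here because they reproduce the 137.46 volt capacitor drop quoted above:

```python
import math

f, E = 60.0, 120 + 0j
R, L, C = 250.0, 650e-3, 1.5e-6              # assumed figure values

Z_R = complex(R, 0)
Z_L = complex(0, 2 * math.pi * f * L)        # about +j245.04 ohms
Z_C = complex(0, -1 / (2 * math.pi * f * C)) # about -j1768.4 ohms

Z_total = Z_R + Z_L + Z_C
I = E / Z_total                              # series current

E_R, E_L, E_C = I * Z_R, I * Z_L, I * Z_C
print("E_C magnitude:", round(abs(E_C), 2), "V")  # ~137.46 V, above the 120 V source
print("KVL sum      :", E_R + E_L + E_C)          # ~ (120+0j) V, within rounding
```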
We can also use SPICE to verify our figures for this circuit: (Figure below)
Example series R, L, and C SPICE circuit.
The SPICE simulation shows our hand-calculated results to be accurate.
As you can see, there is little difference between AC circuit analysis and DC circuit analysis, except that all quantities of voltage, current, and resistance (actually, impedance) must be handled in complex rather than scalar form so as to account for phase angle. This is good, since it means all you’ve learned about DC electric circuits applies to what you’re learning here. The only exception to this consistency is the calculation of power, which is so unique that it deserves a chapter devoted to that subject alone.
Review
• Impedances of any kind add in series: ZTotal = Z1 + Z2 + . . . Zn
• Although impedances add in series, the total impedance for a circuit containing both inductance and capacitance may be less than one or more of the individual impedances, because series inductive and capacitive impedances tend to cancel each other out. This may lead to voltage drops across components exceeding the supply voltage!
• All rules and laws of DC circuits apply to AC circuits, so long as values are expressed in complex form rather than scalar. The only exception to this principle is the calculation of power, which is very different for AC.
We can take the same components from the series circuit and rearrange them into a parallel configuration for an easy example circuit: (Figure below)
Example R, L, and C parallel circuit.
Impedance in Parallel Components
The fact that these components are connected in parallel instead of series now has absolutely no effect on their individual impedances. So long as the power supply is the same frequency as before, the inductive and capacitive reactances will not have changed at all: (Figure below)
Example R, L, and C parallel circuit with impedances replacing component values.
With all component values expressed as impedances (Z), we can set up an analysis table and proceed as in the last example problem, except this time following the rules of parallel circuits instead of series:
Knowing that voltage is shared equally by all components in a parallel circuit, we can transfer the figure for total voltage to all component columns in the table:
Now, we can apply Ohm’s Law (I=E/Z) vertically in each column to determine current through each component:
Calculation of Total Current and Total Impedance
There are two strategies for calculating total current and total impedance. First, we could calculate total impedance from all the individual impedances in parallel (ZTotal = 1/(1/ZR + 1/ZL + 1/ZC)), and then calculate total current by dividing source voltage by total impedance (I=E/Z). However, working through the parallel impedance equation with complex numbers is no easy task, with all the reciprocations (1/Z). This is especially true if you’re unfortunate enough not to have a calculator that handles complex numbers and are forced to do it all by hand (reciprocate the individual impedances in polar form, then convert them all to rectangular form for addition, then convert back to polar form for the final inversion, then invert). The second way to calculate total current and total impedance is to add up all the branch currents to arrive at total current (total current in a parallel circuit—AC or DC—is equal to the sum of the branch currents), then use Ohm’s Law to determine total impedance from total voltage and total current (Z=E/I).
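A brief numeric sketch of both strategies follows, again assuming the schematic values of 250 Ω, 650 mH, and 1.5 µF on a 120 volt, 60 Hz source (these are taken from the figure, not from the text of this paragraph):

```python
import cmath, math

f, E = 60.0, 120 + 0j
Z_R = complex(250, 0)
Z_L = complex(0, 2 * math.pi * f * 650e-3)
Z_C = complex(0, -1 / (2 * math.pi * f * 1.5e-6))

# Strategy 1: reciprocal formula first, then I = E / Z_total
Z_total = 1 / (1 / Z_R + 1 / Z_L + 1 / Z_C)
I_from_Z = E / Z_total

# Strategy 2: add the branch currents, then Z = E / I_total
I_total = E / Z_R + E / Z_L + E / Z_C
Z_from_I = E / I_total

for label, z in [("Z (reciprocal)", Z_total), ("Z (E/I_total) ", Z_from_I)]:
    mag, ang = cmath.polar(z)
    print(f"{label}: {mag:.2f} ohms at {math.degrees(ang):.2f} deg")
# both approaches print roughly 187.8 ohms at +41.3 deg with these assumed values
```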
Either method, performed properly, will provide the correct answers. Let’s try analyzing this circuit with SPICE and see what happens: (Figure below)
Example parallel R, L, and C SPICE circuit. Battery symbols are “dummy” voltage sources for SPICE to use as current measurement points. All are set to 0 volts.
5.04: Series-parallel R, L, and C
Now that we’ve seen how series and parallel AC circuit analysis is not fundamentally different than DC circuit analysis, it should come as no surprise that series-parallel analysis would be the same as well, just using complex numbers instead of scalar to represent voltage, current, and impedance.
Take this series-parallel circuit for example: (Figure below)
Example series-parallel R, L, and C circuit.
The first order of business, as usual, is to determine values of impedance (Z) for all components based on the frequency of the AC power source. To do this, we need to first determine values of reactance (X) for all inductors and capacitors, then convert reactance (X) and resistance (R) figures into proper impedance (Z) form:
Now we can set up the initial values in our table:
Being a series-parallel combination circuit, we must reduce it to a total impedance in more than one step. The first step is to combine L and C2 as a series combination of impedances, by adding their impedances together. Then, that impedance will be combined in parallel with the impedance of the resistor, to arrive at another combination of impedances. Finally, that quantity will be added to the impedance of C1 to arrive at the total impedance.
In order that our table may follow all these steps, it will be necessary to add additional columns to it so that each step may be represented. Adding more columns horizontally to the table shown above would be impractical for formatting reasons, so I will place a new row of columns underneath, each column designated by its respective component combination:
Calculating these new (combination) impedances will require complex addition for series combinations, and the “reciprocal” formula for complex impedances in parallel. This time, there is no avoidance of the reciprocal formula: the required figures can be arrived at no other way!
Seeing as how our second table contains a column for “Total,” we can safely discard that column from the first table. This gives us one table with four columns and another table with three columns.
Now that we know the total impedance (818.34 Ω ∠ -58.371o) and the total voltage (120 volts ∠ 0o), we can apply Ohm’s Law (I=E/Z) vertically in the “Total” column to arrive at a figure for total current:
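As a numeric cross-check, the reduction steps can be sketched in Python. The component values below are assumptions read from the schematic (C1 = 4.7 µF, L = 650 mH, C2 = 1.5 µF, R = 470 Ω); they are used here because they reproduce the 818.34 Ω ∠ -58.371o total impedance quoted above:

```python
import cmath, math

f, E = 60.0, 120 + 0j
# Assumed figure values (not restated in the text); they reproduce the
# quoted total impedance of 818.34 ohms at -58.371 degrees.
C1, L, C2, R = 4.7e-6, 650e-3, 1.5e-6, 470.0

w = 2 * math.pi * f
Z_C1 = complex(0, -1 / (w * C1))
Z_L  = complex(0, w * L)
Z_C2 = complex(0, -1 / (w * C2))
Z_R  = complex(R, 0)

Z_L_C2   = Z_L + Z_C2                        # series combination L -- C2
Z_branch = 1 / (1 / Z_R + 1 / Z_L_C2)        # R in parallel with (L -- C2)
Z_total  = Z_C1 + Z_branch                   # C1 in series with the rest

I_total = E / Z_total
mag, ang = cmath.polar(Z_total)
print(f"Z_total = {mag:.2f} ohms at {math.degrees(ang):.3f} deg")
print(f"I_total = {abs(I_total) * 1000:.2f} mA at {math.degrees(cmath.phase(I_total)):.3f} deg")
```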
At this point we ask ourselves the question: are there any components or component combinations which share either the total voltage or the total current? In this case, both C1 and the parallel combination R//(L—C2) share the same (total) current, since the total impedance is composed of the two sets of impedances in series. Thus, we can transfer the figure for total current into both columns:
Now, we can calculate voltage drops across C1 and the series-parallel combination of R//(L—C2) using Ohm’s Law (E=IZ) vertically in those table columns:
A quick double-check of our work at this point would be to see whether or not the voltage drops across C1 and the series-parallel combination of R//(L—C2) indeed add up to the total. According to Kirchhoff’s Voltage Law, they should!
That last step was merely a precaution. In a problem with as many steps as this one has, there is much opportunity for error. Occasional cross-checks like that one can save a person a lot of work and unnecessary frustration by identifying problems prior to the final step of the problem.
After having solved for voltage drops across C1 and the combination R//(L—C2), we again ask ourselves the question: what other components share the same voltage or current? In this case, the resistor (R) and the combination of the inductor and the second capacitor (L—C2) share the same voltage, because those sets of impedances are in parallel with each other. Therefore, we can transfer the voltage figure just solved for into the columns for R and L—C2:
Now we’re all set for calculating current through the resistor and through the series combination L—C2. All we need to do is apply Ohm’s Law (I=E/Z) vertically in both of those columns:
Another quick double-check of our work at this point would be to see if the current figures for L—C2 and R add up to the total current. According to Kirchhoff’s Current Law, they should:
Since the L and C2 are connected in series, and since we know the current through their series combination impedance, we can distribute that current figure to the L and C2 columns following the rule of series circuits whereby series components share the same current:
With one last step (actually, two calculations), we can complete our analysis table for this circuit. With impedance and current figures in place for L and C2, all we have to do is apply Ohm’s Law (E=IZ) vertically in those two columns to calculate voltage drops.
Now, let’s turn to SPICE for a computer verification of our work:
Example series-parallel R, L, C SPICE circuit.
Each line of the SPICE output listing gives the voltage, voltage phase angle, current, and current phase angle for C1, L, C2, and R, in that order. As you can see, these figures do concur with our hand-calculated figures in the circuit analysis table.
As daunting a task as series-parallel AC circuit analysis may appear, it must be emphasized that there is nothing really new going on here besides the use of complex numbers. Ohm’s Law (in its new form of E=IZ) still holds true, as do the voltage and current Laws of Kirchhoff. While there is more potential for human error in carrying out the necessary complex number calculations, the basic principles and techniques of series-parallel circuit reduction are exactly the same.
Review
• Analysis of series-parallel AC circuits is much the same as series-parallel DC circuits. The only substantive difference is that all figures and calculations are in complex (not scalar) form.
• It is important to remember that before series-parallel reduction (simplification) can begin, you must determine the impedance (Z) of every resistor, inductor, and capacitor. That way, all component values will be expressed in common terms (Z) instead of an incompatible mix of resistance (R), inductance (L), and capacitance (C).
What is Conductance?
In the study of DC circuits, the student of electricity comes across a term meaning the opposite of resistance: conductance. It is a useful term when exploring the mathematical formula for parallel resistances: Rparallel = 1 / (1/R1 + 1/R2 + . . . 1/Rn). Unlike resistance, which diminishes as more parallel components are included in the circuit, conductance simply adds. Mathematically, conductance is the reciprocal of resistance, and each 1/R term in the “parallel resistance formula” is actually a conductance.
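As a quick illustration (the two resistor values here are invented for the example), parallel resistance can be computed either way:

```python
R1, R2 = 100.0, 300.0          # ohms (values chosen only for illustration)

G1, G2 = 1 / R1, 1 / R2        # conductances, in siemens
G_total = G1 + G2              # conductances simply add in parallel

R_parallel = 1 / G_total
print(R_parallel)              # 75.0 ohms, identical to 1 / (1/100 + 1/300)
```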
Whereas the term “resistance” denotes the amount of opposition to flowing electrons in a circuit, “conductance” represents the ease of which electrons may flow. Resistance is the measure of how much a circuit resists current, while conductance is the measure of how much a circuit conducts current. Conductance used to be measured in the unit of mhos, or “ohms” spelled backward. Now, the proper unit of measurement is Siemens. When symbolized in a mathematical formula, the proper letter to use for conductance is “G”.
Reactive components such as inductors and capacitors oppose the flow of electrons with respect to time, rather than with a constant, unchanging friction as resistors do. We call this time-based opposition, reactance, and like resistance, we also measure it in the unit of ohms.
What is Susceptance?
As conductance is the complement of resistance, there is also a complementary expression of reactance, called susceptance. Mathematically, it is equal to 1/X, the reciprocal of reactance. Like conductance, it used to be measured in the unit of mhos, but now is measured in Siemens. Its mathematical symbol is “B”, unfortunately the same symbol used to represent magnetic flux density.
Reactance vs. Susceptance
The terms “reactance” and “susceptance” have a certain linguistic logic to them, just like resistance and conductance. While reactance is the measure of how much a circuit reacts against change in current over time, susceptance is the measure of how much a circuit is susceptible to conducting a changing current.
If one were tasked with determining the total effect of several parallel-connected, pure reactances, one could convert each reactance (X) to a susceptance (B), then simply add the susceptances (BTotal = B1 + B2 + . . . Bn) rather than work through the reciprocal reactance formula Xparallel = 1/(1/X1 + 1/X2 + . . . 1/Xn). Like conductances (G), susceptances (B) add in parallel and diminish in series. Also like conductance, susceptance is a scalar quantity.
When resistive and reactive components are interconnected, their combined effects can no longer be analyzed with scalar quantities of resistance (R) and reactance (X). Likewise, figures of conductance (G) and susceptance (B) are most useful in circuits where the two types of opposition are not mixed, i.e. either a purely resistive (conductive) circuit, or a purely reactive (susceptive) circuit. In order to express and quantify the effects of mixed resistive and reactive components, we had to have a new term: impedance, measured in ohms and symbolized by the letter “Z”.
To be consistent, we need a complementary measure representing the reciprocal of impedance. The name for this measure is admittance. Admittance is measured in (guess what?) the unit of Siemens, and its symbol is “Y”. Like impedance, admittance is a complex quantity rather than scalar. Again, we see a certain logic to the naming of this new term: while impedance is a measure of how much alternating current is impeded in a circuit, admittance is a measure of how much current is admitted.
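A short calculation shows the reciprocal relationship, reusing the series R-C impedance figure from earlier in this chapter:

```python
import cmath, math

Z = 5 - 26.5258j               # series R-C impedance from earlier in this chapter (ohms)
Y = 1 / Z                      # admittance, in siemens

mag, ang = cmath.polar(Y)
print(f"Y = {mag:.5f} S at {math.degrees(ang):.3f} degrees")
# roughly 0.03705 S at +79.325 degrees: reciprocal magnitude, opposite sign of angle
```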
Given a scientific calculator capable of handling complex number arithmetic in both polar and rectangular forms, you may never have to work with figures of susceptance (B) or admittance (Y). Be aware, though, of their existence and their meanings.
5.06: R, L and C Summary
With the notable exception of calculations for power (P), all AC circuit calculations are based on the same general principles as calculations for DC circuits. The only significant difference is the fact that AC calculations use complex quantities while DC calculations use scalar quantities. Ohm’s Law, Kirchhoff’s Laws, and even the network theorems learned in DC still hold true for AC when voltage, current, and impedance are all expressed with complex numbers. The same troubleshooting strategies applied toward DC circuits also hold for AC, although AC can certainly be more difficult to work with due to phase angles which aren’t registered by a handheld multimeter.
Power is another subject altogether, and will be covered in its own chapter in this book. Because power in a reactive circuit is both absorbed and released—not just dissipated as it is with resistors—its mathematical handling requires a more direct application of trigonometry to solve.
When faced with analyzing an AC circuit, the first step in analysis is to convert all resistor, inductor, and capacitor component values into impedances (Z), based on the frequency of the power source. After that, proceed with the same steps and strategies learned for analyzing DC circuits, using the “new” form of Ohm’s Law: E=IZ ; I=E/Z ; and Z=E/I
Remember that only the calculated figures expressed in polar form apply directly to empirical measurements of voltage and current. Rectangular notation is merely a useful tool for us to add and subtract complex quantities together. Polar notation, where the magnitude (length of vector) directly relates to the magnitude of the voltage or current measured, and the angle directly relates to the phase shift in degrees, is the most practical way to express complex quantities for circuit analysis.
• 6.1: An Electric Pendulum
Capacitors store energy in the form of an electric field, and electrically manifest that stored energy as a potential: static voltage. Inductors store energy in the form of a magnetic field, and electrically manifest that stored energy as a kinetic motion of electrons: current. When these two types of reactive components are directly connected together, their complementary tendencies to store energy will produce an unusual result.
• 6.2: Simple Parallel (Tank Circuit) Resonance
A condition of resonance will be experienced in a tank circuit when the reactances of the capacitor and inductor are equal to each other. Because inductive reactance increases with increasing frequency and capacitive reactance decreases with increasing frequency, there will only be one frequency where these two reactances will be equal.
• 6.3: Simple Series Resonance
A similar effect happens in series inductive/capacitive circuits. When a state of resonance is reached (capacitive and inductive reactances equal), the two impedances cancel each other out and the total impedance drops to zero!
• 6.4: Applications of Resonance
So far, the phenomenon of resonance appears to be a useless curiosity, or at most a nuisance to be avoided (especially if series resonance makes for a short-circuit across our AC voltage source!). However, this is not the case. Resonance is a very valuable property of reactive AC circuits, employed in a variety of applications.
• 6.5: Resonance in Series-Parallel Circuits
In simple reactive circuits with little or no resistance, the effects of radically altered impedance will manifest at the resonance frequency. In a parallel (tank) LC circuit, this means infinite impedance at resonance. In a series LC circuit, it means zero impedance at resonance.
• 6.6: Q Factor and Bandwidth of a Resonant Circuit
• The Q, or quality, factor of a resonant circuit is a measure of the “goodness” or quality of a resonant circuit. A higher value for this figure of merit corresponds to a narrower bandwidth, which is desirable in many applications. More formally, Q is the ratio of power stored to power dissipated in the circuit reactance and resistance.
06: Resonance
Capacitors store energy in the form of an electric field, and electrically manifest that stored energy as a potential: static voltage. Inductors store energy in the form of a magnetic field, and electrically manifest that stored energy as a kinetic motion of electrons: current. Capacitors and inductors are flip-sides of the same reactive coin, storing and releasing energy in complementary modes. When these two types of reactive components are directly connected together, their complementary tendencies to store energy will produce an unusual result.
If either the capacitor or inductor starts out in a charged state, the two components will exchange energy between them, back and forth, creating their own AC voltage and current cycles. If we assume that both components are subjected to a sudden application of voltage (say, from a momentarily connected battery), the capacitor will very quickly charge and the inductor will oppose change in current, leaving the capacitor in the charged state and the inductor in the discharged state: (Figure below)
Capacitor charged: voltage at (+) peak, inductor discharged: zero current.
The capacitor will begin to discharge, its voltage decreasing. Meanwhile, the inductor will begin to build up a “charge” in the form of a magnetic field as current increases in the circuit: (Figure below)
Capacitor discharging: voltage decreasing, Inductor charging: current increasing.
The inductor, still charging, will keep electrons flowing in the circuit until the capacitor has been completely discharged, leaving zero voltage across it: (Figure below)
Capacitor fully discharged: zero voltage, inductor fully charged: maximum current.
The inductor will maintain current flow even with no voltage applied. In fact, it will generate a voltage (like a battery) in order to keep current in the same direction. The capacitor, being the recipient of this current, will begin to accumulate a charge in the opposite polarity as before: (Figure below)
Capacitor charging: voltage increasing (in opposite polarity), inductor discharging: current decreasing.

When the inductor is finally depleted of its energy reserve and the electrons come to a halt, the capacitor will have reached full (voltage) charge in the opposite polarity as when it started: (Figure below)
Capacitor fully charged: voltage at (-) peak, inductor fully discharged: zero current.
Now we’re at a condition very similar to where we started: the capacitor at full charge and zero current in the circuit. The capacitor, as before, will begin to discharge through the inductor, causing an increase in current (in the opposite direction as before) and a decrease in voltage as it depletes its own energy reserve: (Figure below)
Capacitor discharging: voltage decreasing, inductor charging: current increasing.
Eventually the capacitor will discharge to zero volts, leaving the inductor fully charged with full current through it: (Figure below)
Capacitor fully discharged: zero voltage, inductor fully charged: current at (-) peak.

The inductor, desiring to maintain current in the same direction, will act like a source again, generating a voltage like a battery to continue the flow. In doing so, the capacitor will begin to charge up and the current will decrease in magnitude: (Figure below)
Capacitor charging: voltage increasing, inductor discharging: current decreasing.
Eventually the capacitor will become fully charged again as the inductor expends all of its energy reserves trying to maintain current. The voltage will once again be at its positive peak and the current at zero. This completes one full cycle of the energy exchange between the capacitor and inductor: (Figure below)
Capacitor fully charged: voltage at (+) peak, inductor fully discharged: zero current.
This oscillation will continue with steadily decreasing amplitude due to power losses from stray resistances in the circuit, until the process stops altogether. Overall, this behavior is akin to that of a pendulum: as the pendulum mass swings back and forth, there is a transformation of energy taking place from kinetic (motion) to potential (height), in a similar fashion to the way energy is transferred in the capacitor/inductor circuit back and forth in the alternating forms of current (kinetic motion of electrons) and voltage (potential electric energy).
At the peak height of each swing of a pendulum, the mass briefly stops and switches directions. It is at this point that potential energy (height) is at a maximum and kinetic energy (motion) is at zero. As the mass swings back the other way, it passes quickly through a point where the string is pointed straight down. At this point, potential energy (height) is at zero and kinetic energy (motion) is at maximum. Like the circuit, a pendulum’s back-and-forth oscillation will continue with a steadily dampened amplitude, the result of air friction (resistance) dissipating energy. Also like the circuit, the pendulum’s position and velocity measurements trace two sine waves (90 degrees out of phase) over time: (Figure below)
Pendulum transfers energy between kinetic and potential energy as it swings low to high.
In physics, this kind of natural sine-wave oscillation for a mechanical system is called Simple Harmonic Motion (often abbreviated as “SHM”). The same underlying principles govern both the oscillation of a capacitor/inductor circuit and the action of a pendulum, hence the similarity in effect. It is an interesting property of any pendulum that its periodic time is governed by the length of the string holding the mass, and not the weight of the mass itself. That is why a pendulum will keep swinging at the same frequency as the oscillations decrease in amplitude. The oscillation rate is independent of the amount of energy stored in it.
The same is true for the capacitor/inductor circuit. The rate of oscillation is strictly dependent on the sizes of the capacitor and inductor, not on the amount of voltage (or current) at each respective peak in the waves. The ability for such a circuit to store energy in the form of oscillating voltage and current has earned it the name tank circuit. Its property of maintaining a single, natural frequency regardless of how much or little energy is actually being stored in it gives it special significance in electric circuit design.
However, this tendency to oscillate, or resonate, at a particular frequency is not limited to circuits exclusively designed for that purpose. In fact, nearly any AC circuit with a combination of capacitance and inductance (commonly called an “LC circuit”) will tend to manifest unusual effects when the AC power source frequency approaches that natural frequency. This is true regardless of the circuit’s intended purpose.
If the power supply frequency for a circuit exactly matches the natural frequency of the circuit’s LC combination, the circuit is said to be in a state of resonance. The unusual effects will reach maximum in this condition of resonance. For this reason, we need to be able to predict what the resonant frequency will be for various combinations of L and C, and be aware of what the effects of resonance are.
Review
• A capacitor and inductor directly connected together form something called a tank circuit, which oscillates (or resonates) at one particular frequency. At that frequency, energy is alternately shuffled between the capacitor and the inductor in the form of alternating voltage and current 90 degrees out of phase with each other.
• When the power supply frequency for an AC circuit exactly matches that circuit’s natural oscillation frequency as set by the L and C components, a condition of resonance will have been reached.
Resonance in a Tank Circuit
A condition of resonance will be experienced in a tank circuit (Figure below) when the reactances of the capacitor and inductor are equal to each other. Because inductive reactance increases with increasing frequency and capacitive reactance decreases with increasing frequency, there will only be one frequency where these two reactances will be equal.
Simple parallel resonant circuit (tank circuit).
In the above circuit, we have a 10 µF capacitor and a 100 mH inductor. Since we know the equations for determining the reactance of each at a given frequency, and we’re looking for that point where the two reactances are equal to each other, we can set the two reactance formulae equal to each other and solve for frequency algebraically:
So there we have it: a formula to tell us the resonant frequency of a tank circuit, given the values of inductance (L) in Henrys and capacitance (C) in Farads. Plugging in the values of L and C in our example circuit, we arrive at a resonant frequency of 159.155 Hz.
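The arithmetic is easily checked; here is a short sketch (added as a check, not part of the original text) using the 100 mH and 10 µF values just given:

```python
import math

L = 100e-3      # henrys
C = 10e-6       # farads

f_resonant = 1 / (2 * math.pi * math.sqrt(L * C))
print(round(f_resonant, 3))    # 159.155 Hz
```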
Calculating Individual Impedances
What happens at resonance is quite interesting. With capacitive and inductive reactances equal to each other, the total impedance increases to infinity, meaning that the tank circuit draws no current from the AC power source! We can calculate the individual impedances of the 10 µF capacitor and the 100 mH inductor and work through the parallel impedance formula to demonstrate this mathematically:
As you might have guessed, I chose these component values to give resonance impedances that were easy to work with (100 Ω even).
Parallel Impedance Formula
Now, we use the parallel impedance formula to see what happens to total \(Z\):
\[ Z_{parallel} = \dfrac{1}{\dfrac{1}{Z_L} + \dfrac{1}{Z_C}}\]
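Because evaluating this formula exactly at resonance divides by zero, a numeric sketch has to approach resonance from either side; doing so shows the parallel impedance growing without bound:

```python
import math

L, C = 100e-3, 10e-6                         # henrys, farads
f_r = 1 / (2 * math.pi * math.sqrt(L * C))   # 159.155 Hz

for f in (100.0, 150.0, f_r * 0.999, f_r * 1.001, 200.0):
    w = 2 * math.pi * f
    Z_L = complex(0, w * L)                  # inductive branch impedance
    Z_C = complex(0, -1 / (w * C))           # capacitive branch impedance
    Z_parallel = 1 / (1 / Z_L + 1 / Z_C)
    print(f"{f:10.3f} Hz : |Z| = {abs(Z_parallel):12.1f} ohms")
# |Z| is modest away from resonance and grows without bound as f approaches 159.155 Hz
```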
SPICE Simulation Plot
We can’t divide any number by zero and arrive at a meaningful result, but we can say that the result approaches a value of infinity as the two parallel impedances get closer to each other. What this means in practical terms is that the total impedance of a tank circuit is infinite (behaving as an open circuit) at resonance. We can plot the consequences of this over a wide power supply frequency range with a short SPICE simulation: (Figure below)
Resonant circuit suitable for SPICE simulation.
The 1 pico-ohm (1 pΩ) resistor is placed in this SPICE analysis to overcome a limitation of SPICE: namely, that it cannot analyze a circuit containing a direct inductor-voltage source loop. (Figure below) A very low resistance value was chosen so as to have minimal effect on circuit behavior.
This SPICE simulation plots circuit current over a frequency range of 100 to 200 Hz in twenty even steps (100 and 200 Hz inclusive). Current magnitude on the graph increases from left to right, while frequency increases from top to bottom. The current in this circuit takes a sharp dip around the analysis point of 157.9 Hz, which is the closest analysis point to our predicted resonance frequency of 159.155 Hz. It is at this point that total current from the power source falls to zero.
The “Nutmeg” Graphical Post-Processor Plot
The plot above is produced from the above spice circuit file ( *.cir), the command (.plot) in the last line producing the text plot on any printer or terminal. A better looking plot is produced by the “nutmeg” graphical post-processor, part of the spice package. The above spice ( *.cir) does not require the plot (.plot) command, though it does no harm. The following commands produce the plot below: (Figure below)
From the nutmeg prompt:
Nutmeg produces plot of current I(v1) for parallel resonant circuit.
Bode Plots
Incidentally, the graph output produced by this SPICE computer analysis is more generally known as a Bode plot. Such graphs plot amplitude or phase shift on one axis and frequency on the other. The steepness of a Bode plot curve characterizes a circuit’s “frequency response,” or how sensitive it is to changes in frequency.
Review
• Resonance occurs when capacitive and inductive reactances are equal to each other.
• For a tank circuit with no resistance (R), resonant frequency can be calculated with the following formula:
• The total impedance of a parallel LC circuit approaches infinity as the power supply frequency approaches resonance.
• A Bode plot is a graph plotting waveform amplitude or phase on one axis and frequency on the other.
A similar effect happens in series inductive/capacitive circuits. (Figure below) When a state of resonance is reached (capacitive and inductive reactances equal), the two impedances cancel each other out and the total impedance drops to zero!
Simple series resonant circuit.
With the total series impedance equal to 0 Ω at the resonant frequency of 159.155 Hz, the result is a short circuit across the AC power source at resonance. In the circuit drawn above, this would not be good. I’ll add a small resistor (Figure below) in series along with the capacitor and the inductor to keep the maximum circuit current somewhat limited, and perform another SPICE analysis over the same range of frequencies: (Figure below)
Series resonant circuit suitable for SPICE.
Series resonant circuit plot of current I(v1).
As before, circuit current amplitude increases from bottom to top, while frequency increases from left to right. (Figure above) The peak is still seen to be at the plotted frequency point of 157.9 Hz, the closest analyzed point to our predicted resonance point of 159.155 Hz. This would suggest that our resonant frequency formula holds as true for simple series LC circuits as it does for simple parallel LC circuits, which is the case:
A word of caution is in order with series LC resonant circuits: because of the high currents which may be present in a series LC circuit at resonance, it is possible to produce dangerously high voltage drops across the capacitor and the inductor, as each component possesses significant impedance. We can edit the SPICE netlist in the above example to include a plot of voltage across the capacitor and inductor to demonstrate what happens: (Figure below)
Plot of Vc=V(2,3) 70 V peak, VL=v(3) 70 V peak, I=I(V1#branch) 0.532 A peak
According to SPICE, voltage across the capacitor and inductor reach a peak somewhere around 70 volts! This is quite impressive for a power supply that only generates 1 volt. Needless to say, caution is in order when experimenting with circuits such as this. This SPICE voltage is lower than the expected value due to the small (20) number of steps in the AC analysis statement (.ac lin 20 100 200). What is the expected value?
The expected values for capacitor and inductor voltage are 100 V. This voltage will stress these components to that level and they must be rated accordingly. However, these voltages are out of phase and cancel yielding a total voltage across all three components of only 1 V, the applied voltage. The ratio of the capacitor (or inductor) voltage to the applied voltage is the “Q” factor.
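That ratio can be sketched directly, using the 100 Ω reactance of this example and the 1 Ω series resistor (its value is mentioned later in this chapter):

```python
X = 100.0          # ohms: X_L = X_C at the 159.155 Hz resonant point
R = 1.0            # ohms: the small series resistor limiting circuit current
E_source = 1.0     # volts applied

Q = X / R                    # quality factor of the series resonant circuit
V_C = Q * E_source           # capacitor (and inductor) voltage magnitude at resonance
print(Q, V_C)                # 100.0 100.0 -- one hundred times the applied voltage
```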
Review
• The total impedance of a series LC circuit approaches zero as the power supply frequency approaches resonance.
• The same formula for determining resonant frequency in a simple tank circuit applies to simple series circuits as well.
• Extremely high voltages can be formed across the individual components of series LC circuits at resonance, due to high current flows and substantial individual component impedances.
6.04: Applications of Resonance
So far, the phenomenon of resonance appears to be a useless curiosity, or at most a nuisance to be avoided (especially if series resonance makes for a short-circuit across our AC voltage source!). However, this is not the case. Resonance is a very valuable property of reactive AC circuits, employed in a variety of applications.
One use for resonance is to establish a condition of stable frequency in circuits designed to produce AC signals. Usually, a parallel (tank) circuit is used for this purpose, with the capacitor and inductor directly connected together, exchanging energy between each other. Just as a pendulum can be used to stabilize the frequency of a clock mechanism’s oscillations, so can a tank circuit be used to stabilize the electrical frequency of an AC oscillator circuit. As was noted before, the frequency set by the tank circuit is solely dependent upon the values of L and C, and not on the magnitudes of voltage or current present in the oscillations: (Figure below)
Resonant circuit serves as stable frequency source.
Another use for resonance is in applications where the effects of greatly increased or decreased impedance at a particular frequency is desired. A resonant circuit can be used to “block” (present high impedance toward) a frequency or range of frequencies, thus acting as a sort of frequency “filter” to strain certain frequencies out of a mix of others. In fact, these particular circuits are called filters, and their design constitutes a discipline of study all by itself: (Figure below)
Resonant circuit serves as filter.
In essence, this is how analog radio receiver tuner circuits work to filter, or select, one station frequency out of the mix of different radio station frequency signals intercepted by the antenna.
Review
• Resonance can be employed to maintain AC circuit oscillations at a constant frequency, just as a pendulum can be used to maintain constant oscillation speed in a timekeeping mechanism.
• Resonance can be exploited for its impedance properties: either dramatically increasing or decreasing impedance for certain frequencies. Circuits designed to screen certain frequencies out of a mix of different frequencies are called filters.
In simple reactive circuits with little or no resistance, the effects of radically altered impedance will manifest at the resonance frequency predicted by the equation given earlier. In a parallel (tank) LC circuit, this means infinite impedance at resonance. In a series LC circuit, it means zero impedance at resonance:
$f_{resonant} = \dfrac{1}{2\pi \sqrt{LC}}$
However, as soon as significant levels of resistance are introduced into most LC circuits, this simple calculation for resonance becomes invalid.
On this page, we’ll take a look at several LC circuits with added resistance, using the same values for capacitance and inductance as before: 10 µF and 100 mH, respectively.
Calculating the Resonant Frequency of a High-Resistance Circuit
According to our simple equation above, the resonant frequency should be 159.155 Hz. Watch, though, where current reaches maximum or minimum in the following SPICE analyses:
Parallel LC circuit with resistance in series with L.
resonant circuit
v1 1 0 ac 1 sin
c1 1 0 10u
r1 1 2 100
l1 2 0 100m
.ac lin 20 100 200
.plot ac i(v1)
.end
Resistance in series with L produces minimum current at 136.8 Hz instead of calculated 159.2 Hz
Minimum current at 136.8 Hz instead of 159.2 Hz!
Parallel LC with resistance in series with C.
Here, an extra resistor (Rbogus) (see the figure below) is necessary to prevent SPICE from encountering trouble in analysis. SPICE can’t handle an inductor connected directly in parallel with any voltage source or any other inductor, so the addition of a series resistor is necessary to “break up” the voltage source/inductor loop that would otherwise be formed. This resistor is chosen to be a very low value for minimum impact on the circuit’s behavior.
Minimum current at roughly 180 Hz instead of 159.2 Hz!
Resistance in series with C shifts minimum current from calculated 159.2 Hz to roughly 180 Hz.
Series LC Circuits
Switching our attention to series LC circuits, (see the figure below) we experiment with placing significant resistances in parallel with either L or C. In the following series circuit examples, a 1 Ω resistor (R1) is placed in series with the inductor and capacitor to limit total current at resonance. The “extra” resistance inserted to influence resonant frequency effects is the 100 Ω resistor, R2. The results are shown in the figure below.
Series LC resonant circuit with resistance in parallel with L.
Maximum current at roughly 178.9 Hz instead of 159.2 Hz!
Series resonant circuit with resistance in parallel with L shifts maximum current from 159.2 Hz to roughly 180 Hz.
And finally, a series LC circuit with the significant resistance in parallel with the capacitor (figure below). The shifted resonance is shown in (Figure below)
Series LC resonant circuit with resistance in parallel with C.
Resistance in parallel with C in series resonant circuit shifts current maximum from calculated 159.2 Hz to about 136.8 Hz.
Antiresonance in LC Circuits
The tendency for added resistance to skew the point at which impedance reaches a maximum or minimum in an LC circuit is called antiresonance. The astute observer will notice a pattern between the four SPICE examples given above, in terms of how resistance affects the resonant peak of a circuit:
Parallel (“tank”) LC circuit:
• R in series with L: resonant frequency shifted down
• R in series with C: resonant frequency shifted up
Series LC circuit:
• R in parallel with L: resonant frequency shifted up
• R in parallel with C: resonant frequency shifted down
Again, this illustrates the complementary nature of capacitors and inductors: how resistance in series with one creates an antiresonance effect equivalent to resistance in parallel with the other. If you look even closer to the four SPICE examples given, you’ll see that the frequencies are shifted by the same amount, and that the shape of the complementary graphs are mirror-images of each other!
Antiresonance is an effect that resonant circuit designers must be aware of. The equations for determining antiresonance “shift” are complex, and will not be covered in this brief lesson. It should suffice the beginning student of electronics to understand that the effect exists, and what its general tendencies are.
The Skin Effect
Added resistance in an LC circuit is no academic matter. While it is possible to manufacture capacitors with negligible unwanted resistances, inductors are typically plagued with substantial amounts of resistance due to the long lengths of wire used in their construction. What is more, the resistance of wire tends to increase as frequency goes up, due to a strange phenomenon known as the skin effect where AC current tends to be excluded from travel through the very center of a wire, thereby reducing the wire’s effective cross-sectional area. Thus, inductors not only have resistance, but changing, frequency-dependent resistance at that.
Added Resistance in Circuits
As if the resistance of an inductor’s wire weren’t enough to cause problems, we also have to contend with the “core losses” of iron-core inductors, which manifest themselves as added resistance in the circuit. Since iron is a conductor of electricity as well as a conductor of magnetic flux, changing flux produced by alternating current through the coil will tend to induce electric currents in the core itself (eddy currents). This effect can be thought of as though the iron core of the inductor were a sort of secondary transformer coil powering a resistive load: the less-than-perfect conductivity of the iron metal. These effects can be minimized with laminated cores, good core design, and high-grade materials, but never completely eliminated.
RLC Circuits
One notable exception to the rule of circuit resistance causing a resonant frequency shift is the case of series resistor-inductor-capacitor (“RLC”) circuits. So long as all components are connected in series with each other, the resonant frequency of the circuit will be unaffected by the resistance. (Figure below) The resulting plot is shown in (Figure below).
Series LC with resistance in series.
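A sketch of this series RLC netlist (node numbering and sweep range assumed; component values from the text, with the series resistance raised to 100 Ω):
resonant circuit
v1 1 0 ac 1 sin
r1 1 2 100
l1 2 3 100m
c1 3 0 10u
.ac lin 20 100 250
.plot ac i(v1)
.end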
Maximum current at 159.2 Hz once again!
Resistance in series resonant circuit leaves current maximum at calculated 159.2 Hz, broadening the curve.
Note that the peak of the current graph (Figure below) has not changed from the earlier series LC circuit (the one with the 1 Ω token resistance in it), even though the resistance is now 100 times greater. The only thing that has changed is the “sharpness” of the curve. Obviously, this circuit does not resonate as strongly as one with less series resistance (it is said to be “less selective”), but at least it has the same natural frequency!
Antiresonance’s Dampening Effect
It is noteworthy that antiresonance has the effect of dampening the oscillations of free-running LC circuits such as tank circuits. In the beginning of this chapter we saw how a capacitor and inductor connected directly together would act something like a pendulum, exchanging voltage and current peaks just like a pendulum exchanges kinetic and potential energy. In a perfect tank circuit (no resistance), this oscillation would continue forever, just as a frictionless pendulum would continue to swing at its resonant frequency forever. But frictionless machines are difficult to find in the real world, and so are lossless tank circuits. Energy lost through resistance (or inductor core losses or radiated electromagnetic waves or . . .) in a tank circuit will cause the oscillations to decay in amplitude until they are no more. If enough energy losses are present in a tank circuit, it will fail to resonate at all.
Antiresonance’s dampening effect is more than just a curiosity: it can be used quite effectively to eliminate unwanted oscillations in circuits containing stray inductances and/or capacitances, as almost all circuits do. Take note of the following L/R time delay circuit: (Figure below)
L/R time delay circuit
The idea of this circuit is simple: to “charge” the inductor when the switch is closed. The rate of inductor charging will be set by the ratio L/R, which is the time constant of the circuit in seconds. However, if you were to build such a circuit, you might find unexpected oscillations (AC) of voltage across the inductor when the switch is closed. (Figure below) Why is this? There’s no capacitor in the circuit, so how can we have resonant oscillation with just an inductor, resistor, and battery?
Inductor ringing due to resonance with stray capacitance.
All inductors contain a certain amount of stray capacitance due to turn-to-turn and turn-to-core insulation gaps. Also, the placement of circuit conductors may create stray capacitance. While clean circuit layout is important in eliminating much of this stray capacitance, there will always be some that you cannot eliminate. If this causes resonant problems (unwanted AC oscillations), added resistance may be a way to combat it. If resistor R is large enough, it will cause a condition of antiresonance, dissipating enough energy to prohibit the inductance and stray capacitance from sustaining oscillations for very long.
Interestingly enough, the principle of employing resistance to eliminate unwanted resonance is one frequently used in the design of mechanical systems, where any moving object with mass is a potential resonator. A very common application of this is the use of shock absorbers in automobiles. Without shock absorbers, cars would bounce wildly at their resonant frequency after hitting any bump in the road. The shock absorber’s job is to introduce a strong antiresonant effect by dissipating energy hydraulically (in the same way that a resistor dissipates energy electrically).
Review
• Added resistance to an LC circuit can cause a condition known as antiresonance, where the peak impedance effects happen at frequencies other than that which gives equal capacitive and inductive reactances.
• Resistance inherent in real-world inductors can contribute greatly to conditions of antiresonance. One source of such resistance is the skin effect, caused by the exclusion of AC current from the center of conductors. Another source is that of core losses in iron-core inductors.
• In a simple series LC circuit containing resistance (an “RLC” circuit), resistance does not produce antiresonance. Resonance still occurs when capacitive and inductive reactances are equal.
The Q, or quality, factor of a resonant circuit is a measure of the “goodness” or quality of that circuit. A higher value for this figure of merit corresponds to a narrower bandwidth, which is desirable in many applications. More formally, Q is the ratio of power stored to power dissipated in the circuit reactance and resistance, respectively:
$Q = \dfrac{P_{stored}}{P_{dissipated}} = \dfrac{I^2 X}{I^2 R} = \dfrac{X}{R}$
where X is the capacitive or inductive reactance at resonance and R is the series resistance.
This formula is applicable to series resonant circuits, and also parallel resonant circuits if the resistance is in series with the inductor. This is the case in practical applications, as we are mostly concerned with the resistance of the inductor limiting the Q. Note: Some text may show X and R interchanged in the “Q” formula for a parallel resonant circuit. This is correct for a large value of R in parallel with C and L. Our formula is correct for a small R in series with L.
A practical application of “Q” is that voltage across L or C in a series resonant circuit is Q times total applied voltage. In a parallel resonant circuit, current through L or C is Q times the total applied current.
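As a purely illustrative example (numbers assumed, not from the text): with 1 volt applied to a series resonant circuit having a Q of 10, roughly 10 volts would appear across the inductor and across the capacitor at resonance:
$E_L = E_C = Q \times E_{applied} = 10 \times 1\text{ V} = 10\text{ V}$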
Series Resonant Circuits
A series resonant circuit looks like a resistance at the resonant frequency. (Figure below) Since the definition of resonance is $X_L = X_C$, the reactive components cancel, leaving only the resistance to contribute to the impedance. The impedance is also at a minimum at resonance. (Figure below) Below the resonant frequency, the series resonant circuit looks capacitive since the impedance of the capacitor increases to a value greater than the decreasing inductive reactance, leaving a net capacitive value. Above resonance, the inductive reactance increases, capacitive reactance decreases, leaving a net inductive component.
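One way to see this is through the standard series-impedance expression (not derived here): when the reactances are equal they cancel, and only R remains:
$Z = \sqrt{R^2 + (X_L - X_C)^2} = R \quad \text{when } X_L = X_C$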
At resonance the series resonant circuit appears purely resistive. Below resonance it looks capacitive. Above resonance it appears inductive.
Current is maximum at resonance, impedance at a minimum. Current is set by the value of the resistance. Above or below resonance, impedance increases.
Impedance is at a minimum at resonance in a series resonant circuit.
The resonant current peak may be changed by varying the series resistor, which changes the Q. (Figure below) This also affects the broadness of the curve. A low resistance, high Q circuit has a narrow bandwidth, as compared to a high resistance, low Q circuit. Bandwidth in terms of Q and resonant frequency:
$BW = \dfrac{f_c}{Q} \qquad \text{where } f_c = \text{resonant frequency}$
A high Q resonant circuit has a narrow bandwidth as compared to a low Q
Bandwidth is measured between the 0.707 current amplitude points. The 0.707 current points correspond to the half power points since $P = I^2R$, and $(0.707)^2 = 0.50$. (Figure below)
Bandwidth, Δf is measured between the 70.7% amplitude points of series resonant circuit.
In the Figure above, the 100% current point is 50 mA. The 70.7% level is 0.707(50 mA) = 35.4 mA. The lower and upper band edges read from the curve are 291 Hz for $f_l$ and 355 Hz for $f_h$. The bandwidth is 64 Hz, and the half power points are ± 32 Hz of the center resonant frequency:
$BW = \Delta f = f_h - f_l = 355\text{ Hz} - 291\text{ Hz} = 64\text{ Hz}$
$f_l = f_c - \dfrac{\Delta f}{2} = 323\text{ Hz} - 32\text{ Hz} = 291\text{ Hz}$
$f_h = f_c + \dfrac{\Delta f}{2} = 323\text{ Hz} + 32\text{ Hz} = 355\text{ Hz}$
Since $BW = f_c/Q$:
$Q = \dfrac{f_c}{BW} = \dfrac{323\text{ Hz}}{64\text{ Hz}} \approx 5$
Parallel Resonant Circuits
The impedance of a parallel resonant circuit is maximum at the resonant frequency. (Figure below) Below the resonant frequency, the parallel resonant circuit looks inductive since the impedance of the inductor is lower, drawing the larger proportion of current. Above resonance, the capacitive reactance decreases, drawing the larger current, thus, taking on a capacitive characteristic.
A parallel resonant circuit is resistive at resonance, inductive below resonance, capacitive above resonance.
Impedance is maximum at resonance in a parallel resonant circuit, but decreases above or below resonance. Voltage is at a peak at resonance since voltage is proportional to impedance (E=IZ). (Figure below)
Parallel resonant circuit: Impedance peaks at resonance.
A low Q due to a high resistance in series with the inductor produces a low peak on a broad response curve for a parallel resonant circuit. (Figure below) Conversely, a high Q is due to a low resistance in series with the inductor. This produces a higher peak in the narrower response curve. The high Q is achieved by winding the inductor with larger diameter (smaller gauge), lower resistance wire.
Parallel resonant response varies with Q.
The bandwidth of the parallel resonant response curve is measured between the half power points. This corresponds to the 70.7% voltage points since power is proportional to $E^2$ ($(0.707)^2 = 0.50$). Since voltage is proportional to impedance, we may use the impedance curve. (Figure below)
Bandwidth, Δf is measured between the 70.7% impedance points of a parallel resonant circuit.
In the Figure above, the 100% impedance point is 500 Ω. The 70.7% level is 0.707(500) = 354 Ω. The lower and upper band edges read from the curve are 281 Hz for $f_l$ and 343 Hz for $f_h$. The bandwidth is 62 Hz, and the half power points are ± 31 Hz of the center resonant frequency:
$BW = \Delta f = f_h - f_l = 343\text{ Hz} - 281\text{ Hz} = 62\text{ Hz}$
$Q = \dfrac{f_c}{BW} = \dfrac{312\text{ Hz}}{62\text{ Hz}} \approx 5$
Depicted below is a very simple AC circuit. If the load resistor’s power dissipation were substantial, we might call this a “power circuit” or “power system” instead of regarding it as just a regular circuit. The distinction between a “power circuit” and a “regular circuit” may seem arbitrary, but the practical concerns are definitely not.
Single phase power system schematic diagram shows little about the wiring of a practical power circuit.
One such concern is the size and cost of wiring necessary to deliver power from the AC source to the load. Normally, we do not give much thought to this type of concern if we’re merely analyzing a circuit for the sake of learning about the laws of electricity. However, in the real world it can be a major concern. If we give the source in the above circuit a voltage value and also give power dissipation values to the two load resistors, we can determine the wiring needs for this particular circuit: (Figure below)
As a practical matter, the wiring for the 20 kW loads at 120 Vac is rather substantial (167 A).
83.33 amps for each load resistor in Figure above adds up to 166.66 amps total circuit current. This is no small amount of current, and would necessitate copper wire conductors of at least 1/0 gage. Such wire is well over 1/4 inch (6 mm) in diameter, weighing over 300 pounds per thousand feet. Bear in mind that copper is not cheap either! It would be in our best interest to find ways to minimize such costs if we were designing a power system with long conductor lengths.
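The current figures above come from the basic power relationship, with nothing assumed beyond the 120 volt source and the two 10 kW loads:
$I = \dfrac{P}{E} = \dfrac{10\text{ kW}}{120\text{ V}} = 83.33\text{ A per load} \qquad I_{total} = 2 \times 83.33\text{ A} = 166.66\text{ A}$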
One way to do this would be to increase the voltage of the power source and use loads built to dissipate 10 kW each at this higher voltage. The loads, of course, would have to have greater resistance values to dissipate the same power as before (10 kW each) at a greater voltage than before. The advantage would be less current required, permitting the use of smaller, lighter, and cheaper wire: (Figure below)
Same 10 kW loads at 240 Vac requires less substantial wiring than at 120 Vac (83 A).
Now our total circuit current is 83.33 amps, half of what it was before. We can now use number 4 gage wire, which weighs less than half of what 1/0 gage wire does per unit length. This is a considerable reduction in system cost with no degradation in performance. This is why power distribution system designers elect to transmit electric power using very high voltages (many thousands of volts): to capitalize on the savings realized by the use of smaller, lighter, cheaper wire.
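The halving follows from the same relationship at the higher voltage:
$I_{total} = \dfrac{P_{total}}{E} = \dfrac{20\text{ kW}}{240\text{ V}} = 83.33\text{ A}$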
However, this solution is not without disadvantages. Another practical concern with power circuits is the danger of electric shock from high voltages. Again, this is not usually the sort of thing we concentrate on while learning about the laws of electricity, but it is a very valid concern in the real world, especially when large amounts of power are being dealt with. The gain in efficiency realized by stepping up the circuit voltage presents us with increased danger of electric shock. Power distribution companies tackle this problem by stringing their power lines along high poles or towers, and insulating the lines from the supporting structures with large, porcelain insulators.
At the point of use (the electric power customer), there is still the issue of what voltage to use for powering loads. High voltage gives greater system efficiency by means of reduced conductor current, but it might not always be practical to keep power wiring out of reach at the point of use the way it can be elevated out of reach in distribution systems. This tradeoff between efficiency and danger is one that European power system designers have decided to risk, all their households and appliances operating at a nominal voltage of 240 volts instead of 120 volts as it is in North America. That is why tourists from America visiting Europe must carry small step-down transformers for their portable appliances, to step the 240 VAC (volts AC) power down to a more suitable 120 VAC.
Is there any way to realize the advantages of both increased efficiency and reduced safety hazard at the same time? One solution would be to install step-down transformers at the end-point of power use, just as the American tourist must do while in Europe. However, this would be expensive and inconvenient for anything but very small loads (where the transformers can be built cheaply) or very large loads (where the expense of thick copper wires would exceed the expense of a transformer).
An alternative solution would be to use a higher voltage supply to provide power to two lower voltage loads in series. This approach combines the efficiency of a high-voltage system with the safety of a low-voltage system: (Figure below)
Series connected 120 Vac loads, driven by 240 Vac source at 83.3 A total current.
Notice the polarity markings (+ and -) for each voltage shown, as well as the unidirectional arrows for current. For the most part, I’ve avoided labeling “polarities” in the AC circuits we’ve been analyzing, even though the notation is valid to provide a frame of reference for phase. In later sections of this chapter, phase relationships will become very important, so I’m introducing this notation early on in the chapter for your familiarity.
The current through each load is the same as it was in the simple 120 volt circuit, but the currents are not additive because the loads are in series rather than parallel. The voltage across each load is only 120 volts, not 240, so the safety factor is better. Mind you, we still have a full 240 volts across the power system wires, but each load is operating at a reduced voltage. If anyone is going to get shocked, the odds are that it will be from coming into contact with the conductors of a particular load rather than from contact across the main wires of a power system.
There’s only one disadvantage to this design: the consequences of one load failing open, or being turned off (assuming each load has a series on/off switch to interrupt current) are not good. Being a series circuit, if either load were to open, current would stop in the other load as well. For this reason, we need to modify the design a bit: (Figure below)
Addition of neutral conductor allows loads to be individually driven.
Instead of a single 240 volt power supply, we use two 120 volt supplies (in phase with each other!) in series to produce 240 volts, then run a third wire to the connection point between the loads to handle the eventuality of one load opening. This is called a split-phase power system. Three smaller wires are still cheaper than the two wires needed with the simple parallel design, so we’re still ahead on efficiency. The astute observer will note that the neutral wire only has to carry the difference of current between the two loads back to the source. In the above case, with perfectly “balanced” loads consuming equal amounts of power, the neutral wire carries zero current.
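For the purely resistive, in-phase loads drawn here, the neutral current is simply the difference of the two load currents — zero when both loads draw the same 83.33 A:
$I_{neutral} = \left| I_{load1} - I_{load2} \right| = \left| 83.33\text{ A} - 83.33\text{ A} \right| = 0\text{ A}$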
Notice how the neutral wire is connected to earth ground at the power supply end. This is a common feature in power systems containing “neutral” wires, since grounding the neutral wire ensures the least possible voltage at any given time between any “hot” wire and earth ground.
An essential component to a split-phase power system is the dual AC voltage source. Fortunately, designing and building one is not difficult. Since most AC systems receive their power from a step-down transformer anyway (stepping voltage down from high distribution levels to a user-level voltage like 120 or 240), that transformer can be built with a center-tapped secondary winding: (Figure below)
American 120/240 Vac power is derived from a center tapped utility transformer.
If the AC power comes directly from a generator (alternator), the coils can be similarly center-tapped for the same effect. The extra expense to include a center-tap connection in a transformer or alternator winding is minimal.
Here is where the (+) and (-) polarity markings really become important. This notation is often used to reference the phasings of multiple AC voltage sources, so it is clear whether they are aiding (“boosting”) each other or opposing (“bucking”) each other. If not for these polarity markings, phase relations between multiple AC sources might be very confusing. Note that the split-phase sources in the schematic (each one 120 volts ∠ 0o), with polarity marks (+) to (-) just like series-aiding batteries can alternatively be represented as such: (Figure below)
Split phase 120/240 Vac source is equivalent to two series aiding 120 Vac sources.
To mathematically calculate voltage between “hot” wires, we must subtract voltages, because their polarity marks show them to be opposed to each other:
$120\text{ V} \angle 0^{\circ} - 120\text{ V} \angle 180^{\circ} = (120 + j0\text{ V}) - (-120 + j0\text{ V}) = 240\text{ V} \angle 0^{\circ}$
If we mark the two sources’ common connection point (the neutral wire) with the same polarity mark (-), we must express their relative phase shifts as being 180o apart. Otherwise, we’d be denoting two voltage sources in direct opposition with each other, which would give 0 volts between the two “hot” conductors. Why am I taking the time to elaborate on polarity marks and phase angles? It will make more sense in the next section!
Power systems in American households and light industry are most often of the split-phase variety, providing so-called 120/240 VAC power. The term “split-phase” merely refers to the split-voltage supply in such a system. In a more general sense, this kind of AC power supply is called single phase because both voltage waveforms are in phase, or in step, with each other.
The term “single phase” is a counterpoint to another kind of power system called “polyphase” which we are about to investigate in detail. Apologies for the long introduction leading up to the title-topic of this chapter. The advantages of polyphase power systems are more obvious if one first has a good understanding of single phase systems.
REVIEW
• Single phase power systems are defined by having an AC source with only one voltage waveform.
• A split-phase power system is one with multiple (in-phase) AC voltage sources connected in series, delivering power to loads at more than one voltage, with more than two wires. They are used primarily to achieve balance between system efficiency (low conductor currents) and safety (low load voltages).
• Split-phase AC sources can be easily created by center-tapping the coil windings of transformers or alternators.