DO cyclone only). The difference between these two measurements can be used to assess
what the size selector is removing. The particle concentrations were measured using an
Aerodynamic Particle Sizer (APS) Model 3321 (0.5 to 20 µm; TSI Inc., Shoreview,
MN) and a Scanning Mobility Particle Sizer (SMPS) Model 3080 (0.01 to 0.6 µm;
TSI Inc., Shoreview, MN). These two instruments ensured that the entire size range of
interest could be covered.
Figure 2.2 shows a schematic of the experimental apparatus used for penetration efficiency
tests. To aerosolize the particle suspension in the chamber, multiple drops of each
nanosphere size standard were placed into a nebulizer jar (BGI Collison Nebulizer, Mesa
Labs, 10 Park Place, Butler, NJ) with about 15 mL of deionized water. The air and particle
suspension were piped to the top of the chamber which was under negative pressure. To limit
the moisture level inside the chamber, silica beads were placed on a grate located just below
the top access port of the chamber. Before testing with size selectors commenced, the
chamber was allowed to fill and particle concentrations to stabilize, as confirmed by the APS
and SMPS.
The penetration efficiency testing in the chamber was conducted at the same flow rates as
sampling (i.e., 1.7 LPM for the DPMIs and 2.2 LPM for the SCCs) in order to ensure that
results could be compared to the expected 0.8 µm cut size for a new/clean device. For each
test, alternating measurements were made between the number concentrations of particles
passing through a DO cyclone only (i.e., background) and passing through the DO cyclone
and size selector of interest (SS). The sequence of measurements for one experiment was: 1)
DO cyclone, 2) SS, 3) DO cyclone, 4) SS, 5) DO cyclone, where the APS took 20 samples
for 20 seconds each and the SMPS took 3 samples for 135 seconds each. For each sequence,
the average particle number concentration of each size channel was determined. Then the two
SS sequences were divided by the average of the background (DO cyclone) before and after
each SS sequence. The two SS average values were then averaged together to get one particle
number concentration value. This procedure was repeated twice for each size selector tested.
The final average of these two replicate tests is reported here. The d₅₀ penetration (i.e., cut
size) of each SS was determined by plotting penetration versus particle size.
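To make the averaging scheme concrete, the sketch below (hypothetical function and variable names, not code from the study) computes the penetration for a single size channel, assuming the per-sequence mean concentrations have already been extracted from the APS or SMPS data:

```python
import numpy as np

def penetration(do_1, ss_1, do_2, ss_2, do_3):
    """Penetration for one size channel from the five-step sequence.

    Each argument is the mean particle number concentration for one
    measurement period: DO cyclone alone (background) or DO cyclone
    plus size selector (SS).
    """
    # Each SS measurement is normalized by the average of the
    # bracketing background (DO-only) measurements.
    p1 = ss_1 / np.mean([do_1, do_2])
    p2 = ss_2 / np.mean([do_2, do_3])
    # The two normalized values are then averaged into one result.
    return float(np.mean([p1, p2]))

# Example: a channel where the selector passes about half the particles
print(penetration(1000.0, 520.0, 980.0, 490.0, 1020.0))  # ~0.51
```

Repeating this per size channel and plotting the result against particle size yields the curve from which the d₅₀ is read.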
Figure 2.5 New and aged SCCs (top) and DPMIs (bottom) at 5 and 45 total hours of sampling using ELF
pumps.
To better understand the effects of the DPMI aging, the measured flow rates of the Airtecs
and ELF pumps should also be examined (Figure 2.6). While the Airtec does not have a
stated acceptable error tolerance for flow rate, 5% is specified by MSHA for sampling with
the ELF pump (MSHA, 2014). Using a 5% tolerance (shown by the dashed lines in Figure
2.6), the flow rates of Airtecs with dirty DPMIs began significantly decreasing from their set
value of 1.7 LPM on day 4, and the problem became increasingly worse with further aging.
This observation is consistent with the aforementioned flow errors associated with use of the
dirty DPMIs, and it confirms that aging physically restricts flow through the device. The dust
monitoring data (Table 2.2) also provide some indication of the potential for particle loading
in the DPMI. While data were only collected on days 7-12, the days with the lowest observed
dust concentrations (days 7 and 8) correlated with smaller deviations from the set flow rate
for the Airtecs using dirty DPMIs.
differences between Airtec and ELF samples collected with clean DPMIs are observed
(Figure 2.4).
As suggested earlier, beyond restricting flow, the physical loading of the DPMI with
continued sampling was also expected to reduce its effective cut size. Figure 2.7 shows the
results of the penetration efficiency tests, which confirmed this hypothesis. While a minor
reduction is noted for the 5-hour aged DPMIs (and SCCs), the change is dramatic for the 60-
hour DPMIs. The two devices aged with the Airtecs showed an average d₅₀ cut size of about
0.36 µm, and those aged with the ELF pumps showed an average of 0.39 µm. Taken together
with the observation that, despite maintaining their set flow rate, the ELF pumps with aged
DPMIs collected less EC than those with clean DPMIs, this implies DPM in the study mine
must include a sizeable mass fraction that is greater than about 0.39 µm. Based on the results
from the SCC testing, however, it seems that the fraction of very large DPM agglomerates is
negligible. The average d₅₀ of the 60-hour aged SCCs was only 0.70 µm – and the aged and
clean SCCs generally produced the same EC mass accumulation results.
Figure 2.7 Penetration efficiencies of the Airtec DPMIs and ELF DPMIs and SCCs after 5 and 60 hours of
aging. The clean SCC is reported as having a cut size of about 0.8 µm at a flow of 2.2 LPM (Cauda et al.,
2014).
4. Conclusions
The results of this study offer several important insights into the performance of impactor
and SCC size selectors for DPM sampling. In general, the DPMI can undergo substantial
aging without much effect on DPM sample mass collected – even in a relatively high DPM
environment. This is because, while aging means that the DPMI is becoming physically
clogged, its effective cut size is reduced gradually and most DPM actually occurs far below
the initial cut size (i.e., 0.8 µm).
Two key caveats to the above conclusion must be noted, however. First, due to the clogging
effect of particles being loaded into the DPMI, sampling pumps should either automatically
adjust to maintain a desired flow rate, or pump flow rate should be carefully monitored to
allow accurate determination of DPM concentration in the sampled environment. In the latter
instance, if the flow rate deviates significantly from the desired value, the cut size of the
DPMI will, by design, also change. Second, the severity of DPMI aging will surely be
related to not only the relative DPM concentration in the sampled environment, but also the
dust concentration. In dustier environments, the DPMI will age more rapidly. This
underscores the need to use a DO cyclone upstream of the DPMI, as is often recommended
and was done here.
With respect to the SCC, the data reported here indicate that aging of this device is indeed
very slow. Moreover, it produced similar results to new DPMIs. As such, it may be a
favorable alternative to impactor-type size selectors for DPM sampling – particularly for
continuous monitoring applications. In that case and depending on the DPM and dust
conditions in the sampled environment, a routine program might be developed to periodically
clean the SCC in order to maintain its performance.
5. Acknowledgements
The authors would like to thank CDC/NIOSH (Contract Number: 200-2014-59646) for
funding the work. Sincere thanks also to Shawn Vanderslice of NIOSH for sample analysis,
Chelsea Barrett for helping with equipment setup, and all the personnel at the study mine for
their interest and support.
6. References
Abdul-Khalek, I.S., Kittelson, D.B., Graskow, B.R., Wei, Q. and Brear, F. (1998). Diesel
Exhaust Particle Size: Measurement Issues and Trends. SAE Technical Paper Series.
Barrett, C., Gaillard, S., Sarver, E. (2017). Demonstration of continuous monitors for
tracking DPM trends over prolonged periods in an underground mine. Proceedings of the
16th North American Mine Ventilation Symposium, Golden, CO, June 17-22, 2017. (Society
for Mining, Metallurgy, and Exploration, Littleton, CO).
Birch, M. E. (2016). Monitoring of Diesel Particulate Exhaust in the Workplace. NIOSH
Manual of Analytical Methods (NMAM), 5th Edition.
Cantrell, Bruce K. and Watts, Winthrop F. Jr. (1997). Diesel Exhaust Aerosol: Review of
Occupational Exposure. Applied Occupational and Environmental Hygiene,
12:12, 1019-1027.
Cantrell, Bruce K. and Rubow, Kenneth L. (1992) Diesel Exhaust Aerosol Measurements in
Underground Metal and Nonmetal Mines. Diesels in underground mines: measurement and
control of particulate emissions proceedings. Bureau of Mines Information and Technology
Transfer Seminar, Minneapolis, MN, September 29-30.
Cauda, Emanuele, Sheehan, Maura, Gussman, Robert, Kenny, Lee; and Volkwein, Jon.
(2014). An Evaluation of Sharp Cut Cyclones for Sampling Diesel Particulate Matter Aerosol
Chapter 3. A field study on the possible attachment of DPM and respirable
dust in mining environments
Sallie Gaillarda, Emily Sarvera*, Emanuele Caudab
a Virginia Tech, Department of Mining and Minerals Engineering, Blacksburg, VA 24060, USA
b CDC/NIOSH Office of Mine Safety and Health Research (OMSHR), Pittsburgh, PA 15236, USA
Abstract
Diesel particulate matter (DPM) and mineral dusts are often present together in mine
environments. DPM is generally considered to occur in the submicron range, whereas dust
particles generated in the mine are often supramicron. To avoid analytical interferences when
measuring DPM surrogates (i.e., elemental or total carbon, EC or TC), size selectors are
frequently used to separate sub- and supramicron particles. This approach has previously
been shown to exclude a fraction of the DPM from the sample. The excluded DPM may itself
be oversized, but another possibility is that submicron DPM attaches to respirable dust in the
mine atmosphere. To gain insights into the possible attachment between DPM and dust, a
field study was conducted in an underground stone mine. Submicron, respirable and total
airborne particulate samples were collected in three locations to determine the EC and TC
concentrations by the NIOSH 5040 Standard Method, and carbonate interferences were
addressed by acidification of the samples prior to analysis. Additionally, airborne particulates
were collected onto grids for analysis by transmission electron microscopy (TEM) in order to
identify specific instances of DPM-dust attachment. A low-flow sampler with an electrostatic
precipitator was used for this purpose to maximize the possibility of collecting particles as
they occurred in the mine atmosphere, rather than forcing them together as an artifact of
sampling.
1. Introduction
Diesel particulate matter (DPM) is a significant occupational health hazard for underground
mine workers (Cantrell and Rubow, 1992; Cantrell and Watts, 1997; Bugarski et al., 2011).
DPM is largely comprised of elemental (EC) and organic carbon (OC), which have been
observed to occur in a relatively constant ratio in mine settings (Kittelson, 1998; Abdul-
Khalek, 1998; Noll et al., 2007). For this reason, EC and total carbon (TC, taken as the sum
of EC and OC) have been established as suitable surrogates for monitoring DPM (MSHA,
2008). In metal and non-metal mines in the US, MSHA regulates a personal exposure limit of
160 µg/m3 of TC on an 8-hr time-weighted average basis (MSHA, 2008). To measure TC,
filter samples are collected and analyzed by the NIOSH 5040 standard method (MSHA,
2008; Birch, 2016). This is a thermal-optical method that includes a series of temperature
ramps in first a helium atmosphere and then an oxygen atmosphere to drive off the OC and
then EC, respectively; any EC created from thermal decomposition of OC can be corrected
by tracking laser transmittance (i.e., color) changes on the sample filter during analysis
(Birch, 2016).
Mine atmospheres generally have significant airborne dust concentrations, which can
interfere with the 5040 analysis (Haney, 2000; Noll et al., 2005; Noll et al., 2013; Vermeulen
et al., 2010). Mineral dusts with carbonate content can be thermally decomposed in the OC
measurement step of the 5040 method, effectively increasing the TC result. Mineral dusts
with refractory minerals may also affect the optical measurements during the analysis
(Haney, 2000; Birch, 2016). To address the problem of carbonate interference, the carbonate
carbon can be removed from the sample by acidification prior to 5040 analysis, or it can be
removed analytically from the 5040 result (Birch, 2016) – though these approaches have not
been practically favored. Another approach, and one which applies to all dust types, is use of
a particle size selector during sampling. Devices such as the DPM impactor (DPMI; SKC,
Eighty Four, PA) are designed to remove larger particles from the sample stream such that
only particles smaller than the device’s cut size (i.e., 0.8 µm at a flow rate of 1.7 LPM) are
deposited on the sample filter. This approach thus takes advantage of the size difference that
generally exists between DPM, which is mostly in the submicron range, and dust, which is
mostly in the supramicron range (Cantrell and Rubow, 1991; Cantrell and Watts, 1997;
Haney, 2000; Noll et al., 2005).
There is of course no perfect cut size to completely segregate one particle type from the
other. It is well established that DPM occurs in two primary modes: the nuclei mode
includes nano-sized (i.e., less than 50 nm) particles of semi-volatile organic compounds, and
the accumulation mode includes spherical soot particles that agglomerate together in globs
and chains, often with adsorbed organics (Kittelson, 1998; Abdul-Khalek, 1998; Cantrell
and Watts, 1992; Bukowiecki et al., 2002; Pietikainen, 2009). The nuclei mode represents
about 90% of DPM by particle number, while the accumulation mode accounts for most of
the DPM mass (Kittelson, 1998; Abdul-Khalek, 1998). Only a small fraction of DPM
particles (i.e., 5-20% by mass) are larger than about 1 µm, and these are formed by continued
agglomeration under conditions allowing relatively long residence times with high particle
concentrations (Cantrell and Watts, 1992; Bukowiecki et al., 2002; Chou et al., 2003). On the
other hand, dust generated in many mine environments tends to be mostly greater than about
1 µm (Cantrell and Watts, 1997).
Considering these general size ranges, the size selector approach to DPM sampling has
proven to be quite efficient in limiting mineral dust interferences in 5040 analysis (Haney,
2000; Noll et al., 2005; Noll et al., 2013). However, there is a potential to miss some of the
DPM. Anecdotally, this is evident in the gradual blackening of a DPMI with use,
or the collection of black particulates in the grit pot of a cyclone size selector. Inadvertent
DPM removal when using a size selector can happen if the device by virtue of its design
actually removes some DPM, if the DPM itself is larger than the selector’s cut size, or if the
DPM is effectively larger than the cut size because it is attached to larger particles. Removal
of DPM in the size selector may be an issue, for example, in cases where an impactor is used
excessively. As the impactor begins to load with particulates, including DPM, the effect
becomes increasingly worse because the impactor’s cut size is gradually reduced (see
Chapter 2; Cauda et al., 2014a). Moreover, in cases where tubing must be used between the
size selector and filter cassette (e.g., in real-time monitoring instruments like the FLIR
Airtec), the tubing can also remove some DPM. Conductive tubing is often recommended
to minimize this problem (Noll et al., 2013).
The case of oversized DPM has also been considered (Cantrell and Rubow, 1991; Haney,
2000; Vermeulen et al., 2010). Vermeulen et al. (2010) conducted extensive work in seven
non-metal mines to collect submicron (i.e., using an impactor), respirable (i.e., using a Dorr-
Oliver cyclone, to remove all particles greater than 10 µm and yield a d₅₀ cut size of about
3.5 µm), and total particulates (i.e., using an open-face cassette). Their results showed that
respirable and total EC were generally similar, but submicron EC was consistently less than
respirable EC. Specifically, submicron EC was 77% of respirable EC, on average, though
this fraction varied between 54-84%. These results indicate that some DPM is practically
missed by typical sampling procedures, and are consistent with others where a similar
experimental approach (i.e., measurements using different sampling trains) was used in the
lab or the field (e.g., Haney, 2000; Noll et al., 2005).
Although exclusion of oversized DPM during sampling has commonly been attributed to the
size of the DPM itself, attachment of DPM and dust could also be a contributing factor. In a
lab study aimed at measuring airborne DPM in the presence of mineral dust particles, Noll et
al. (2013) suggested that coagulation (i.e., attachment) between DPM and dust might cause
less DPM to be collected on sample filters when using an impactor than when not using it. To
specifically investigate this possibility of mixed aerosol exposures, Cauda et al. (2014b)
conducted some lab tests in a calm air chamber containing DPM and mineral dust
concentrations that may be typical of a mine environment. They used a small electrostatic
precipitator (ESPnano; DASH Connector Technology, Spokane, WA) to collect samples of
the airborne particles. The precipitator creates an electric field that charges the particles and
simultaneously deposits them onto a collection plate. This allows determination of whether
particles may interact in the ambient air; if particles deposit together, they likely occurred
together in the air, rather than being forced together during sampling (Miller et al., 2010).
Based on microscopy analysis, Cauda et al. concluded that some DPM and dust particles
were indeed coagulating in the chamber.
Mixed aerosols in general, and the attachment of DPM and dust in particular, have not been
widely investigated. Beyond the possibility for underestimation of DPM by typical sampling
procedures, there may be unique health implications. For example, while some mine dusts
(e.g., limestone) are generally regarded as minor respiratory irritants (NIOSH, 2016), the
synergistic or antagonistic effects of DPM and dust co-exposures or DPM-laden dust
exposures are not known. Indeed, only a few studies exist that specifically examine co-
exposures to mine particulates (e.g., Karagianes et al., 1981).
The purpose of this field study was to explore the possibility of DPM and dust attachment in
an operating stone mine. The experimental design combined two types of sampling and
analysis: collection of submicron, respirable and total particulates for 5040 analysis to
determine effective size fractions of DPM, and collection of ambient particulates for
microscopic analysis to identify instances of attachment.
In each sampling location, triplicate samples were collected with each sampling train (i.e., to
yield a total of nine samples). Each setup used an Escort ELF pump (Zefon International Inc.,
Ocala, FL) calibrated to 1.7 LPM, and flow rates were checked before and after sample
collection. All samples were collected on pre-burned Tissuequartz™ filters (2500 QAT-UP,
37 mm; Pall Corporation, Port Washington, NY) as required by the 5040 standard method.
Both primary (i.e., particulates) and secondary (i.e., adsorbed OC) filters were collected such
that OC results – and hence TC results – could be corrected to represent particulate OC only
(Birch, 2016).
Figure 3.1 Three sampling trains to collect particulates in different size ranges.
The samples were analyzed using the NIOSH 5040 method. To prepare samples for the
analysis, two punches (1.5 cm2) were taken from each primary filter and a single punch was
taken from each secondary filter. One of the primary filter punches and the secondary filter
punches were analyzed directly using a Sunset Laboratory Inc. Lab OC-EC Aerosol Analyzer
(Tigard, OR). The other primary filter punches were acidified prior to 5040 analysis in order
to remove carbonate carbon per the method described by Birch (2016). Briefly,
approximately 25 mL of 37% HCl was poured into a glass petri dish placed in the bottom
of a desiccator equipped with a ceramic tray and lid – all of which was located in a fume
hood for proper ventilation. Wetted pH paper (i.e., using deionized water) indicated when the
desiccator environment had sufficient acid vapor (i.e., pH of about 2), and then the filter
punches were placed into the desiccator on the ceramic tray. They remained there for about 1
hour, and then they were removed and placed under the fume hood for 1 hour to allow any
remaining acid to volatilize. Care was taken to transfer the punches onto and off of
the tray with clean tweezers, in order to minimize disturbance of the particulates and avoid
contamination between filters.
The 5040 analyzer outputs the amount of OC, EC and TC in each sample as µg/cm2. On the
primary filter punches that were not acidified, the OC (and hence TC) results were not
corrected for carbonate carbon (i.e., using its thermogram peak) such that results reported
here include this carbon and therefore appear relatively high. On the acidified punches, the
carbonate carbon was removed by the acid prior to 5040 analysis, so reported OC and TC
have been corrected. As mentioned above, all OC results were corrected using their
corresponding secondary filter such that only particulate OC is reported. In order to calculate
the concentration of each constituent (OC, EC or TC) in the sampled environment (i.e., as
µg/m3), these mass per filter punch area results were converted using the total filter area (i.e.,
8.5 cm2), the sampling flow rate, and the sampling time.
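As a sketch of that conversion (function name and the 8-hour sampling time are hypothetical; the 8.5 cm² deposit area and 1.7 LPM flow rate come from the text above):

```python
def areal_to_airborne(ug_per_cm2, filter_area_cm2=8.5,
                      flow_lpm=1.7, time_min=480.0):
    """Convert a 5040 result (ug per cm^2 of filter) to an airborne
    concentration (ug per m^3).

    mass on filter (ug) = areal loading x deposit area
    air volume (m^3)    = flow (L/min) x time (min) / 1000
    """
    mass_ug = ug_per_cm2 * filter_area_cm2
    volume_m3 = flow_lpm * time_min / 1000.0
    return mass_ug / volume_m3

# e.g., 5 ug/cm^2 collected over a hypothetical 8-hour sample
print(round(areal_to_airborne(5.0), 1))  # ~52.1 ug/m^3
```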
2.3 TEM sample collection and analysis
In each sampling location, ambient particulates were sampled for later analysis by
transmission electron microscopy (TEM). For this, the ESPnano electrostatic precipitator
mentioned above was used. This device operates at a very low flow rate of 100 cc/min and
the sampling time is programmed by the user depending on expected particulate concentrations
in the sampling environment (Miller et al., 2010). Preliminary tests indicated that sampling
for several minutes (i.e., about 200s) was sufficient for collecting enough particles for TEM
analysis, but not overloading the TEM grid. Samples were collected onto 400 mesh copper
grids with an ultrathin carbon film on lacey carbon support (Ted Pella Inc., Redding, CA).
Figure 3.2 shows the ESPnano’s sample collection “key” with a TEM grid mounted.
Figure 3.2 ESPnano key with an affixed TEM grid for sample collection.
TEM analysis was conducted on a JEOL 2100 instrument, which is a thermionic emission
microscope with a high resolution pole piece (JEOL Ltd., Akishima, Tokyo, Japan). It is
equipped with a large solid angle EDS detector, manufactured by JEOL. For each sampling
location, the aim was to qualitatively assess the grid samples for particle loading and variety
and then to identify 15-20 particles. Following initial analysis on particles from Location 2, it
was clear that the opportunity to observe DPM and dust attachment was most likely in this
location (i.e., near the crusher) so additional grids – again collected during regular mine
production activities – were analyzed from there. In total, 10 samples were analyzed and
TEM work was limited to about 2 hours on each.
To select particles for identification, the strategy was to begin analysis in the upper left
quadrant of a grid at about 50,000x magnification, and gradually move from left to right and
top to bottom of the sample (Figure 3.3). Then, about three particles were selected for
identification and analysis at higher magnification before moving to another low-
magnification frame of view. Since the objective of this work was to assess the possibility of
DPM and dust attachment, particles suspected to be dust were prioritized for analysis over
With respect to DPM, the highest 5040 EC concentrations were observed in Location 1,
followed by Location 2 and then Location 3 (results from acidified samples shown in Figure
3.5). This is consistent with expectations considering the mine activities in the vicinity of
each sampling location. Significant differences were generally not observed among the
5040 EC results in the three size ranges sampled. There was, however, substantial variability among the
triplicate results. As this occurred across all size ranges and on both acidified and non-
acidified samples (Figures 3.5 and 3.6), it is most likely related to spatial variability in the
sampled environments rather than factors associated with sampling equipment (e.g., cassette
types, specific pumps) or mine dust interference. Spatial variability is indeed a well-known
issue for collection of airborne particulate samples in mine environments (e.g., see Kissell
and Sacks, 2002 and Vinson et al., 2007).
The fact that total, respirable and submicron EC concentrations were observed to be similar
for all sampling locations indicates that, on a mass basis, the study mine simply does not
have considerable DPM that occurs in the supramicron range. This finding is contrary to
most field reports by others (e.g., Vermeulen et al., 2010), which have shown significant
supramicron DPM in mines (i.e., using EC as a surrogate) – though some other reports have
also shown that most DPM resides in the submicron range (e.g., Maximilien et al., 2017).
Variability in the ratio between submicron and respirable EC (or TC) in different mines is
likely related to specific equipment or operating conditions. Exhaust after-treatment
technologies like DPFs, for instance, are known to effectively change the particle size
distribution of DPM (Lee et al., 2002; Bugarski et al., 2009).
Regardless, the results presented here could support respirable (instead of submicron) TC as
a surrogate for DPM in mine environments where the primary mineral dust interference of
concern is from carbonates. In this case, carbonate removal by acidification or analytically by
integration of the carbonate peak on the 5040 thermogram would be necessary. But such an
approach would allow for both removal of carbonate dust interference and accounting for the
DPM that would otherwise be missed by submicron sampling. Furthermore, the results
presented here add to a number of others that suggest use of EC (rather than TC) as a DPM
surrogate in mines, based on the ability to more easily measure EC and the possibility of TC
interferences from non-DPM sourced OC (e.g., see Noll et al., 2006; Noll et al., 2007; Noll et
al., 2014).
For diesel exhaust exposure assessments in non-metal mines, Vermeulen et al. (2010) also
concluded that respirable EC is an appropriate analytical surrogate. They noted that, due to a
strong observed correlation between respirable and submicron EC in their study mines (i.e.,
median submicron EC to respirable EC ratio of 0.77 with Pearson coefficient of 0.94), either
quantity could be a suitable surrogate. However, the fact that submicron and respirable EC
have a much different ratio in the current study (i.e., they are about equal, but still well
correlated) highlights the favorability of respirable EC – or the need to determine a mine-
specific submicron to respirable ratio if the submicron surrogate is to be used. This way,
supramicron DPM is not missed by sampling efforts, or can at least be accounted for using a
mine-specific correction factor.
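A minimal sketch of such a correction (hypothetical function name; the 0.77 default is Vermeulen et al.'s median ratio, standing in only as a placeholder for a mine-specific value):

```python
def respirable_ec_estimate(submicron_ec, sub_to_resp_ratio=0.77):
    """Scale a submicron EC measurement (ug/m^3) to an estimated
    respirable EC using a mine-specific submicron-to-respirable ratio.
    """
    return submicron_ec / sub_to_resp_ratio

# e.g., 80 ug/m^3 submicron EC implies ~104 ug/m^3 respirable EC
print(round(respirable_ec_estimate(80.0), 1))  # 103.9
```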
FOUR-DIMENSIONAL PASSIVE VELOCITY TOMOGRAPHY OF A LONGWALL PANEL

Kramer Davis Luxbacher

ABSTRACT
Velocity tomography is a noninvasive technology that can be used to determine
rock mass response to ore removal. Velocity tomography is accomplished by
propagating seismic waves through a rock mass to measure velocity distribution of
the rock mass. Tomograms are created by mapping this velocity distribution. From
the velocity distribution, relative stress in the rock mass can be inferred, and this
distribution can be mapped at specific time intervals.
Velocity tomography is an appropriate technology for the study of rockbursts.
Rockbursts are events that occur in underground mines as a result of excessive strain
energy being stored in a rock mass and sometimes culminating in violent failure of
the rock. Rockbursts often involve inundation of broken rock into open areas of the
mine. They pose a considerable risk to miners and can hinder production
substantially.
The rock mass under investigation in this research is the strata surrounding an
underground coal mine in the western United States that utilizes longwall mining. The
mine has experienced rockbursts. Seismic data were collected over a nineteen-day
period, from July 20th, 1997 to August 7th, 1997, although only eighteen days were
recorded. Instrumentation consisted of sixteen receivers, mounted on the surface,
approximately 1,200 feet above the longwall panel of interest. The system recorded
and located microseismic events, and utilized them as seismic sources.
The data were analyzed and input into a commercial program that uses an
algorithm known as simultaneous iterative reconstruction technique to generate
tomograms. Eighteen tomograms were generated, one for each day of the study. The
tomograms consistently display a high velocity area along the longwall tailgate that
CHAPTER 1: INTRODUCTION
Underground coal mining has seen drastic improvements in productivity with the
advent of longwall mining. However, unique hazards and risks are associated with
underground coal mining, and one of the foremost challenges related to these hazards
is roof characterization and control. Quantifying stress redistribution in a rock mass
is complicated as it relies on both the properties of the rocks that compose the mass
and the structure of the rock mass. Tomographic imaging of stress redistribution
underground has been achieved with some success, but direct application to
production and safety has been limited.
Imaging of stress redistribution in underground coal mines is of paramount
importance in understanding failure mechanisms of mine roof. Roof failure occurs
on all scales from localized falls to large rockbursts. Rockbursts are sudden and
violent failures of overstressed rock ("30 C.F.R. §57.3461" 2005) that result in the
release of large amounts of energy, often causing expulsion of material or airblasts.
Rockbursts pose a considerable danger to miners and can result in extensive
production delays. If the stress redistribution associated with these failures can be
imaged and characterized, this could eventually lead to prediction of rockbursts.
In 2004, 5,054 recordable accidents were reported in underground coal mines in the
United States. Of these, 1,627, or 32%, were due to fall of roof or rib. Lost time
accidents due to fall of roof or rib in the United States underground coal industry
averaged 57 days lost per miner injured (MSHA 2005a). At 4.04 tons per man hour
(NMA 2005) this equates to a significant loss in production. Additionally, fall of roof
or rib in underground mines accounted for 19% of coal mining fatalities, both
underground and surface, between January 1st, 2001 and November 1st, 2005 (MSHA
2005c). This accident data is summarized in Figure 1.1:
Figure 1.1. Fatal Accidents in Coal Mining in the United States* (number of fatal accidents per year, 2001 through 2005 YTD; series: Fall of Roof or Rib vs. Other Fatalities).
Velocity tomography has been utilized as a method for inferring stress distribution
in rocks, both in the laboratory and in mines, but has yet to yield comprehensive
understanding of the phenomenon. Monitoring of coal mine roof in the past has
relied on localized measurement of the roof, with inferences being made about the
state of stress over a large area. Tomography has the unique ability to probe and
image a large area of a mine, noninvasively.
Rock mass tomography involves propagating energy through the rock mass, and
measuring quantitative parameters of the energy. In this case, seismic waves are
propagated through the rock mass and their travel times are measured. The velocities
resulting from these travel times are used to infer information about the state of stress
in the rock mass.
The research presented involves generating tomograms for 18 days of production at
an underground longwall coal mine in the western United States. The data utilized
for generating the tomograms were collected from a microseismic event location
* YTD refers to November 1, 2005.
CHAPTER 2: LITERATURE REVIEW
Utilization of seismic velocity tomography in underground mines requires an
understanding of rock mechanics and tomography. Stress redistribution in
underground mines results from the removal of ore. Rock and fracture mechanics
provide an understanding of how stress is transferred and how rock fails. However,
application of rock mechanics theory is limited by a priori knowledge of the rock
mass.
Velocity tomography allows for the entire rock mass to be explored in a
noninvasive way by relating p-wave velocity to the elastic properties of rock and
inferring the stress state of the rock mass. However, tomography only provides a
model of the solid being imaged. An understanding of rock mechanics and expected
stress redistribution in an underground mine is essential in evaluating tomograms.
2.1 Failure of Rock
2.1.1 ROCK MECHANICS
A brief review of rock mechanics is instrumental in understanding stress
redistribution in an underground mine. Stress in a mine is caused by various
phenomena. Gravitational stress, tectonic stress, and thermal stress may all be
present in an underground mine (Herget, G. 1988).
Stress is defined as a force over an area. Newton’s second law defines force as equal
to mass multiplied by acceleration. In order to determine gravitational stress in a
mine, the overburden may be divided into columns of material. The mass of the
column multiplied by gravitational acceleration, 9.8 m/s2 (32.2 ft/s2), is the force
acting on the area. The force divided by the cross-sectional area of the column
gives the vertical stress component due to gravity, $\sigma_v$, in the area. In integral form
$\sigma_v$ may be defined as follows:
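In standard form, with $\rho(z')$ the overburden density at depth $z'$ and $g$ gravitational acceleration (a sketch; Herget's exact notation may differ):

$$\sigma_v = \int_0^z \rho(z')\,g\,dz'$$

For constant density this reduces to the familiar $\sigma_v = \rho g z$.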
Of course, the perfectly elastic requirement is never met in situ, and the horizontal
component is often estimated from site measurements.
A schematic illustrating determination of Poisson’s ratio in a laboratory specimen
is shown in Figure 2.1 for a load P applied parallel to the long axis of a cylindrical
sample. Typical values for Poisson’s ratio are in the range of 0.15 to 0.35 (Herget, G.
1988).
Tectonic stresses result from the movement of plates in the earth, and can vary
regionally. They may create a horizontal stress component, which, when added to the
gravitational horizontal stress component, can exceed the vertical stress component
(Herget, G. 1988). Kelly and Gale found that in many Australian coal mines the
principal horizontal stress component was as much as 2.5 times the vertical stress
component (Kelly, M. and W. Gale 2000).
Stress may also be caused by temperature change in rock. Very deep mines may
experience thermal expansion of rock. According to Herget, the linear coefficient of
thermal expansion in sandstone is 10.8 × 10⁻⁸ m per 1°C (1988). However, in the
United States, most coal mines are not deep enough to experience substantial thermal
stress.
2.1.2 FRACTURE MECHANICS
Rocks generally exhibit two distinct failure behaviors, elastic-plastic and elastic-
brittle behavior (Blès, J. L. and B. Feuga 1986). In order to define elastic-plastic and
elastic-brittle behavior, strain must first be defined. Strain, ε, refers to the compression
or the extension of rock resulting from the application of force to the body, divided by
the original dimension of the rock. For example, strain in a cylindrical rock sample
refers to the change in length of the sample when pressure is applied parallel to the
long axis, so:
$$\varepsilon = \frac{\Delta L}{L} \quad (\text{unitless}) \qquad [2.3]$$
(Peng, S. S. 1986).
In 1980, Hoek and Brown presented the following criteria for peak triaxial strength:
$$\sigma_1 = \sigma_3 + \sqrt{m\,\sigma_c\,\sigma_3 + s\,\sigma_c^2} \quad (\text{psi}) \qquad [2.9]$$

where $\sigma_c$ = uniaxial compressive strength of the rock material.
The factor, m, is dependent on mineralogy, composition and grain size while s is
dependent on tensile strength and degree of fracturization (Hoek, E. and E. T. Brown
1980, Herget, G. 1988, Edelbro, C. 2004). By including s and m Hoek and Brown
were attempting to present a criterion that could be used to characterize a relatively
large rock mass (Hoek, E. and E. T. Brown 1980). However, for a nonhomogenous
rock mass, calculation is still cumbersome.
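As a quick numerical illustration of Equation 2.9 (all input values invented for the example, not drawn from the sources cited above):

```python
import math

def hoek_brown_sigma1(sigma3, sigma_c, m, s):
    """Peak axial strength from the 1980 Hoek-Brown criterion (Eq. 2.9).

    sigma3  -- confining stress (psi)
    sigma_c -- uniaxial compressive strength of intact rock (psi)
    m, s    -- empirical Hoek-Brown constants
    """
    return sigma3 + math.sqrt(m * sigma_c * sigma3 + s * sigma_c ** 2)

# Illustrative values: an intact, sandstone-like material
# (sigma_c = 10,000 psi, m = 15, s = 1.0) confined at 1,000 psi.
print(round(hoek_brown_sigma1(1000.0, 10000.0, 15.0, 1.0)))  # ~16811 psi
```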
These criteria were presented to give some concept of the relationships between
stress and failure. The criteria can be applied with success in the laboratory, but in
the field it is more difficult to quantify stress behavior and failure over a large rock
mass, so often various rock mass quality designations are used. However, rock mass
quality designations do not quantify stress behavior or failure characteristics, they
only characterize the state of the rock mass. These designations include the Rock
Quality Designation, RQD (Deere, D. U. 1964), the Rock Mass Quality Index, Q
(Barton, N. 1987), the Rock Mass Rating system, RMR (Bieniawski, Z. T. 1989), the
Coal Mine Roof Rating, CMRR (Molinda, G. M. and C. Mark 1994), and various
other empirical relationships.
In 1921 Griffith proposed a failure envelope for glass. The failure envelope has
little practical application to rock mechanics as it applies strictly to brittle materials
(Edelbro, C. 2004), but his theory did explain fracture propagation in rocks. Griffith
hypothesized that fracture occurs when the maximum tangential stress near the end of
a microfracture exceeds material strength (Griffith, A. A. 1921). The resulting
merger of these microfractures forms damage zones and causes stress redistribution,
which can lead to micro- and macro-failure (Young, R. P. and D. S. Collins 2001).
Microfracture opening and closing is instrumental in failure mechanisms of rock. As
a rock is stressed, existing microfractures are closed under pressure, and as the rock
approaches failure the microfractures tend to merge, eventually leading to
macrocracking and ultimate failure. The stage when the microfractures first close is
denoted by the initial nonlinearity in the stress-strain curve (Thill, R. E. 1973).
2.1.3 ROCK CHARACTERISTICS AND WAVE PROPAGATION
Rocks can be examined noninvasively by propagating ultrasonic or seismic waves
through them, which can provide information about the structure and elastic
properties of a rock (Jackson, M. J. and D. R. Tweeton 1994). This technique can be
applied on a small scale in the laboratory or on a much larger scale in a mine. Wave
propagation through a rock mass is dependent on many characteristics of the rock
mass including rock type, fracture, anisotropy, porosity, stress, and boundary
conditions.
First, a brief review of wave refraction, including Fermat’s principle and Snell’s
Law, is essential in understanding the path a wave takes through a rock mass.
Fermat’s principle was originally applied to a beam of light by the French
mathematician Pierre de Fermat in 1657, and states, “The actual path taken between
two points by a beam of light is the one which is traversed in the least time”
(Mahoney, M. S. 1973). Fermat’s principle also applies to sound waves. This
principle is relevant to wave propagation in rock masses, because a rock mass is
rarely homogenous, so the fastest path for a wave is seldom a straight line. The
schematic in Figure 2.4 illustrates the principle, by showing that the fastest path from
A to B is not necessarily the shortest.
Figure 2.4. Fermat’s Principle.
Snell’s law, named for Willebrord Snell, who discovered it in 1621, is derived from
Fermat’s Principle and describes the relationship between the angle of incidence and
angle of refraction. Snell’s law is displayed in Equation 2.10:
$$n_i \sin\theta_i = n_r \sin\theta_r \qquad [2.10]$$

where $n$ = the index of refraction $= c/v$, $c$ = the speed of sound (ft/s), $v$ = phase velocity (ft/s), $\theta_i$ = the angle of the incident wave from the normal (degrees), and $\theta_r$ = the angle of the refracted wave from the normal (degrees).
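Since $n = c/v$, Snell's law is often applied directly with layer velocities; a small sketch (velocities invented for illustration):

```python
import math

def refracted_angle_deg(v_i, v_r, theta_i_deg):
    """Refracted-ray angle from Snell's law written with velocities:
    n = c/v, so n_i sin(theta_i) = n_r sin(theta_r) is equivalent to
    sin(theta_i)/v_i = sin(theta_r)/v_r.
    """
    s = math.sin(math.radians(theta_i_deg)) * v_r / v_i
    if abs(s) > 1.0:
        return None  # beyond the critical angle: total reflection
    return math.degrees(math.asin(s))

# Illustrative: a wave passing from a ~10,000 ft/s layer into a
# faster ~14,000 ft/s layer at 30 degrees incidence bends away
# from the normal.
print(round(refracted_angle_deg(10000.0, 14000.0, 30.0), 1))  # ~44.4
```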
When examining seismic waves in a solid, four types of waves are considered: p-
waves, s-waves, Rayleigh waves, and Love waves. P-waves and s-waves are both
body waves; they travel through the medium. Rayleigh waves and Love waves are
surface waves; they only travel along the free surface of an elastic body (Sharma, P.
V. 1986).
P-waves are also known as longitudinal waves or primary waves. As p-waves
propagate through a medium, the particles of the medium expand and contract. The velocity
of p-waves, $V_p$, is:
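For an isotropic elastic medium the standard expression, reconstructed here in terms of Young's Modulus $E$, Poisson's ratio $\nu$, and density $\rho$ (presumably the Equation 2.11 recalled in the next section, which ties $V_p$ to Young's Modulus and density), is:

$$V_p = \sqrt{\frac{E(1-\nu)}{\rho(1+\nu)(1-2\nu)}} \qquad [2.11]$$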
(Sharma, P. V. 1986). P-waves and s-waves are most often used in geophysical velocity
investigations.
In 1923 Adams and Williamson studied the compressibility of a number of rocks
and found that, in general, the compressibility of rocks falls off as pressure is
increased. Compressibility is defined as the reciprocal of the bulk modulus, K
(Goodman, R. E. 1989), displayed in Equation 2.5, so it follows that the elastic
moduli (and, for a given Poisson’s ratio, Young’s Modulus) increase steadily at low
pressures and then flatten out at higher pressures. Recalling Equation 2.11 and
assuming that density increase is negligible,
an increase in Young’s Modulus will result in increasing p-wave velocity for lower
pressures, and a plateau at higher pressures.
An increase in p-wave velocity in rocks with application of pressure is attributed to
the closure of cracks and pore space (Wyllie, M. R. J., et al. 1958, Thill, R. E. 1973,
Toksöz, M. N., et al. 1976, Seya, K., et al. 1979, Young, R. P. and S. C. Maxwell
1992). Open pores and microfractures will either diffract seismic waves or cause a
decrease in velocity as the wave travels through the open space. Most rocks show
some decrease in porosity with pressure and an increase in p-wave velocity, with the
exception of some rocks such as dolomite with a high matrix density (Yale, D. P.
1985). Generally, the p-wave velocity gradient is highest at low pressures and then
begins to level out at higher pressures (Prasad, M. and M. H. Manghnani 1997).
Velocity can be used to infer stress distribution, but it is important to note that the
relationship is not linear.
Toksöz indicates that saturation and pore fluid also affect velocity, mainly because
the waves will travel through the medium that fills the pore space. He notes higher
velocities for brine saturation than for gas saturation (1976). Clay content has also
been shown to influence p-wave velocity, but to much less of a degree than porosity
(Tosaya, C. and A. Nur 1982).
2.2 Stress Behavior in Underground Mines
2.2.1 FACTORS CONTRIBUTING TO STRESS REDISTRIBUTION
Herget likens stress around an opening to laminar flow distribution as illustrated in
Figure 2.6. He indicates that there will be a crowding of stream lines at the sides of
an obstacle and a slowing in front of and behind the obstacle.
Figure 2.6. Principal Stress Trajectories around an Opening (Herget, G. 1988).
Stress redistribution around excavated areas results in regions of tensile and
compressive stress. The following parameters will influence the excavation damage
zone (Martino, J. B. and N. A. Chandler 2004):
1. In situ stress magnitudes, orientations, and ratios
2. Shape of the tunnel
3. The excavation method (blast or cut)
4. Geology
5. Environmental factors
6. Nearby excavations
2.2.2 ABUTMENT STRESS
Abutment stress is a result of stress redistribution due to the extraction of ore, and
occurs along or near the boundary where material has been removed (Peng, S. S. and
H. S. Chiang 1983). An undisturbed coal seam with competent roof and floor strata
will have a fairly uniform stress distribution. As coal is removed this distribution is
disrupted and the load is either transferred to another intact area or failure occurs. In
longwall mining this stress is transferred immediately in front of the face, and to the
sides of the panel (headgate and tailgate). Failure of the roof strata behind the
longwall shields allows for pressure relief.
Very competent strata above a longwall system, such as massive sandstone, may
not cave immediately, contributing to extremely high abutment stress in front of the
face which can result in rockbursts on the face, or damage to shields due to rapid
dynamic loading (Haramy, K. Y., et al. 1988). Kneisley and Haramy indicated that a
fast retreat rate may promote caving so that excess time-dependent loading ahead of
the face may be avoided (Kneisley, R. O. and K. Y. Haramy 1992). Kelly and Gale
also refer to time dependent loading indicating that production delays can lead to
convergence of shields and roof failure at the face (Kelly, M. and W. Gale 2000).
The exact distribution of the abutment load is dependent upon the properties of the
roof strata and the mining geometry, but general stress abutment schematics are
displayed below in Figure 2.7. In Figure 2.7, the red line indicates approximate
relative stress.
As illustrated in Figure 2.7, abutment stress is usually larger on the tailgate, if it is
adjacent to a mined-out panel. Front abutment pressure is detectable at a lateral
distance of about one times the overburden depth, but is more evident about 100 feet
outby the face, at which point stress starts to increase to its peak at 3 to 20 feet outby the
face. In weak roof, maximum abutment stress along the faceline occurs at the
headgate and tailgate corners, but in more competent roof a peak may occur mid-face
depending upon the face length (Peng, S. S. and H. S. Chiang 1983).
In addition to vertical stress redistribution, joints, faults and horizontal stress
orientation may contribute to larger abutment stresses and more erratic failure. Even
in optimum conditions gob failure is rarely uniform (Maleki, H. 2002).
2.2.3 ROCKBURSTS
Rockbursts, also referred to as bumps, mountain bumps, air blasts, bounces, and
bursts, are violent and sudden ground failures that cause expulsion of material into
excavated areas. They are accompanied by a seismic tremor and the expelled
material can range from less than a ton to hundreds of tons. They may also be
accompanied by floor heave and roof falls (Bräuner, G. 1994).
In coal mines, rockbursts almost always occur where the roof or the roof and floor
are massive and competent. Additionally, they generally occur at depths greater than
1,000 feet, although isolated bursts have been recorded in more shallow mines
(Bräuner, G. 1994, Ellenberger, J.L. and K. A. Heasley 2000).
Rockbursts are the result of stored strain energy being released at the time of rock
failure. The stored strain energy, $W_0$, is calculated as follows (Herget, G. 1988):

$$W_0 = \frac{1}{2}\,\frac{Q_0^2}{E} \qquad [2.13]$$

where $W_0$ is per unit volume (inch-pound force/inch³), $Q_0$ = failure strength (psi), and $E$ = Young's Modulus (psi).
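A quick illustration with round numbers (invented for the example): a strong rock with $Q_0 = 20{,}000$ psi and $E = 3\times10^6$ psi stores

$$W_0 = \frac{1}{2}\cdot\frac{(20{,}000)^2}{3\times10^6} \approx 67\ \text{inch-pound force/inch}^3,$$

four times the energy density of a rock with half that failure strength at the same modulus, since $W_0$ scales with $Q_0^2$.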
Equation 2.13 explains why violent rockbursts often occur in coal seams overlain
by massive sandstone roof. The high failure strength of the sandstone allows for
large values of stored strain energy, so that when failure occurs it may be
catastrophic.
Aside from the danger to miners of flying material, rockbursts pose other serious
hazards. They may be accompanied by an airblast which can disrupt mine
ventilation. Also, in coal mines, the violent failure of the coal seam may propagate
float dust through the air, along with a release of methane, which promotes an explosive
atmosphere (Bräuner, G. 1994).
2.3 Stress Analysis in Mines
2.3.1 NUMERICAL METHODS
Numerical stress analysis methods have found widespread application in rock
mechanics and mine stress modeling. There are many types of numerical modeling,
but most routines fit into one of the following classifications: finite element methods;
boundary element methods; discrete element methods; or some combination of the
three.
Finite element methods are continuum methods and can be used for any process
that is governed by a differential equation. The structure, a rock mass, for instance, is
divided into elements that are connected at nodes. The displacement at the nodes can
be calculated and related to strain and stress (Pande, G. N., et al. 1990).
Boundary element methods are also continuum methods, and only the surface of
the body is divided into elements. In a rock mass this would be the outside of the
rock mass, and any interface where material properties change. This method is very
efficient for homogenous and linear elastic behavior in rocks, but is not as flexible as
the finite element method (Pande, G. N., et al. 1990).
The discrete element method involves discretizing the body into elements of
practically any shape and assigning the elements material and contact properties. The
contact relationships between elements are monitored with time, and the equations of
dynamic equilibrium for each element are solved to meet the requirements for contact
and boundary conditions. Unlike boundary element and finite element methods,
discrete element is a discontinuum method. This method can be computationally
expensive and requires careful selection of material behavior (Pande, G. N., et al.
1990).
LAMODEL (Laminate Model) is a boundary element, displacement-discontinuity
routine that calculates stresses and displacements in thin, tabular seams. It simulates
the overburden as a stack of homogenous isotropic layers with the same Poisson’s
ratio and Young’s Modulus, and with frictionless interfaces. LAMODEL is available
through the National Institute for Occupational Safety and Health (NIOSH) and has
been used extensively for stress modeling (Bauer, E. R., et al. 1997, Ellenberger, J.
L., et al. 2003, Zingano, A. C., et al. 2005).
UDEC, FLAC2D, and FLAC3D (Fast Lagrangian Analysis of Continua), commercial
programs available through Itasca, have also been implemented in a number of
studies (Badr, S., et al. 2003, Gale, W. J., et al. 2004, Vandergrift, T. L. and J. Garcia
2005, Zingano, A. C., et al. 2005). FLAC is a continuum code utilizing finite
difference formulation. Other codes used to model mine behavior include BESOL
(Karabin, G. J. and M. A. Evanto 1999), MUDEC (Haramy, K. Y., et al. 1988), and
Free Hexagonal Element Method (Procházka, P. P. 2002).
2.3.2 MICROSEISMIC MONITORING
A microseismic event is a subaudible seismic event produced by a rock under
stress, and characterized by short duration and small amplitude (Obert, L. and W. I.
Duvall 1967). Microseismic event locations tend to advance with face advance in
longwall mining, and rate of advance has been found to be related to microseismic
event frequency (Ellenberger, J. L., et al. 2001). Additionally, microseismic event
location tends to coincide with peak abutment stress location, suggesting that the
events are the result of stress redistribution in the mine (Heasley, K. A., et al. 2001).
Microseismic event monitoring has been implemented as a predictor of roof failure.
For example, a study at Moonee Colliery in Australia revealed an increase in event
frequency prior to roof failure, which allowed miners to be warned of possible
failure (Iannacchione, A., et al. 2005).
2.4 Tomography
2.4.1 INTRODUCTION
The word tomography is derived from the Greek word tomos, which means to slice
or section (Webster's Third New International Dictionary, Unabridged 2002).
Tomography involves the noninvasive imaging of a solid body; the body can be a
manmade structure, a human body, or a geologic structure. Tomographic imaging
can be conducted on practically any scale. Tomography involves dividing the body in
question into grid cells in a two-dimensional situation or cubes called voxels in a
three-dimensional situation, with the goal of estimating some characteristic value of
the solid for each cell, so that a complete image can be generated (Cox, M. 1999).
2.4.2 APPLICATIONS
Applications of tomography are extensive; tomography is utilized in medicine,
geology, mining, structural investigations, and fluid flow processes.
Tomography has been widely employed in medicine for diagnostic purposes.
Computer axial tomography (CAT scans), nuclear magnetic resonance imaging
(MRI), and positron emission tomography (PET), are all diagnostic technologies that
utilize tomography. Medical tomography allows physicians to noninvasively
examine the inside of the human body and detect anomalies.
Tomography has additional implications for structural imaging and materials
science. For example, x-ray computed tomography has been utilized to determine air
void distribution in asphalt samples, which can be used to characterize roadway wear
(Masad, E., et al. 2002). X-ray tomography has also been used to image flaws in
turbine blades (Bronnikov, A. V. and D. Killian 1999), while guided ultrasonic wave
tomography has been used for determination of flaws in composite materials used for
aerospace structures (Leonard, K. R., et al. 2002).
In fluid flow processes, MRI can be used to determine liquid to solid ratios, fluid
flow mechanics, and to image chemical reactions (Hall, L. D. 2005). Additionally,
cross-flow in pipes has been imaged using ultrasonic waves (Rychagov, M. N. and H.
Ermert 1996).
Finally, tomography has been employed extensively in mining and geology.
Hoversten and others used electromagnetic tomography for reservoir visualization
(Hoversten, G. M., et al. 2001), while seismic tomography has been used to image
contaminant flow in sand models (McKenna, J., et al. 2001). Tomography has
applications in exploration as it is useful for imaging geologic structures and ore
bodies (Bellefleur, G. and M. Chouteau 2001). It can be used to detect voids near
active mines in order to avoid unexpected inundation of gas or water (Maillol, J. M.,
et al. 1999). Also, velocity transmission tomography has been used as an indicator of
stress in underground mines (Kormendi, A., et al. 1986, Maxwell, S. C. and R. P.
Young 1996, Friedel, M. J., D. F. Scott. and T. J. Williams 1997, Maxwell, S. C. and
R. P. Young 1998, Westman, E. C. 2004).
2.4.3 VARIATIONS OF TOMOGRAPHY
A number of methods have been employed to collect the data that is used to
generate a tomogram. All methods take advantage of some characteristic of the solid
being imaged, including electrical resistivity and conductivity, flow characteristics,
molecular response to magnetism, and p- and s-wave velocity.
Electrical resistance tomography uses electrodes to measure electrical resistivity.
Resistivity is dependent upon chemical, hydraulic, and thermal components of a solid
(Daily, W., et al. 2004). Electromagnetic tomography has been accomplished through
use of natural electromagnetic waves to characterize fluid flow and content in faults
(Bedrosian, P. A., et al. 2004).
Positron emission tomography is a tracer method. It involves using a tracer that
emits positrons as it decays; the positron emission is then measured and imaged. It is
useful for fluid flow in rocks (Degueldre, C., et al. 1996) and has extensive
applications in the medical field, including imaging of the brain (Degueldre, C., et al.
1996, Ishiwata, K., et al. 2005).
Nuclear magnetic resonance imaging, more commonly known as magnetic
resonance imaging, relies on the measurement of relaxation of hydrogen nuclei
contained in water. It requires placing the solid being imaged in a uniform magnetic
field, then applying pulses of electromagnetic energy, which excite hydrogen nuclei.
As the hydrogen nuclei relax back to their normal state they emit energy, which is the
parameter measured to create the tomogram (Baraka-Lokmane, S., et al. 2001).
Travel time tomography can be accomplished using ultrasonic or seismic waves. A
schematic showing approximate frequency intervals of various waves is displayed in
Figure 2.8:
Figure 2.8. Frequency of Waves.
The wavelength used must be small enough to resolve the structure being imaged,
but there must also be adequate energy to propagate the length of the medium being
imaged with sufficient strength. It is generally agreed that resolution of a tomogram
is dependent upon wavelength (Watanabe, T. and K. Sassa 1996, Scott, D. F., et al.
1997, Watanabe, T., et al. 1999). If ray density is sufficient, however, Friedel
indicates that it is possible to resolve to one-half wavelength (Friedel, M. J., et al.
1996), and it has also been found that the 1st Fresnel zone radius is a good
order-of-magnitude estimator (Williamson, P. R. 1991). A Fresnel zone is the zone of
influence around the path of a surface wave, and is dependent upon velocity and path
length (Yoshizawa, K., et al. 2005).
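For a straight path of length $L$ and wavelength $\lambda$, a common approximation for the radius of the first Fresnel zone at the midpoint of the path (stated as general background, not taken from the cited sources) is

$$r_1 = \frac{1}{2}\sqrt{\lambda L},$$

which is why the Fresnel radius serves as an order-of-magnitude estimate of the smallest resolvable feature.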
Seismic tomography is generally limited to diffraction, attenuation, p-wave
velocity, s-wave velocity, or some combination of the four. Diffraction of a wave
occurs when a wave meets a discontinuity in a solid and scatters (Schlumberger
2005). Diffraction tomography requires examination of the scattered wave field, and
is often much more computationally expensive and difficult to calculate than travel
time transmission tomography (Williamson, P. R. 1991, Jackson, M. J. and D. R.
Tweeton 1994). Diffraction tomography is based on the wave equation while
transmission tomography is dependent on the ray equation (Lo, T-W., et al. 1988).
Also, diffraction tomography is most useful in relatively homogenous materials
(Goulty, N. R. 1993). A feature of diffraction tomography is that it provides a
qualitative image of velocity contrasts while ray tomography provides a quantitative
image of velocity contrasts (Pratt, R. G. and N. R. Goulty 1991).
To produce a tomogram using attenuation tomography the amplitude at the source
and at the receiver, and the travel time must be measured. The signal decline between
the source and receiver is the attenuation. Weathered and cracked rocks have a
higher attenuation than intact rock. Attenuation imaging is more sensitive to cracking
than velocity imaging (Lockner, D. A., et al. 1977, Watanabe, T. and K. Sassa 1996).
Figure 2.9 illustrates wave attenuation.
Figure 2.9. Attenuation of a Wave (Westman, E. C. 2004)
P-wave travel time tomography is often desirable due to the relative simplicity and
accuracy of determining the arrival time. Manthei indicates that determining the s-
wave arrival time is much more uncertain than for p-waves (1997). Yet, the ability of
p-wave travel time tomography to image low velocity anomalies is limited. Wielandt
found that diffracted waves interfere with transmitted waves for low velocity
anomalies, and that the observed travel time is not indicative of the relative velocity
in this instance. He also indicated that p-waves travel around the low-velocity
anomalies (Wielandt, E. 1987). Nolet refers to this as the Wielandt effect (Nolet,
Guust 1987, Jackson, M. J. and D. R. Tweeton 1994). Ivansson describes the use of
damping and synthetic tomography analysis to avoid this problem (Ivansson, S.
1985). P-wave velocity tomograms may image high velocity regions fairly well,
while underestimating the low velocity regions (Jackson, M. J. and D. R. Tweeton
1994, Vasco, D. W., et al. 1995).
2.4.4 INVERSE THEORY
The framework for tomography was established by Radon who proved that an
infinite number of rays passing through a two-dimensional object at an infinite
number of angles could be used to perfectly reconstruct the object (Radon, J. 1917).
The theory also applies to three-dimensional objects. If a finite number of rays are
passed though the object then this is referred to as a sample of the Radon transform.
Deans gives an instructive description of the Radon Transform when he describes
using a probe to characterize some internal aspect of a solid (Deans, S. R. 1983). In
the case of velocity tomography presented in the thesis, the probe is a seismic wave
while the solid is the rock mass under examination. The velocity distribution of the
rock mass is an unknown function, f. After probing the rock mass with seismic waves, a velocity profile function, f̌, is determined.
Inverse theory entails making inferences about something from measured data
(Menke, W. 1989). Most inverse theory problems are ill-posed. Hadamard defined
the well-posed problem as follows (Hadamard, J. 1902, Hadamard, J. 1952, Yagola,
A. G., et al. 2001, Mosegaard, K. and A. Tarantola 2002):
- A solution exists.
- The solution is unique.
- The solution depends continuously on the data.
A tomography problem rarely meets these requirements. In fact, inverse problems
often have an infinite number of solutions, with a few solutions that are appropriate in
light of a priori information (Hole, J. 2005). The inverse problem is usually
overdetermined or underdetermined. The overdetermined problem has more data than unknowns, so in general no model satisfies all of the data exactly. For example, in the three-dimensional velocity tomography problem an overdetermined system has more rays than voxels. Conversely, an underdetermined system would have fewer rays than voxels, leaving many models that fit the data equally well (Tarantola, A. 1987, Menke, W. 1989, Manthei, G. 1997).
Velocity tomography is based on the relationship between time, distance, and
velocity of a ray traveling through a medium:
$$v = \frac{d}{t} \;\Rightarrow\; vt = d$$

$$t = \int_S^R \frac{1}{v}\,dl = \int_S^R p\,dl$$

$$t_i = \sum_{j=1}^{M} p_j d_{ij} \qquad (i = 1 \ldots N) \qquad [2.14]$$

where,
v = velocity (ft/s)
d = distance (ft)
t = time (sec)
p = slowness (inverse velocity) (s/ft)
N = number of rays
M = number of voxels
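As a concrete illustration of Equation 2.14 (hypothetical velocities and ray geometry, not values from this study), the travel time of a single ray is the dot product of the voxel slownesses with the ray's path lengths through those voxels:

```python
import numpy as np

# Assumed per-voxel velocities (ft/s); slowness is the inverse.
velocity = np.array([12000.0, 10000.0, 9000.0, 11000.0])
slowness = 1.0 / velocity                  # s/ft

# Assumed path length (ft) of one ray in each voxel.
d_ray = np.array([10.0, 10.0, 0.0, 0.0])

t_ray = np.dot(slowness, d_ray)            # t_i = sum_j p_j * d_ij
print(f"travel time = {t_ray * 1000:.4f} ms")
```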
The velocity, distance, and time for the length of the entire ray is known, but the
velocity, distance and time for the length of the ray in an individual voxel or grid cell
is not known. The distance in each grid cell can be solved for easily, but the time and
velocity are still unknown. Using inverse theory, the time and velocity can be solved
for as follows
$$T = DP \;\Rightarrow\; P = D^{-1}T \qquad [2.15]$$

where,
T = travel time per ray matrix (N × 1), with t_i = travel time of the ith ray
D = distance per ray per grid cell matrix (N × M), with d_ij = distance of the ith ray in the jth pixel
P = slowness per grid cell matrix (M × 1), with p_j = slowness of the jth pixel
Overdetermined and underdetermined problems result in a singular distance matrix,
D, which cannot be inverted (Jackson, M. J. and D. R. Tweeton 1994). A singular
matrix is a matrix with no inverse and a determinant of zero. Take the following
trivial two-dimensional inverse tomography problem as an example:
Table 2.1. Trivial Traveltime Data.
Ray Distance (ft) Arrival Time (ms)
1 20.0 0.073
2 20.6 0.076
3 23.3 0.089
4 22.8 0.084
5 20.6 0.075
6 20.1 0.074
$$T = DP$$

$$\begin{bmatrix} 0.073 \\ 0.076 \\ 0.089 \\ 0.084 \\ 0.075 \\ 0.074 \end{bmatrix} = \begin{bmatrix} 10.0 & 10.0 & 0 & 0 \\ 10.3 & 10.3 & 0 & 0 \\ 11.7 & 0 & 0 & 11.7 \\ 3.1 & 11.4 & 8.3 & 0 \\ 0 & 4.1 & 10.3 & 6.2 \\ 0 & 0 & 10.0 & 10.0 \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \end{bmatrix}$$
Figure 2.10. Inverse Tomography Schematic.
The matrix D cannot be inverted for this trivial problem because it is not square; the system is overdetermined, with six rays and only four pixels. In order to manage the dilemma of inverting a singular matrix, other
methods have been developed to solve the inverse travel time problem displayed in
Equation 2.15.
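For illustration, the trivial system above can still be solved in a least-squares sense with standard numerical tools even though D has no ordinary inverse; this sketch uses numpy's lstsq, which handles non-square and rank-deficient matrices (units follow Table 2.1):

```python
import numpy as np

# Distance matrix D (ft) and travel times T (ms) from Table 2.1 / Figure 2.10.
D = np.array([[10.0, 10.0,  0.0,  0.0],
              [10.3, 10.3,  0.0,  0.0],
              [11.7,  0.0,  0.0, 11.7],
              [ 3.1, 11.4,  8.3,  0.0],
              [ 0.0,  4.1, 10.3,  6.2],
              [ 0.0,  0.0, 10.0, 10.0]])
T = np.array([0.073, 0.076, 0.089, 0.084, 0.075, 0.074])

# D is 6 x 4 and cannot be inverted, so minimize ||D p - T|| instead.
p, residuals, rank, sv = np.linalg.lstsq(D, T, rcond=None)
print("slowness per pixel:", p)
print("velocity per pixel:", 1.0 / p)  # units follow those of Table 2.1
```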
The algorithms developed to solve the inverse equation include Least Squares,
Damped Least Squares, Singular Value Decomposition (SVD), Algebraic
Reconstruction Technique (ART), Simultaneous Iterative Reconstruction Technique
(SIRT), and Multiplicative Algebraic Reconstruction Technique (MART).
The Least Squares method requires solving the following equation:
$$P = (D^T D)^{-1} D^T T \qquad [2.16]$$
(Jackson, M. J. and D. R. Tweeton 1994)
When the matrix D^T D is singular, another method of least squares, damped least squares, is employed, which takes the form:

$$P = (D^T D + \lambda I)^{-1} D^T T \qquad [2.17]$$

where λ is a tradeoff parameter that controls the minimization of the data misfit and the model norm (Aki, K. and W. H. K. Lee 1976, Spakman, W. 1993, Hole, J. 2005) and I is the identity matrix. The data misfit is the difference between the measured and predicted data, while the norm is a way of sizing and ranking data.
One of the more common norms is the L_2 norm, which is given by:

$$L_2 = \|e\|_2 = \left[\sum_i e_i^2\right]^{1/2} \qquad [2.18]$$

where e is a vector (travel time, in the case of velocity tomography) (Menke, W. 1989). However, norms can be calculated for L_1 to L_∞. Norms allow for data-weighting so that a better model fit may be obtained.
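A minimal sketch of the damped least-squares solution of Equation 2.17, reusing the trivial system from Table 2.1; the λ values below are arbitrary and chosen only to show the tradeoff between data misfit and model norm:

```python
import numpy as np

D = np.array([[10.0, 10.0,  0.0,  0.0],
              [10.3, 10.3,  0.0,  0.0],
              [11.7,  0.0,  0.0, 11.7],
              [ 3.1, 11.4,  8.3,  0.0],
              [ 0.0,  4.1, 10.3,  6.2],
              [ 0.0,  0.0, 10.0, 10.0]])
T = np.array([0.073, 0.076, 0.089, 0.084, 0.075, 0.074])

def damped_least_squares(D, T, lam):
    """Solve P = (D^T D + lam*I)^(-1) D^T T, i.e., Equation 2.17."""
    return np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ T)

# Larger lam shrinks the model norm at the cost of a larger data misfit.
for lam in (0.01, 0.1, 1.0):
    p = damped_least_squares(D, T, lam)
    misfit = np.linalg.norm(D @ p - T)   # L2 norm of the data misfit
    print(f"lam={lam}: misfit={misfit:.5f}, model norm={np.linalg.norm(p):.5f}")
```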
Singular value decomposition (Golub, G. H. and C. Reinsch 1971) is an appropriate
algorithm for small problems that requires decomposing the data into eigenvectors,
but for large problems SVD produces large dense matrices which can become
cumbersome (Bording, R. P., et al. 1987).
The iterative techniques, including ART, SIRT, and MART, are useful for nonlinear problems (Nowack, R. L. and L. W. Braile 1993). These techniques repeatedly perturb an initial model, reducing the travel time residuals with each pass until the predicted data adequately fit the measurements.
MART is similar to ART, except that instead of perturbing the model by adding or
subtracting from it, a multiplicative correction is made (Stewart, R. R. 1991).
2.4.5 SOURCES OF ERROR
Sources of error in tomographic imaging include measurement error in the
equipment used to collect the data, geometry of the experiment, the inherent geometry
of the velocity contrasts, inaccurate data analysis, and errors in the inversion process.
Experiment geometry plays an important role in constructing an accurate
tomographic image. A well-planned geometry allows for each pixel or voxel to be
well-constrained. Hobro and others suggest generating synthetic tomograms to
analyze proposed geometry prior to experiments (Hobro, J.W.D., et al. 2003). In
reality, it is difficult to achieve optimum geometry in the geophysical context where
very large areas are being measured, especially when passive sources are being used
(Dyer, B. and M. H. Worthington 1988, Meglis, I. L., et al. 2004). A passive source
is a source that is not directly controlled by the researcher. Passive sources may be
microseismic events that may be clustered due to geologic structure, or as a function
of mining geometry. Passive sources do not allow for equal source-receiver spacing.
Scott and others indicated that including the maximum number of intersecting ray
paths at different angles through the body being imaged is of paramount importance
(Scott, D. F., et al. 1997). Achieving this maximum number of intersecting raypaths
and angles can be difficult with passive sources, because they may result in more
biased ray geometry. Watanabe and Sassa have attributed some inconsistencies in
transmission travel time tomography to low ray density and insufficient angle variation
(1996). Additionally, if the geometry of the velocity anomaly is complex, it is more
difficult to image. Examples of passive source geometry and active source geometry
for a longwall mining section are displayed in Figure 2.12.
For M pixels, the resolution matrix will be M x M. Each diagonal term of the matrix represents a pixel. The off-diagonal terms represent the relationship between the pixel under study and the other
pixels (Tarantola, A. 1987). Tarantola gives a detailed account of calculation and
analysis of the resolution matrix.
When determining pixel size the goal is to optimize resolution and variance.
Variance is a measure of uncertainty in the pixel, while resolution refers to image
“sharpness.” The schematic in Figure 2.13 represents the tradeoff between resolution
and variance:
Figure 2.13. Resolution and Variance of a Tomogram (Menke, W. 1989) (large voxels sampled by many rays give good variance but poor resolution; small voxels sampled by few rays give good resolution but poor variance).
Poor data can result in artifacts in the tomogram. An inaccurate travel time
measurement that is an outlier in the data set can result in unusually large or small
velocity contrasts in the tomogram that are not representative of the true velocity
profile of the solid (Martínez, J. L. F., et al. 2003). Smoothing is one method of
minimizing an artifact. When smoothing a tomogram, a smoothing constraint is
applied to each node in the tomogram. Each node is then weighted according to the
surrounding nodes (Tweeton, D. 2001). The drawback is that a tomogram can be
oversmoothed and legitimate anomalies can be smoothed out of the image.
2.5 Previous Tomography Studies
2.5.1 THE VELOCITY-STRESS RELATIONSHIP
Early research involved a series of laboratory tests in which the compressibility of a
number of rock samples under increasing load were examined, and it was determined
that at low pressures the compressibility fell rapidly and then leveled out (Adams, L.
H. and E. D. Williamson 1923).
Nur and Simmons subjected cylindrical samples of Barre granite to uniaxial
loading, and varied the angle of the load. They then measured p- and s- wave
velocity in the sample, and found a clear velocity increase with increased stress.
They also found that the magnitude of the velocity increase was dependent on the
stress direction and direction of compressional wave propagation. The most profound
velocity change occurred when the wave was propagated perpendicular to the load
(Nur, A. and G. Simmons 1969).
Toksöz and others used observed laboratory data, much of it from Nur and
Simmons' 1969 study, to model velocity for in situ rock given the parameters of
porosity, saturation, overburden, and pore fluid pressure (Toksöz, M. N., et al. 1976).
Eberhart-Phillips and others measured the effects of pressure, clay content, and
porosity on velocity of 64 sandstone samples, and they also found an exponential
increase in velocity at low pressures that tapered off to a linear increase for higher
pressures (Eberhart-Phillips, D., et al. 1989).
2.5.2 LABORATORY EXPERIMENTS
Scott and others generated tomograms of dry Berea sandstone cores using
ultrasonic waves as the cores underwent indentation testing. They also generated
numerical models of the stress in the cores as they were loaded and found favorable
correlation between the two techniques (1994). Chow and others generated
tomograms with cores of Lac du Bonnet grey granite under uniaxial cyclic loading,
and found that as damage occurred in the sample, low velocity regions corresponded
with the damaged zone (1995). Jansen and others imaged thermal stress induced
cracking in a cubic sample of granite (1991).
2.5.3 FIELD EXPERIMENTS
Tomography has been implemented to determine stress in underground mines with
varying degrees of success. Stress distribution in numerous underground structures has been imaged, including pillars, tunnels, and longwall panels, and minewide tomography has also been conducted.
A sill pillar has been imaged with active sources, and it was concluded that low
velocity areas corresponded with locations of previous rockbursts (Maxwell, S. C.
and R. P. Young 1993). Maxwell and Young also conducted tomographic imaging of
another mine pillar using active source geometry (1996), while Friedel and others
conducted active source imaging of the footprint left by two pillars on the mine floor
(1996). Active source imaging has been implemented for pillar tomography at
Homestake Mine (Scott, D. F., et al. 1999, Scott, D. F., et al. 2004), and Watanabe
and Sassa imaged both a pillar and a triangular area between two drifts (1996).
Manthei used active source geometry to image pillars in a potash mine (1997).
Tunnels have also been studied extensively to determine stress redistribution
around openings. Many of these studies have been conducted at the Underground
Research Lab (URL) in Canada where experiments can be well controlled. Passive
source (Maxwell, S. C. and R. P. Young 1995, Maxwell, S. C. and R. P. Young 1996)
and active source studies (Meglis, I. L., et al. 2004) of tunnels at the URL can be
found in the literature.
The advantage of tunnel and pillar studies is a relatively simple and small scale
geometry, which allows for optimum source and receiver placement. Larger scale
studies are more difficult to design, but have been conducted successfully. Kormendi
and others implemented in seam receivers with active source geometry for a longwall
panel in an underground coal mine. They found that high velocity areas advanced
with the face and were typical of stress redistribution encountered on a longwall
(Kormendi, A., et al. 1986). In 1993, Maxwell and Young used active source
3.1.2 LONGWALL PANEL GEOMETRY
The mine operates longwall panels that are approximately 18,000 feet long and 815
feet wide. Figure 3.3 shows pillar geometry for the panel of interest. It is interesting
to note that the adjacent panels to both sides of the active panel are unmined.
Typically, the panel on the tailgate side would have been previously mined. All
crosscuts and entries are 20 feet wide. On the tailgate side large pillars are positioned
against the coal block, and these pillars are 200 feet by 95 feet, on centers. Yield
pillars on the tailgate side are located against the adjacent panel, and they are 105 feet
by 55 feet, on centers. On the headgate side the yield pillars are located against the
active panel, and they are 95 feet by 55 feet, on centers. The large pillars on the
headgate side, located against the adjacent panel, are 95 feet by 190 feet, on centers.
Mining is advancing in the southwest direction, and face locations are shown for each
day studied. Over the course of the study the face advanced 1,415 feet, averaging
about 79 feet per day. Tomograms were not generated for July 29th, as data was not
supplied for that day.
Figure 3.2. Seam Profile of Longwall Panel (elevation in feet versus distance along the panel in feet; face positions shown for 07/20/97 and 08/07/97).
3.1.3 SOURCE AND RECEIVER GEOMETRY
Sixteen geophones were assembled on the surface to monitor and locate
microseismic events. Figure 3.4 displays a plan view of the geophone locations and
the area of the longwall panel that is of interest.
Figure 3.4. Geophone Locations.
The geophones are referred to as receivers while the microseismic events are
referred to as sources, denoting the source of the seismic waves used to explore the
rock mass.
The utilization of microseismic events as sources is an example of passive source
geometry. The advantage of passive source geometry is that a large number of
measurements can be collected at once, and they can be monitored remotely.
However, the drawback is that the experimenter has less control over raypath
geometry. When active source geometry is utilized, the experimenter can position the
sources so that the optimum number of raypaths traverse the area of interest.
3.2 Data Analysis
The tomograms presented in this research are velocity tomograms generated from
travel time and distance data. The arrival times of the p-waves generated by
microseismic events are measured at the geophones located on the surface. Event
locations were previously determined (Swanson, P. 2005), so distances between
sources and receivers are known. The data utilized in this research were collected by
NIOSH in 1997.
3.2.1 DATA DESCRIPTION
The raw data received from NIOSH includes 172,632 p-wave arrival times, and
11,696 microseismic events over 18 days, from July 20th, 1997 to August 7th, 1997.
Data were not provided for July 29th, 1997. The data files give the source
coordinates, the microseismic event coordinates, relative magnitude of the events,
traveltime residuals for event location, and the number of stations used to locate the
event. Events that were located by fewer than 10 stations were not included in the data
file.
3.2.2 DATA RECONCILIATION
First, the data were organized by day. Next, p-wave arrival times were plotted
against raypath distance, excluding arrival times of zero, as shown in Figure 3.5:
Figure 3.5. Travel Time vs. Distance Plot for 07-25-97 (travel time in seconds versus raypath distance in feet; all events shown, with events 70725106 and 70725115 highlighted).
Figure 3.5 displays all raypaths plotted in black, and two microseismic events
plotted in blue and red. In examining the two events, it is obvious that there is a
linear correlation between the raypaths for individual events. Most events displayed a
similar relationship. It was determined that an arrival time error was introduced into
either the measurement or the event location that must be corrected for by
normalizing the events. The equation of the line for the set of raypaths that comprise
each event was determined, and the points were then corrected so that each intercept
was equal to zero, assuming that at a distance of zero feet the travel time must equal zero.
Additionally, any velocities higher than 30,000 ft/s were removed. A maximum of
30,000 ft/s was determined from published research, both field and laboratory.
Studies in underground mines and on laboratory specimens, including underground coal mines and sandstone samples, have published maximum p-wave velocity values ranging from about 7,381 ft/s to 24,934 ft/s (Tosaya, C. and A. Nur 1982,
Kormendi, A., et al. 1986, Maxwell, S. C. and R. P. Young 1993, Jones, S. M. 1995,
Ma, Q., et al. 1995, Maxwell, S. C. and R. P. Young 1996, Manthei, G. 1997, Scott,
D. F., et al. 1997). From this data range, 30,000 ft/s was determined to be an
appropriate maximum velocity. The resulting points are displayed in Figure 3.6.
Figure 3.6. Adjusted Travel Time vs. Distance Data for 07-25-97 (trendline y = 0.000078x − 0.005775, R² = 0.5387).
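A sketch of the reconciliation procedure described above, written for a single event; the function name and the example event are hypothetical, but the steps (fit the event's trendline, shift its intercept to zero, and discard raypaths implying velocities above 30,000 ft/s) follow the text:

```python
import numpy as np

V_MAX = 30000.0  # ft/s, maximum plausible p-wave velocity from the literature

def reconcile_event(distances, times):
    """Shift one event's arrivals so its trendline passes through the origin
    (zero travel time at zero distance), then drop raypaths implying
    velocities above V_MAX."""
    slope, intercept = np.polyfit(distances, times, 1)
    adjusted = times - intercept                 # force a zero intercept
    keep = (adjusted > 0) & (distances <= V_MAX * adjusted)
    return distances[keep], adjusted[keep]

# Hypothetical event with a constant 0.01 s timing error on every arrival.
d = np.array([1000.0, 2000.0, 3000.0, 4000.0])  # ft
t = d / 12000.0 + 0.01                          # true velocity 12,000 ft/s
d_adj, t_adj = reconcile_event(d, t)
print(d_adj / t_adj)                            # recovered velocities, ft/s
```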
3.3 Inversion
The data were given as a set of travel times and ray distances. The objective is to
discretize the rock mass surrounding the longwall into voxels, and determine the
velocity in each voxel. From this velocity determination relative stress can be inferred. Because there are more voxels than measured rays, the solution to the problem is not unique, so an iterative inversion technique was employed.
3.3.1 INVERSION TECHNIQUE
After adjusting the arrival times, a text file, including distance and travel time for
each ray, is created for input into GeoTOM. GeoTOM is a commercial package that
inverts for slowness, and then plots velocity in the form of tomograms. GeoTOM
utilizes an iterative technique called SIRT, simultaneous iterative reconstruction
technique, in the inversion process and relies on the following relationship to perform
the inversion:
$$t = \int_R^S \frac{1}{v}\,dl = \int_R^S p\,dl$$

$$t_i = \sum_{j=1}^{M} p_j d_{ij}$$

$$T = DP$$
$$P = D^{-1}T$$
$$T' = DP'$$
$$dT = T - T'$$
$$dP' = D^T\,dT$$
$$P'' = P' + dP'$$

where,
t = travel time (sec)
T = travel time matrix, N × 1, where N is the number of rays measured
v = velocity (ft/s)
p = slowness, inverse velocity (s/ft)
P = slowness matrix, M × 1, where M is the number of voxels in the tomogram
d = raypath distance (ft)
D = distance matrix, N × M, the distance of the ith ray in the jth voxel [3.1]
Prime notation refers to the initial model.
As illustrated in Equation 3.1, the process is based on the assumption that the line
integral from the source to the receiver of the slowness is equal to the traveltime. In
applying this relationship to each voxel the matrix relationships are formed.
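The sketch below shows one simple SIRT-style iteration loop in the spirit of Equation 3.1; it is illustrative only and is not GeoTOM's implementation. This variant normalizes each ray's residual by the squared ray length (a Cimmino-type update), which keeps the iteration stable; the synthetic geometry is random and hypothetical.

```python
import numpy as np

def sirt(D, T, n_iter=1000, relax=1.0):
    """Iteratively refine the slowness model P so that D @ P approaches T
    (T' = D P', dT = T - T', back-project dT, perturb P)."""
    N, M = D.shape
    P = np.zeros(M)                    # starting slowness model
    row_nsq = np.sum(D ** 2, axis=1)   # squared length of each ray row
    for _ in range(n_iter):
        dT = T - D @ P                 # travel-time residual per ray
        P += (relax / N) * (D.T @ (dT / row_nsq))  # distributed correction
    return P

# Synthetic check: travel times computed from a known slowness field.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 50.0, size=(40, 9))      # ray length per voxel (ft)
p_true = rng.uniform(1 / 15000, 1 / 9000, 9)  # slowness (s/ft)
T = D @ p_true
p_est = sirt(D, T)
print("RMS residual:", np.sqrt(np.mean((T - D @ p_est) ** 2)))  # shrinks with n_iter
```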
3.3.2 INPUT PARAMETERS
A voxel size of 50 feet by 50 feet by 50 feet was input into GeoTOM. This size
was determined to be sufficiently small to ascertain the general stress trend, but
sufficiently large that low and high velocity artifacts would not disrupt interpretation
of the tomogram.
GeoTOM allows a number of other input parameters including an initial velocity
model, anisotropy, smoothing, and the number of curved and straight ray iterations to
perform.
The initial velocity model allows for GeoTOM to perform the inversion more
efficiently and accurately. SIRT is an iterative technique, so the algorithm must have
an initial velocity value to perturb for the first iteration. The initial velocity model
was provided with the raw data from NIOSH, and is displayed in Figure 3.7. The
approximate location of the Wadge coal seam is displayed in black. The velocity
layers are also tabulated in Table 3.1.
(1999). The magnitude of anisotropy was determined experimentally. The data from
August 6th, 1997 were inverted five times, at 30 iterations each, with anisotropy
magnitudes of 0.8, 0.9, 1.0, 1.1, and 1.2, and with all other parameters being held
constant. GeoTOM outputs a file of travel time residuals when an inversion is
performed. These residuals were examined to determine the optimum anisotropy.
The graph in Figure 3.8 summarizes the anisotropy test.
Figure 3.8. Experimental Determination of Anisotropy Magnitude (RMS travel time residual in seconds versus anisotropy magnitude, 0.8 to 1.2).
As evidenced in Figure 3.8, the anisotropy magnitude of 1.1 produced minimum root-mean-square residuals. Root-mean-square residuals are calculated as follows:

$$\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2} \qquad [3.2]$$

where x_i is the residual for the ith ray in seconds.
Next, the appropriate ray assumption must be determined. GeoTOM will calculate
raypaths based on a straight ray assumption or a curved ray assumption. The straight
ray calculation is simply the straight line distance between the source and the
receiver, while the curved ray calculation allows for ray bending according to Snell’s
Law. Figure 3.9 displays RMS residuals for the straight ray assumption and curved
ray assumption, illustrating that the residuals are smaller for the straight ray
assumption.
Figure 3.9. RMS Residuals for Straight and Curved Rays for August 6th, 1997 (RMS travel time residual in seconds versus iteration, for the curved and straight ray assumptions).
However, Snell’s Law implies that for the layered initial velocity model the straight
ray assumption is not valid. Additionally, sum residuals are significantly smaller for
the curved ray assumption, as illustrated in Figure 3.10. Sum residuals are simply the
sum of the travel time residuals for each ray in the iteration. Sum residuals are not a
measure of the magnitude of the residuals, but rather of their distribution about zero.
The higher sum residuals for the straight ray assumption indicate that the straight ray
algorithm consistently underestimates the raypath length.
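Both diagnostics are simple to compute from a vector of per-ray residuals; the sketch below (hypothetical residuals) shows how the RMS captures the magnitude of the misfit while the sum captures its distribution about zero:

```python
import numpy as np

def residual_stats(x):
    """Return (RMS, sum) of the travel-time residuals x, in seconds."""
    return np.sqrt(np.mean(np.square(x))), np.sum(x)

# Two hypothetical residual sets with similar magnitudes:
centered = np.array([0.010, -0.012, 0.009, -0.008])  # scattered about zero
biased = np.array([0.010, 0.012, 0.009, 0.008])      # systematically positive
print(residual_stats(centered))  # small sum: no systematic error
print(residual_stats(biased))    # large sum: consistent under/overestimation
```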
Figure 3.10. Sum Residuals for Straight and Curved Rays for August 6th, 1997 (sum of travel time residuals in seconds versus iteration, for the curved and straight ray assumptions).
Clement and Knoll ran synthetic tomograms for cross borehole data with straight
and curved ray algorithms and found similar results in their tests; the RMS error was
smaller for the straight ray algorithm than for the curved ray algorithm. They still
3.4 Three-Dimensional Modeling
GeoTOM creates three-dimensional models, but only allows for one slice to be
viewed at a time. RockWorks, a commercial geotechnical package, allows for the
GeoTOM tomograms to be viewed as a solid model. The model can be sliced,
rotated, and filtered. GeoTOM outputs a .dat file, which includes a node location, the
number of rays passing through the node, and the velocity at the node. This file is
imported into RockWorks. The file is filtered so that only nodes with at least 5 rays
are included in the model. Filtering out nodes with fewer than 5 rays helps avoid artifacts
in the model. For example, if a node has only one ray passing through it and that ray
velocity is unreasonable, then the node will show an unusually large or small value,
which appears as an artifact on the tomogram. Nodes with at least 5 rays passing
through them are well-constrained and less likely to produce artifacts.
RockWorks then creates a solid model using user specified geometry. Geometry
and voxel dimensions are the same as specified above in GeoTOM input parameters.
An isotropic inverse distance algorithm is used to extrapolate between the nodes and
generate a solid model. The isotropic inverse distance algorithm assigns node values
based on the distance a node is from known node values (Rockworks Manual 2002).
Known node values refer to the nodes that were specified in the GeoTOM file.
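A minimal isotropic inverse-distance interpolation sketch; this is illustrative only (RockWorks' algorithm includes additional options), and the node locations and velocities below are hypothetical:

```python
import numpy as np

def idw(known_xyz, known_vals, query_xyz, power=2.0):
    """Estimate a value at query_xyz from known nodes, weighting each
    node by the inverse of its distance raised to `power`."""
    d = np.linalg.norm(known_xyz - query_xyz, axis=1)
    if np.any(d == 0):                  # query coincides with a known node
        return known_vals[np.argmin(d)]
    w = d ** -power                     # nearer nodes receive larger weights
    return np.sum(w * known_vals) / np.sum(w)

# Hypothetical nodes on a 50-ft grid, velocities in ft/s.
nodes = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [0.0, 50.0, 0.0]])
vels = np.array([11000.0, 12000.0, 12500.0])
print(idw(nodes, vels, np.array([25.0, 25.0, 0.0])))
```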
Displayed in Figure 3.14 are a sliced solid model, a solid model, and a filtered solid
model generated by RockWorks.
LAMODEL was written specifically for tabular deposits, and treats the rock mass as a
series of frictionless plates.
LAMODEL requires a number of input parameters in order to model the behavior of
the coal seam and the overburden material. Since no testing of material was
incorporated into this research these parameters were determined using published
values of similar material.
First, the overburden parameters including Poisson’s ratio, elastic modulus,
lamination layer thickness, and vertical stress gradient are input. Since the coal seam
is overlain and underlain by massive competent sandstone, sandstone is used as the
overburden material. Approximately 15 feet above the seam is a 25 foot thick
sandstone formation, and the seam is underlain by the Troutcreek Sandstone. Also,
approximately 700 to 750 feet above the seam is the 200 foot thick Twentymile
sandstone unit. Previous research in the mine gives Poisson’s ratio and the elastic
modulus for the sandstone. A lamination layer thickness of 25 ft was determined to
represent the sandstone immediately over the seam. The vertical stress gradient was
also taken from previously published research. The parameters for the overburden
are tabulated below:
Table 3.3. Overburden Input Parameters.
Parameter Units Value
Poisson's Ratio - 0.31
Elastic Modulus (E) psi 3210000
Lamination Layer ft 25
Vertical Stress Gradient psi/ft 0.95376
Next, LAMODEL prompts for coal properties, including the coal modulus, plastic
modulus, coal strength, and Poisson’s ratio. The default values listed in Table 3.4
were accepted. Coal properties can be difficult to determine and may vary
substantially among samples.
Table 3.4. Coal Input Parameters.
Parameter Units Value
Coal Modulus psi 3000000
Plastic Modulus psi 0
Coal Strength psi 900
Poisson's Ratio - 0.33
Finally, gob properties are established, including an initial gob modulus, upper
limit stress, gob height factor, gob load ratio, and a final modulus, calculated by the
program. The upper limit stress is recommended to be 2 to 4 times the virgin stress to
keep the model stable, and 4,000 psi is consistent with experimental data for gob
consisting of strong sandstone. The program recommends a gob height factor of one
to six. The gob load ratio is the average gob load over the maximum gob load –
values of 0.5 to 0.9 are recommended. Gob parameters are tabulated in Table 3.5:
Table 3.5. Gob Input Parameters.
Parameter Units Value
Initial Gob Modulus psi 500
Upper Limit Stress psi 4000
Gob Height Factor - 2
Gob Load Ratio - 0.7
Final Modulus - 13005.3
LAMODEL allows for the geometry of the longwall panel, as shown in Figure 3.3,
to be input into the program. Additionally, LAMODEL will run the routine in steps.
The steps allow for each of the 18 face locations to be read into the program. Each
step is one of the 18 days, and LAMODEL takes into account the material removed
when calculating stress redistribution.
In comparing the two plots it is obvious that 07-22-97 exhibits more scatter than
08-01-97, with R² values of 0.3281 and 0.5847, respectively. Next, in adjusting the
events 70.70% of the points recorded on 07-22-97 were removed while only 3.09% of
the points recorded on 08-01-97 were removed. This would indicate that each
individual event on 07-22-97 showed an unusual amount of scatter. From examining
the scatter plots, the data for 08-01-97 will produce a more accurate tomogram than
the data for 07-22-97.
The next parameter to examine is the RMS residual. The RMS residual gives an
idea of how well the model, the tomogram, fits the data, the adjusted distance and
travel time points. The RMS residuals for the tenth iteration for 07-22-97 and 08-01-
97 are 0.1459 and 0.03818, respectively. This would indicate that the model for 08-
01-97 better fits the data.
The third, and most important parameter, is the ray density. The more rays that
traverse an area, the better constrained the area. The best way to image ray density is
to plot ray density on the same scale as the tomogram under examination. Ray
density plots for 07-22-97 and 08-01-97 at seam level, Z = 5,500 feet, are displayed
below in Figure 4.2.
Figure 4.2. Ray Density Plots (rays per node) for 07-22-97 and 08-01-97.
In examining the ray density plots it is evident that there is more coverage on 08-
01-97, as compared to 07-22-97.
4.2 Microseismic Event Correlation
4.2.1 MICROSEISMIC EVENT LOCATIONS AND FREQUENCY
Microseismic events are often a function of rate of advance, although other factors
also influence frequency. Figure 4.8 displays a graph of face advance and
microseismic activity. Although the relationship is not directly proportional, there is
a general increase in microseismic activity with increased production.
Figure 4.8. Microseismic Event Frequency and Face Advance (daily face advance in feet and number of events versus date, 7/20/1997 through 8/7/1997).
Heasley found, when studying seismicity around longwall panels, that most events
occurred immediately in front of the face, clustered near the headgate (2001).
Similarly, most events were clustered immediately in front of the longwall face for
this dataset, but events were also dispersed along the tailgate side behind the face.
Figure 4.9 displays seismicity in relation to the longwall panel for 07-26-97, 07-27-
97, and 07-28-97, days that are generally representative of the dataset. The events are
plotted as spheres, and they are sized according to relative magnitude.
horizontal stress at the site is not taken into account. Also, LAMODEL does not
model any geological anomalies such as faulting. Horizontal stress and geological
anomalies contribute to the image generated in velocity tomograms.
Next, pixel size plays an important role. The velocity tomograms have a voxel size
of 50 feet by 50 feet by 50 feet. Entry width on the panel is 20 feet, so on the
tomograms the highly stressed pillars along the tailgate should appear to be smeared
on the velocity tomogram. A high stress region exists immediately in front of the
face in the LAMODEL plots that is not obvious on the velocity tomograms. This
region is relatively narrow and may not be not be obvious because the length of the
seismic waves was too long. Also, the nature of the velocity-stress relationship
indicates that a velocity tomogram will not produce the same image as a stress model.
Figure 4.12 displays a velocity-pressure curve determined in the laboratory for
sandstone:
Figure 4.12. P-wave Velocity vs. Pressure for Berea Sandstone (King, M. S. 1966) (p-wave velocity from roughly 10,000 to 13,500 ft/s over hydrostatic pressures of 0 to 12,000 psi).
The velocity stress relationship is almost linear at low pressures, leveling out at
higher pressures. This relationship explains the relatively larger high velocity area
seen in the velocity tomograms, as compared to the stress plot. Above a certain stress
level, velocity will change very little, and a velocity tomogram will not be able to
differentiate stress changes above this level.
CHAPTER 5: CONCLUSIONS
Velocity tomograms of an underground coal mine implementing longwall mining
produced reasonable images of velocity distribution. Inferred stress distribution
correlates well with numerical modeling of the longwall panel.
The mine under study has reported three rockbursts, referred to as bounces at this
mine, to MSHA in the past three years. None of the bounces caused injury, but all
three occurred on the longwall tailgate and resulted in tailgate blockage, impeding
travel. The velocity tomograms generated in this research consistently indicate a high
stress area along the tailgate, advancing with the face. This high stress area is
confirmed by the bounces reported by the mine, by numerical modeling through
LAMODEL, and by microseismic activity in the area.
With the exception of tomograms produced for 07-22-97 and 07-30-97, velocity
tomography produced consistent images of the floor, seam, and roof of the longwall
panel. The anomalous tomograms appear to be due to errors in data measurement and
filtering.
The consistency of the tomograms, the high velocity area along the tailgate, which has historically stored excessive strain energy, and the correlation found with LAMODEL all indicate that velocity tomography is an excellent technology for studying rockbursts.
The passive source geometry implemented in this research is not ideal for
producing tomograms, however. Implementation of an active source with receivers
closer to the seam level would improve the tomograms drastically, and allow for more
detailed study near the panel. Use of the active source geometry in tandem with
passive source geometry would be especially useful, as the passive source geometry
provided important microseismic information. For example, the consistent
microseismic events in the floor strata inby the longwall face confirm that strain
energy is being stored in the floor strata on the tailgate side.
Additionally, in situ stress measurements of the panel and laboratory testing of the
roof and floor strata to determine p-wave velocity under pressure would allow for the
velocity-stress relationship to be further explored. If the velocity-stress relationship
for a mine is well defined, more information about the stress state can be inferred
from velocity tomography.
In addition to the study of the strata, personal observation could provide important
information about rockbursts and respective change in velocity tomography.
Rockbursts are only reported to MSHA if they cause harm to persons, impede
ventilation, or impede travel. Many small bumps, although unreported, are noticed by
people working underground, and recording their time and location would allow for
changes in velocity tomograms to be explored.
Velocity tomography proves to be a useful tool for examining stress redistribution
in an underground longwall mine in response to coal removal. This technology
provided consistent images that correlate well with numerical modeling, microseismic
events, and mine experience, which indicate that the tailgate is a high stress zone,
prone to rockbursts. Velocity tomography imaged high velocities in rockburst prone
areas, and can be used to further study rockburst phenomena.
Improvement of Ground-Fault Relaying
Selectivity through the Application of
Directional Relays to High-Voltage
Longwall Mining Systems
Joseph J. Basar
ABSTRACT
The continuing trend toward larger longwall mining systems has resulted in the
utilization of higher system voltages. The increase in system voltage levels has caused
the industry to face complexities not experienced with the lower-voltage systems. One
such complexity arises from the larger system capacitance that results from the outby
configuration commonly used on 4,160-V longwall power systems. Simulations show
that during a line-to-ground fault, the larger system capacitance can cause a situation
where the ground current sensed by the ground-fault relays in unfaulted circuits is greater
than the mandated ground-fault relay pick-up setting. Simulations show that ground-fault
relaying selectivity is potentially lost as a result of this situation. Two alternatives were
identified which could improve ground-fault relaying selectivity. They are: the
application of a directional relaying scheme and increasing the ground-fault relay pick-up
setting. It was determined that directional relays have an application to high-voltage
longwall power systems as the ground current sensed by the relay in the unfaulted circuits
is out of phase with the ground-fault current sensed by the relay in the faulted circuit.
Furthermore, it was determined that raising the ground-fault relay pick-up setting by a
factor of eight would also improve ground-fault relaying selectivity. A safety analysis
considering the potential for electrocution and the power dissipated by the maximum
fault resistance showed that increasing the pick-up setting by a factor of eight would have
no detriment to safety. Therefore, either method would improve ground-fault relaying
selectivity on high-voltage longwall mining systems, yet because of the escalating size of
longwall systems, a directional relaying scheme is a longer term solution.
ACKNOWLEDGEMENTS
I am very grateful to everyone who provided me support in accomplishing this M.S.
thesis. Most notably, I would like to thank both my family in New York and my
extended family in Richmond.
My profound gratitude goes to my advisor Dr. Thomas Novak for sharing with me his
vast knowledge of mine power systems and for presenting me with the opportunity to
write this thesis.
I would like to thank my committee members: Dr. Claudio Faria, Dr. Jeffrey Kohler, Dr.
Antonio Nieto, Dr. Gerald Reid, and Dr. Joseph Sottile for kindly serving on my
committee.
A deserving mention goes to my mentors in industry who provide me support through
their informed advice. These include, but of course are not limited to: Dr. S.C.
Suboleski, Mr. P.S. Barbery, Mr. E.M. Massey, and Mr. R.C. Mullins.
I would also like to thank the Ladies of WAIMME for their continued support throughout
the years, particularly Mrs. S. Harwood, Mrs. L. Hull, Mrs. V. Karmis and Mrs. P.
McWhorter.
Finally, for service above and beyond the call of duty - I recognize Ms. G. Hambsch.
Chapter 1. Introduction
1.1 General
A notable increase in the voltage level supplied to longwall mining systems has occurred
over the past two decades. Longwalls utilizing 1,000 V or less have been phased out
during this period by systems utilizing 2,400 V or 4,160 V. Prior to 1986, the maximum
voltage used on longwall mining systems in the United States was 1,000 V. Figure 1
shows the combined trends of the utilization voltages from 1986 to 2003.
Fig. 1. Combined trends of utilization voltages (percent of operating longwalls at ≤ 1,000 V, 2,400 V, and 4,160 V, 1985 through 2003).
The transition from the use of low-voltage (≤ 660 V) and medium-voltage (661 V –
1,000 V) to high-voltage (≥ 1,000 V) on longwall face equipment has been driven by the
objective to achieve increased production levels from fewer operating units. To achieve
this increased level of production, longwall panel width has substantially increased over
the past two decades. The average longwall panel width has increased from 620 feet in
1986 to over 960 feet in 2004, which is equivalent to a 55% increase. The depth of the
cutting web on the shearer has also increased. The average cutting depth has increased
from 30 inches in 1986 to almost 38 inches in 2004, which is equivalent to a 26%
increase. These sizeable changes have resulted in an increased power requirement for the
face equipment. Low and medium-voltage became inadequate for powering the higher
capacity motors that were being demanded by industry. The trend toward larger and
more complex longwall systems has resulted in a corresponding increase in the size of
longwall components as well as the standardization to high-voltage utilization (Basar and
Novak, 2003).
The transition to the higher voltages followed a natural progression by taking incremental
steps from low and medium-voltage levels to 2,400 V and ultimately 4,160 V. Initially,
2,400-V was utilized as the next logical step above the medium-voltage level (Novak et.
al, 2003). The first experimental permit for purely high-voltage on-board switching was
granted for a 2,400-V longwall system in July of 1985 (Boring and Porter, 1988). As
2,400-V systems proved their reliability, 4,160-V systems gained popularity. There were
also a number of hybrid systems in operation (2,400 V or 4,160 V for the face conveyor
motors and 995 V for all other equipment) during the transitional period from medium to
high-voltage (Novak and Martin, 1996). An increasing trend of 4,160-V utilization
began in the early 1990’s and continues today. During the early part of this decade the
percentage of longwall units operating at 2,400 V began to steadily decline, again in
favor of the 4,160-V systems. In 2000, the 4,160-V system surpassed the 2,400-V system
in total number of operating units.
The increase in voltage level has caused the industry to face complexities not experienced
with the lower-voltage systems. In an effort to ensure safety, Federal Regulations have
more stringent requirements for high-voltage systems. High-voltage systems are
mandated to have lower neutral grounding resistor (NGR) current limits, lower ground-
fault relay pick-up settings, and are, like the medium-voltage systems, required to use
shielded cables (Electrical Protection, 30CFR§75.814). These requirements directly
affect how the system responds during ground-fault events. As will be shown, this is
especially the case with the outby1 topology of the 4,160-V system.
1.2 Statement of the Problem
Initial research showed that the increased capacitance from the longer cable runs that
result from the outby configuration commonly used on the 4,160-V longwall power
systems can create a situation where the capacitive charging current that returns through
the unfaulted circuits during ground-fault events is large enough to cause spurious
tripping (Novak 2001-a, 2001-b, Novak et. al 2003, Novak et. al, 2004). When spurious
tripping occurs, ground-fault relaying selectivity is lost. A loss of ground-fault relaying
selectivity on a 4,160-V system may adversely affect both employee safety and longwall
productivity.
The capacitance in the 4,160-V system results primarily from the shielded configuration
of the power cable (Novak et. al 2004). Figure 2 shows the cross section of a typical
shielded, SHD-GC, high-voltage mining cable. Figure 2 also shows the nature of the
capacitance resulting from the shielded configuration of the cable. The total capacitance
in the system varies linearly with cable length. The outby switching configuration that is
most commonly used on 4,160-V systems dramatically increases the total system
capacitance, as compared with the inby2 configuration used on 2,400-V systems, as the
total length of cable is increased over 100%. The outby configuration is preferred by
industry since the motor-starting switchgear is kept more than 150 feet outby the
1 The term outby is defined as away from the working face or toward the mine entrance.
2 The term inby is defined as toward the working face, or interior, of the mine.
longwall face and therefore does not have to be housed in an explosion proof enclosure
(Novak and Martin, 1996).
Fig. 2. Cross-section of an SHD-GC type cable (phase conductors and insulation, braided copper shields, grounding conductors, pilot conductor, filler material, and outer jacket; the line-to-ground capacitance forms across the shield insulation, which acts as the dielectric).
A low ground-fault relay pick-up setting increases the potential for the capacitive
charging current to cause spurious tripping of unfaulted circuits within the longwall
power system. The ideal ground-fault relay pick-up setting should be low enough to
protect against electrical hazards, mainly the risks associated with electrical shock, yet
should be set at the highest non-hazardous level to help avoid spurious tripping of
unfaulted circuits during ground-fault events. Therefore, the Federal Regulation that
mandates the relay pick-up setting also directly affects relaying selectivity within the
system. Subsequently, the mandated relay pick-up setting has been criticized as being
unnecessarily low (Novak, 2001-b). As a result, a question has arisen as to the actual
ramifications that the relay pick-up setting has on safety. Determining the effect that
raising the relay pick-up setting has on safety is important in determining potential
opportunities to improve relaying selectivity on high-voltage longwall power systems.
Problems with the ground-fault relay pick-up setting mandated by the Mine Safety and
Health Administration (MSHA) have been corroborated by a major longwall operator. In
conducting additional background research into relay pick-up settings, it was also found
that MSHA has recently written a citation to a longwall operator for violating the relay
pick-up setting. The company was cited for “…failing to set the circuit breakers to trip
at the required amperage” (MSHA vs. Loadstar Energy, 2003). The fact that the relays
were improperly set offers some evidence that problems do exist.
1.3 Scope of Research
The conducted research concentrated primarily on improving ground-fault relaying
selectivity on 4,160-V longwall mining systems. For a three phase system, various types
of faults are possible: a three-phase fault, phase-to-phase faults, phase-to-ground faults,
and double phase-to-ground faults. The importance of ground-fault protection cannot be
overemphasized as ground is involved in 75 - 85% of all fault events (Horowitz and
Phadke, 1995). To perform an analysis of ground-fault relaying selectivity only line-to-
ground faults need to be analyzed as a separate set of protective devices are employed to
protect against multi-phase faults.
The research was performed by using a computer based model of an average size 4,160-
V longwall power system to determine the system’s behavior during ground-fault events.
The original model was created from previous research performed on a similar topic
(Novak 2001-a, 2001-b). Improvements were made to the model that focused on
determining more accurate resistances for the ground conductors as well as a more
accurate representation of the topology of the longwall power system. The size of the
equipment components was determined from the annual Longwall Census published in
Coal Age magazine (Fiscor, 2004).
Included in the research was an investigation into the effect that the ground-fault relay
pick-up setting has on safety. The improved model was used to determine the touch
potential that exits over a range of pick-up settings and values of body resistance. The
results were then compared to the physiological response of humans to electrical shock to
determine the risk hazard.
1.4 Thesis Structure
This thesis provides commentary on electrical safety and includes an explanation of high-
resistance grounding and ground-fault protection schemes for 4,160-V longwall power
systems. A detailed description of the model that was developed to simulate ground-fault
scenarios on an outby 4,160-V system will be given, along with the methodology used to
determine the component values in the model. The results of the simulations performed
for the various ground-fault scenarios will then be presented.
Two potential methods to improve ground-fault relaying selectivity were identified and
evaluated for their effectiveness. The two methods evaluated were the use of directional
ground-fault relay protection, and raising the magnitude of the ground-fault relay pick-up
setting. Based on the evaluations, recommendations will be made on how ground-fault
relaying selectivity can be improved.
Chapter 2. Background and Literature Search
2.1 General
Safety is the primary concern when designing a power system. Unfortunately, the harsh
environment of underground coal mining adds many variables that can cause short circuit
conditions (Novak et. al, 2004). As a result, a power system’s protection scheme must be
robustly designed to prevent injury. It is equally important to ensure that the system’s
performance is not compromised. The remainder of this chapter is dedicated to providing
background information on subjects ranging from electrical safety to the power system
protection schemes currently being used on 4,160-V longwall mining systems.
2.2 Electrical Safety
Between 1990 and 1999, electrical accidents were the fourth leading cause of death in the
mining industry (Cawley, 2003). During this period, the data showed that fatalities were
ten times more likely to occur when the accident involved electricity. Electrical accidents
tend to occur less frequently than other types of accidents, yet when they do occur they
tend to be far more severe.
The most frequent type of electrical accident involves electrical shock. Injuries from
electrical shock result from current flowing through the human body. The severity of the
electric shock is dependent upon the exposure time as well as the magnitude and
frequency of the current (Novak et. al, 1988). The estimated effects of 60 Hz currents
which pass through the body are provided in Table 1.
Table 1. Physiological response to current.
Current Level Physiological Response
1.1 mA Barely perceptible
6.0 mA Maximum Let-go current
50.0 mA Ventricular Fibrillation
2.0 A Cardiac Standstill
The table reports that a current of 1.1 mA is barely perceptible to the touch, while a
current of 6.0 mA can cause involuntary contraction of flexor and extensor muscles in the
forearm resulting in the inability for a victim to let go of any objects being held (DHHS,
1998). A current of 50 mA to approximately 2.0 A may cause ventricular fibrillation.
Ventricular fibrillation can lead to a quick death from lack of oxygen to the brain.
Ventricular fibrillation poses the greatest risk of death from electrical shock. Extensive
research has been conducted to determine the time-current characteristic which can cause
ventricular fibrillation (Sottile and Novak, 2001). One method was developed by Daziel
(Daziel and Lagan, 1941, Daziel, 1954, Daziel and Lee, 1969). Daziel’s alternating
current (ac) fibrillation prediction can be written to determine the maximum non-
fibrillation current for the total circuit clearing time (Novak et. al, 1988). The equation is
shown as follows,
$$I = \frac{116}{\sqrt{t_1 + t_2}} \qquad (8.3\ \mathrm{ms} \le t \le 5.0\ \mathrm{s})$$

where, I is the body current (mA),
t_1 is the relay operating time (s), and
t_2 is the circuit interrupter operating time (s).
To determine the ac fibrillation prediction for a protection system, the total clearing time
must be estimated. A generally accepted standard for relay operating time is 1 - 3
electrical cycles (Horowitz and Phadke, 1995). For a 60 Hz system, one electrical cycle
is completed every 0.0167 s. A vacuum type interrupter, which is the standard type used
on high-voltage longwall circuits, requires 4.8 cycles to operate (Siemens). Assuming
the longest time of 3 cycles for the relay operation, the maximum tripping sequence is
estimated to be 7.8 cycles. By applying the clearing time of 7.8 cycles to Daziel’s ac
fibrillation prediction, the maximum current that will not result in fibrillation is
established to be 321 mA.
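This arithmetic is easy to verify directly; the short sketch below reproduces the 321 mA figure from the cycle counts stated above:

```python
import math

CYCLE_S = 0.0167   # one 60 Hz electrical cycle, in seconds, as stated above
t1 = 3 * CYCLE_S   # relay operating time (worst case, 3 cycles)
t2 = 4.8 * CYCLE_S # vacuum interrupter operating time (4.8 cycles)

t_clear = t1 + t2                   # 7.8 cycles, about 0.13 s
i_max = 116.0 / math.sqrt(t_clear)  # Daziel's maximum non-fibrillation current, mA
print(f"clearing time = {t_clear:.4f} s")
print(f"max non-fibrillation current = {i_max:.0f} mA")  # ~321 mA
```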
The level of current that will flow through the human body is directly related to the
voltage across the body as well as the body’s resistance. The presence of moisture from
standing water, wet clothing, or perspiration increases the possibility of electrocution
(DHHS, 1998, Sottile and Novak, 2001). All of these conditions are commonly found on
longwall faces. The level of current that will flow through the body can be calculated
using Ohm’s law, which states:
$$I = \frac{V}{R}$$
where, V is the voltage across the body, and
R is the body resistance.
Research indicates that body resistance can vary from 10 kΩ down to 1 kΩ, and may be
as low as 200 Ω when the skin is broken. A value of 500 Ω is commonly used for
performing safety analysis (Sottile and Novak, 2001).
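Combining Ohm's law with the fibrillation limit gives a quick screening calculation; the touch voltage below is a hypothetical value, paired with the 500 Ω body resistance noted above:

```python
touch_voltage = 100.0    # V, hypothetical voltage across the body
body_resistance = 500.0  # ohms, common value for safety analysis

body_current_ma = touch_voltage / body_resistance * 1000.0
print(f"body current = {body_current_ma:.0f} mA")
# 200 mA is below the 321 mA fibrillation limit computed earlier, but far
# above the 6 mA let-go threshold from Table 1.
```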
The most common shock hazard occurs when a person comes into direct contact with an
object that is at a significantly higher potential than earth (Sottile and Novak, 2001). If a
person contacts the frame of a faulted piece of equipment, the current that will flow
through the victim is dependent upon the fault current, the ground conductor impedance,
and the victim’s body resistance including contact resistance. Hazards that exist from the
elevation of frame potentials can be reduced by providing a low impedance ground path
and by controlling the maximum ground-fault current. This is accomplished by using a
neutral grounding resistor (NGR), which is discussed in the next section.
2.3 High Resistance Grounding
Grounding is attained by providing an intentional connection between a phase or neutral
conductor to earth. By providing a dedicated fault path for fault current to flow,
protective schemes can be developed to monitor for undesirable operating conditions.
These protective schemes can be designed to automatically respond and take corrective
action if undesirable operating conditions are sensed.
The explosive atmosphere in underground coal mining demands that the energy
dissipated by the fault resistance during ground-fault events be limited to reduce the
possibility of an explosion. This is accomplished by using resistance grounding, which
provides a practical method of controlling the amount of energy dissipated during a
ground-fault by limiting the magnitude of the fault current.
In resistance grounding, the system’s neutral is connected to ground through a resistor as
shown in Figure 3.
[Figure: wye-connected source with phases A, B, and C; the neutral point is connected to ground through the NGR.]
Fig. 3. Resistance grounding of wye system.
In underground coal mining, the system’s neutral is commonly obtained from the wye
connected secondary of the transformer. A wye system, as shown in Figure 3, is defined
as a system in which one end of each phase winding of a transformer or alternating
current generator is connected together with the others to form a neutral point, and the
other ends of the windings are connected to the phase conductors.
There are two categories of resistance grounding, each defined by the magnitude of the
current allowed to flow to ground. There is no defined standard for the level of ground-
fault current which defines these two categories, but it is generally accepted that the
ground-fault current level in high-resistance grounding is limited to a value less than 10
A while the ground-fault current level in low-resistance grounding is limited to at least
100 A (IEEE std. 142-1991).
2.4 Protective Relaying
Ground faults pose a potential safety risk to personnel. If undetected, ground-faults can
cause serious damage to equipment, and if they are not isolated they can develop into
more severe double line-to-ground faults (Wilks, 2003). Consequently, the function of a
ground-fault protection system is to detect and remove ground-faults from the power
system when they occur.
Ground-fault protection systems consist of three primary elements: transducers, relays,
and circuit breakers. Transducers are also known as voltage and current transformers.
The function of voltage and current transformers (VT and CT) is to transform the power
system’s voltages and currents to lower magnitudes and to provide signals to the relays
which are faithful reproductions of the primary quantities (Horowitz and Phadke, 1995).
For ground-fault detection, a single flux summing CT is used.
Current in each phase of a three-phase system can be mathematically described in terms
of positive, negative, and zero-sequence components. This method of describing a three
phase system is essential when dealing with asymmetrical faults. An example of an
asymmetrical fault on a three-phase system is a line-to-ground fault, while conversely an
example of a symmetrical fault would be a three-phase fault. The unbalanced phasors of
a three-phase system during a line-to-ground fault can be resolved into three balanced
systems of phasors, as shown in Figure 4. Resolving the unbalanced phasors of a faulted
three-phase system into a system of balanced phasors simplifies the calculation of the
fault current at the point of the fault. Once the fault current at the point of the fault is
determined, the current and voltage at various points in the system can be found
(Stevenson, 1975).
[Figure: the three balanced phasor sets: zero-sequence components (Va0, Vb0, Vc0), positive-sequence components (Va1, Vb1, Vc1), and negative-sequence components (Va2, Vb2, Vc2).]
Fig. 4. Sequence components of phase voltages.
Under normal operations, or when a fault occurs that does not involve ground, there is no
zero-sequence component and the sum of the phase currents Ia, Ib, and Ic is zero. When a
ground-fault occurs, the sum of the phase currents Ia, Ib, and Ic will not be zero. In this
case, the value resulting from the summation of the phase currents is known as the
ground current.
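This residual-current principle can be illustrated numerically. The sketch below (with illustrative phasor values, not simulation results) decomposes three phase currents into symmetrical components and shows that the residual 3·I0 is zero for a balanced set and nonzero once a ground-fault component is added:

```python
import cmath

def phasor(mag, deg):
    return cmath.rect(mag, deg * cmath.pi / 180)

a = phasor(1, 120)     # symmetrical-component operator, 1∠120°

def sequence_components(Ia, Ib, Ic):
    """Zero-, positive-, and negative-sequence components of phase a."""
    I0 = (Ia + Ib + Ic) / 3
    I1 = (Ia + a * Ib + a**2 * Ic) / 3
    I2 = (Ia + a**2 * Ib + a * Ic) / 3
    return I0, I1, I2

# Balanced currents: the residual (3*I0) seen by a toroidal CT is zero.
Ia, Ib, Ic = phasor(100, 0), phasor(100, -120), phasor(100, 120)
I0, _, _ = sequence_components(Ia, Ib, Ic)
print(f"{abs(3 * I0):.2f} A")      # 0.00 A -> no ground current

# Add an illustrative ground-fault component on phase a: 3*I0 is nonzero.
I0, _, _ = sequence_components(Ia + phasor(4.5, -45), Ib, Ic)
print(f"{abs(3 * I0):.2f} A")      # 4.50 A ground current
```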
The zero-sequence component only exists when the system experiences a fault involving
neutral. Thus, it is possible to detect a ground-fault by monitoring the zero-sequence
component. This is referred to as zero-sequence relaying. With zero-sequence relaying,
the three individual phase conductors are passed through the window of a single toroidal
CT while the grounding conductor is kept outside of the CT window (Novak et al., 2003).
This is shown in Figure 5.
[Figure: the three phase conductors A, B, and C passing through the window of a toroidal CT.]
Fig. 5. Toroidal current transformer.
The arrangement in Figure 5 allows the CT to sum the flux produced by the three phase
currents and allows the CT secondary to see the ground current if an imbalance in the
phase currents exists. The ground current sensed by the CT secondary will be directly
proportional to the current on the CT primary by the CT turns ratio as long as the CT is not
saturated.
Relays are the brains of the protection system. Relays process the data provided by the
voltage and current transformers to determine the operating state of the power system
(Horowitz and Phadke, 1995). If the power system is determined to be operating
abnormally, relays use previously established parameters to take corrective action. A
quick response to abnormal conditions is essential. Federal Regulations require that
high-voltage longwall mining systems use instantaneous relays inby the power center that
operate as soon as a decision is made, with no intentional time delay to slow down the
relay’s response.
Relays can be classified into different categories based upon the input parameters to
which they respond. Some of the different categories of relays are level detection,
magnitude comparison, differential comparison, phase angle comparison, pilot relaying,
and frequency sensing relaying (Horowitz and Phadke, 1995). Relays used in
underground coal mining exclusively use level detection as their operating parameter.
Level detection is the simplest principle of relay operation. Relays that use level
detection as their operating parameter to monitor current are also known as overcurrent
relays. When a predetermined level on an overcurrent relay is exceeded, the relay
initiates a trip sequence. This predetermined level is known as the relay’s pick-up
setting. There are many different types of relays, some of which are electromechanical
relays (which include induction disk and plunger-type), solid state relays, and
microprocessor based relays. High-voltage longwall power systems almost exclusively
use solid state relays.
2.5 Ground-Fault Protection
High-voltage longwall power systems have zero-sequence ground-fault overcurrent
protection located in the motor starting unit and power center. Figure 6 shows the
configuration of an outby 4,160-V longwall power system (Novak and Martin, 1996).
All outgoing circuits in the motor starting unit have instantaneous overcurrent ground-
fault protection. The protection in the power center is allowed to have a time delay of up
to 0.25 s in order to provide coordination with the protection located in the motor starting
unit (Novak et al., 2004). As will be discussed in greater detail later in this chapter,
Federal Regulations limit the maximum current through the NGR to 3.75 A for 4,160-V
longwall systems. The maximum ground-fault relay pick-up setting at the power
center is limited to 40% of the NGR current limit or 1.5 A for a 4,160-V system. The
maximum pick-up setting for the instantaneous ground-fault relays in the motor-starting
unit is 0.125 A.
[Figure: one-line diagram of the outby 4,160-V longwall power system. A 13.8-kV input feeds a 5-MVA power center; 4,160-V circuits supply the motor-starting unit, which feeds the shearer (1,200 hp total), stage loader (500 hp total), crusher (250 hp total), AFC headgate motors 1 and 2 (800 hp each), and the AFC tailgate motor (800 hp) over cables of roughly 1,200 to 2,000 ft; 480-V and 120-V circuits supply auxiliary loads, lighting, a welder, and headgate controls, with data, emergency stop, lockout, PTO, and methane monitor circuits carried alongside.]
Fig. 6. Configuration of an outby 4,160-V longwall.
2.6 Ground-Fault Relaying Selectivity
There are three general terms which define the success of a relay operation - reliability,
dependability, and security (Horowitz and Phadke, 1995). The term reliability refers to
the degree of certainty that a relay will perform as intended. There are two possible ways
that a relay can be unreliable: it can fail to operate when it should, or it can operate when
it should not. The reliability of relays can be described by the
terms dependability and security. The term dependability is defined as the measure of
certainty that a fault will be cleared. The term security is defined as the measure of
certainty that only the correct relay will operate to clear the fault. Power systems in
underground coal mining tend to be biased towards dependability at the expense of
security, as it is imperative that faults be cleared as soon as possible to limit the total
amount of energy dissipated during the fault event.
The property of security is defined topologically within a power system by regions.
These regions are known as zones of protection. A secure relay will only operate for a
fault within its assigned zone (Horowitz and Phadke, 1995). The standard for designing
power system protection is to have overlapping zones of protection. This ensures that all
regions of the power system are protected and that a backup is provided in the event of
protection equipment failure. An example of the relaying scheme for a 4,160-V longwall
power system is shown in Figure 7.
[Figure: overlapping zones of protection. Relay R7 at the power center defines Zone 2, which overlaps the Zone 1 relays in the motor-starting unit: R1 (shearer motor), R2 (stage loader motor), R3 (crusher motor), R4 (AFC headgate motor 1), R5 (AFC headgate motor 2), and R6 (AFC tailgate motor).]
Fig. 7. Relay protection scheme.
When a ground-fault occurs within the shearer circuit, the instantaneous relay shown in
Figure 7 as R1 should operate, effectively removing the faulted circuit. If R1 fails to
operate, R7 should operate after the specified time delay of up to 0.25 s. In this case, R7
is considered a backup to R1, and the 0.25 s time delay is allowed for coordination.
When R1 operates properly, the system is selective. If R1 fails to operate and the relays
R2 – R6 operate, relay selectivity is lost. Selective relaying is the process of detecting
abnormal conditions and providing quick isolation of the abnormality while limiting the
amount of disruption to the entire power system. It is imperative that a ground-fault be
cleared as soon as possible, and that the protection system is selective when clearing the
fault. When multiple relays spuriously trip on an unselective system, power is often
turned back on in an effort to locate the fault. After the power has been turned back on,
the relays will trip again, but only after more power has been re-applied at the point of
fault (K-TEC, 1991). Selective relaying is essential in power system protection as it
reduces the troubleshooting necessary to locate a fault, thus reducing the time that miners
are exposed to the faulted system. Equipment downtime is also reduced.
2.7 Federal Regulations
Federal Regulations pursuant to the use of high-voltage longwall mining systems were
enacted into law in March 2002 (Basar and Novak, 2003). These regulations can be
found in the updated 30 CFR Parts 18 and 75 (Title 30CFR). The purpose of these
regulations is to ensure miner safety by reducing the likelihood of fire, explosion, and
shock hazards by citing requirements for electrical enclosures, circuit protection, and
personal protective equipment (USBM, 1997). From the inception of high-voltage
longwalls in 1986 until the time that the new Federal Regulations were enacted into law,
operators of high-voltage longwalls were required to file for a Petition for Modifications
on a case-by-case basis. Filing Petition for Modifications is a means for operators to
request a modification of a mandatory safety standard with the stipulation that the
modification provides the same level of safety as is provided by the existing standard.
When the first Petition for Modifications was proposed for a 4,160-V longwall mining
system in 1986, the current allowed to flow through the NGR was limited to 3.75 A and
the ground-fault relay pick-up setting was mandated at 0.125 A (Novak and Martin,
1996). During the early stages of high-voltage utilization, however, a NGR current limit
of 0.5 A and a ground-fault relay pick-up setting of 0.100 A became the generally
accepted standard for 4,160-V systems (Novak et al., 2003). These values were initially
proposed by industry to help gain approval for the required Petition for Modifications. In
March 2002, the updated 30 CFR Parts 18 and 75 reversed the stance on NGR current
limits and ground-fault relay pick-up settings, and returned them to the original values
that were suggested for the first high-voltage longwalls in 1986.
Chapter 3. Model Development
3.1 General
A typical outby 4,160-V longwall power system is modeled in this chapter. The sizes of
the equipment components in the model are chosen to represent an average size system.
The following sections describe various aspects of the model, including the computer
program used to simulate the system and the premises used to determine the values
assigned to the equipment components.
3.2 PSpice
PSpice is a member of the SPICE family of circuit simulators. The acronym SPICE
stands for Simulation Program with Integrated Circuit Emphasis. PSpice was the first
SPICE-based simulator available for personal computers, and has been continually
updated since its release in 1984. The circuit simulation program PSpice® Version 8 was
used to simulate the system. PSpice has the capability of performing transient analysis of
a complex circuit while providing voltage and current waveforms at nodes and branches
throughout the given circuit.
3.3 Analysis Using PSpice Program
PSpice allows for a circuit to be drawn using graphic symbols that are stored in the
program’s internal symbol library. The attributes of the symbols can then be assigned
values. Once the circuit is drawn on a schematic page, a text file (“.cir”) is automatically
created for the circuit. This text file is also known as a netlist. A netlist is a list of
components and the nodes to which the components are connected. When a simulation is
initiated, PSpice reads from the netlist and then performs the requested analysis. The
result of the simulation is then stored in a text output file (".out") and a binary data file. The
result of the simulation can then be viewed graphically using an internal graphic viewer
which has the ability to plot voltage and current waveforms at locations throughout the
circuit (eCircuit Center).
3.4 Model Description
The circuit model shown in Figure 8 was developed for the computer analysis. The basis
of this circuit model was developed by Novak (2001-a, 2001-b). Novak’s model has
been altered to improve the topological representation of a longwall power system.
ground current as it returns to the neutral of the transformer during ground-fault events.
Determining the correct direction of the ground current is necessary for evaluating the
applicability of a directional relaying scheme. Also, the resistances of the ground
conductors were altered to represent their actual value. Novak’s model used the same
value of resistance for both the phase and ground conductors. The improved
representation of the ground conductors is necessary to ensure the accuracy of the safety
investigation involving touch potential. Also included in the model is a 0.1 µΩ resistor
located between the motor starting unit and the first value of cable capacitance. This
resistor is insignificant to the calculations but is necessary as PSpice requires a node to
measure current values. The zero-sequence voltages and currents are measured at this
location.
3.5 Component Modeling
This section draws heavily upon research performed by Novak and Sottile (2002). The
premise behind the component modeling is acceptable for performing the transient
analysis (Glover and Sarma, 2002). The following subsections provide the detailed
calculations of the values for the various components.
3.5.a Transformer
The secondary of the power center transformer is modeled as three voltage sources with
series impedances connected in a wye configuration. The three voltage sources are
modeled as 2,400∠0°V, 2,400∠−120°V, and 2,400∠120°V. The series impedance of
the transformer is based upon a transformer impedance of 5% with an X/R ratio of 4.
The resistance and inductance of the transformer are calculated as follows:
The phase angle for the impedance is calculated by

φ = tan⁻¹(X/R) = tan⁻¹(4/1) = 75.96°

The per-unit impedance of the transformer can now be expressed as

Z_pu = 0.05∠75.96° = 0.0121 + j0.0485 pu

The base impedance at the transformer's secondary is given by

Z_Base = kV_Base² / MVA_Base = (4.16)² / 5 = 3.46 Ω

and the impedance for the transformer, referred to the secondary, can be obtained from
Z = Z_pu × Z_Base = (0.0121 + j0.0485) × 3.46 = 0.0419 + j0.1678 Ω

PSpice requires that impedance be input in terms of its resistance and inductance or
capacitance. The transformer inductance is calculated by

L = X_L / ω = 0.1678 / (2π·60) = 0.445 mH

and the transformer's resistance is R = 0.0419 Ω.
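The same arithmetic is easy to script when repeating it for other transformer ratings; the sketch below simply reproduces the values above:

```python
import cmath, math

MVA_base = 5.0         # transformer rating (MVA)
kV = 4.16              # secondary line-to-line voltage (kV)
z_pu = 0.05            # transformer impedance (per unit)
x_over_r = 4.0         # X/R ratio
w = 2 * math.pi * 60   # angular frequency (rad/s)

phi = math.atan(x_over_r)               # 75.96 deg impedance angle
Z_base = kV**2 / MVA_base               # 3.46 ohm
Z = cmath.rect(z_pu * Z_base, phi)      # 0.0419 + j0.1678 ohm

print(f"R = {Z.real:.4f} ohm, L = {Z.imag / w * 1e3:.3f} mH")
# R = 0.0419 ohm, L = 0.445 mH
```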
3.5.b Neutral Grounding Resistor
The neutral grounding resistor (NGR) in the model is connected between the neutral of
the transformer and ground. The size of the NGR is determined by the maximum
ground-fault current allowed by Federal Regulation which is 3.75 A for 4,160-V systems.
The resistive value of the NGR required to limit the ground-fault current to this value is
calculated by:
R_NGR = V_1φ / I_gf(max) = (4,160 / √3) / 3.75 = 640 Ω
3.5.c Motors
The motors are modeled as three fixed wye-connected impedances. This method of
modeling provides sufficient accuracy for transient analysis of the system during fault
events. The impedances of the motors are calculated with the assumption that the motors
are operating at rated capacities with typical power factors and efficiencies. The
calculations for the equivalent impedances of the various motors follow:
3.5.c.i Headgate and Tailgate Motors
Around 83% of 4,160-V longwall mining systems in operation use three motors to drive
the armored face conveyor (AFC); the other 17% use two motors (Fiscor, 2004). The
model is developed to represent an average size 4,160-V system. Therefore, three motors
are modeled. Two of the three motors are located at the headgate while the other is
located at the tailgate. Because of the large horsepower ratings, each of the three motors
is supplied by a separate power cable. All motors have identical ratings, as shown in
Table 2.
The values of motor resistance and impedance are summarized in Table 6.
Table 6. Summarized motor values.

Equipment        Rated Power [hp]   Power Factor   Efficiency [%]   Equivalent R [Ω]   Equivalent L [mH]
Shearer          1,200 total        0.90           95               14.09              18.10
Stage Loader     500 total          0.90           95               35.71              45.86
Crusher          250                0.90           95               71.41              91.72
AFC Headgate 1   800                0.90           95               22.31              28.67
AFC Headgate 2   800                0.90           95               22.31              28.67
AFC Tailgate     800                0.90           95               22.31              28.67
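The per-phase values in Table 6 follow from the rated horsepower, power factor, and efficiency. The sketch below reproduces the 800-hp AFC motor entries (746 W per hp is assumed; the other ratings can be checked the same way):

```python
import math

V_LL = 4160.0          # line-to-line voltage (V)
w = 2 * math.pi * 60

def motor_equivalent(hp, pf=0.90, eff=0.95):
    """Per-phase R (ohm) and L (mH) of a fixed wye-connected motor model,
    drawing rated power at the stated power factor and efficiency."""
    p_in = hp * 746 / eff          # electrical input power (W)
    s = p_in / pf                  # total apparent power (VA)
    z = V_LL**2 / s                # per-phase wye impedance magnitude (ohm)
    x = z * math.sin(math.acos(pf))
    return z * pf, x / w * 1e3

r, l = motor_equivalent(800)       # AFC headgate/tailgate motor
print(f"R = {r:.2f} ohm, L = {l:.2f} mH")   # R = 22.31 ohm, L = 28.67 mH
```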
3.5.d Cables
The impedance values assigned to the cables in the model were determined from data
provided in a mining cable handbook (Anaconda, 1977). The resistance, inductance, and
capacitance of the cables are included in the model. A typical 5-kV SHD-GC cable is
shown in Figure 9 (General Cable, 2004).
Fig. 9. Picture of a 5-kV SHD-GC cable.
The capacitance in the model is solely from the cables - the capacitance from the
transformer and motor windings is ignored. In reality, the cable capacitance is distributed
along the entire length of the cable, but for simplicity the capacitance is shown in the
model as a lumped value that is halved and connected from phase-to-ground at both ends
of the cable. This is referred to as a π configuration and is considered to be standard
procedure for modeling cable capacitance (Chapman, 2002). Capacitance values for the
typical 5-kV SHD-GC cables used in the model are shown in Table 7 (Novak et al., 2004).
Table 7. Cable capacitance values.
Conductor Size Capacitance (per 1000 ft.)
#2 0.147 µF
#1 0.160 µF
The values of resistance and inductance for a cable are a function of the cable’s size and
length. The values of resistance and inductance are inserted into the model as lumped
impedances connected in a π configuration. The values of resistance, reactance, and
inductance for the cables used in the model are summarized in Table 8.
Table 8. Cable resistance, reactance, inductance.
Impedance of Cables (per 1,000 feet)
Cable Size Resistance Reactance Inductance
#6 AWG 0.552 Ω 0.043 Ω 0.114 mH
#5 AWG 0.438 Ω 0.042 Ω 0.111 mH
#2 AWG 0.218 Ω 0.038 Ω 0.101 mH
#1 AWG 0.173 Ω 0.036 Ω 0.0955 mH
The nomenclature assigned to cables is governed by the size of the cable’s phase
conductors. For example, a three-phase #1 AWG (American Wire Gauge) cable has
three #1 AWG power conductors. To improve the accuracy of the model, the ground
conductors were assigned resistive values based upon their AWG size. As shown in
Figure 9, a 5-kV SHD-GC cable has two ground conductors. The two
ground conductors in a #1 AWG cable are #5 AWG while the two ground conductors in a
#2 AWG cable are #6 AWG (PD Wire & Cable, 2004). An equivalent single ground
conductor is shown in the model by combining the parallel ground conductors.
Determining a more accurate value for the inductance of the ground conductors is
unnecessary as the cable is primarily resistive. Therefore, for the simulations the ground
conductor’s inductance is given the same value as the phase conductor’s inductance. A
sensitivity analysis was performed to determine the effect of varying the value of ground
conductor inductance. The result of this analysis is provided in the forthcoming section
regarding sensitivity analyses. The values for the cables are shown in Table 9.
Chapter 4. Computer Modeling
4.1 Model Scenario
Simulations were performed using the model in Figure 8 to determine the system's
response during line-to-ground fault events. The results of the simulations are shown in
Table 10. The system model employed to obtain the values reported in Table 10 used a
frame contact resistance of 0.10 Ω, a grounding conductor resistance value equivalent to
the ground conductor size, and the standard inductance value determined by the phase
conductors. The phasor quantities are referenced to the system’s zero-sequence voltage.
A sensitivity analysis was performed on the model and is presented in the next section.
Table 10. Current sensed by ground-fault relays.

Current [A] sensed by ground-fault relays for faults at various locations
(rows: ground-fault relay location; columns: fault location)

Relay Location   Shearer      AFC Headgate 1   AFC Headgate 2   AFC Tailgate   Stage Loader   Crusher
Shearer          4.50 ∠143°   0.84 ∠-91°       0.84 ∠-91°       0.84 ∠-91°     0.84 ∠-89°     0.85 ∠-91°
AFC Headgate 1   0.50 ∠-89°   4.78 ∠138°       0.51 ∠-91°       0.50 ∠-91°     0.51 ∠-89°     0.51 ∠-91°
AFC Headgate 2   0.50 ∠-89°   0.51 ∠-91°       4.78 ∠138°       0.50 ∠-91°     0.51 ∠-89°     0.51 ∠-91°
AFC Tailgate     0.83 ∠-89°   0.84 ∠-91°       0.84 ∠-91°       4.54 ∠143°     0.84 ∠-89°     0.85 ∠-91°
Stage Loader     0.46 ∠-89°   0.46 ∠-91°       0.46 ∠-91°       0.46 ∠-91°     4.81 ∠138°     0.47 ∠-91°
Crusher          0.46 ∠-89°   0.46 ∠-91°       0.46 ∠-91°       0.46 ∠-91°     0.46 ∠-89°     4.83 ∠138°
As determined from the simulations, the minimum rms ground-fault current sensed by a
relay in a faulted circuit is 4.50 A. The maximum rms ground current sensed by a relay
in a non-faulted circuit is 0.85 A. It was also determined from the simulations that the
ground-fault current sensed by a relay in a faulted circuit lags the system's zero-sequence
voltage in every case by 138° to 143°. The ground current sensed by a relay in the
unfaulted circuits, as would be expected, leads the system's zero-sequence voltage by
approximately 90°. Figure 10 shows the waveforms of the ground-fault current sensed by
the relay in the shearer circuit for a fault at the shearer, the system's zero-sequence
voltage, and the ground current sensed by the relay in the tailgate circuit. Only the
waveform for the tailgate circuit is shown, as the ground currents sensed by the relays in
the unfaulted circuits are all in phase with each other.
Sensitivity Analysis 1. Effect of Frame Contact Resistance
An analysis was performed to determine the effect that varying the frame contact
resistance has on the simulation results. Figure 11 shows where the frame contact
resistance was varied for the sensitivity analysis. For simplicity only the tailgate circuit
is shown in the figure.
Fig. 11. Sensitivity analysis for frame contact resistance.
A value of 0.10 Ω is chosen as the most realistic value for frame contact resistance.
Although the frames of the equipment are bolted together to form a single conductor,
there will always be some contact resistance from attributes such as paint and rust. To
determine the effect of varying the contact resistance the following values were also
simulated: no resistance, 1.0 Ω, 10.0 Ω, and 100 Ω. The tests were performed for a
Phase C-to-ground fault in the shearer circuit. The results are shown in Table 11.
Table 11. Sensitivity analysis for frame contact resistance.

Current [A] sensed by ground-fault relays over various frame contact resistances
(fault at the shearer)

Relay Location   Bolted       0.10 Ω       1.0 Ω        10.0 Ω       100 Ω
Shearer          4.51 ∠143°   4.50 ∠143°   4.50 ∠143°   4.50 ∠143°   4.50 ∠143°
AFC Headgate 1   0.50 ∠-89°   0.50 ∠-89°   0.50 ∠-89°   0.50 ∠-91°   0.50 ∠-93°
AFC Headgate 2   0.50 ∠-89°   0.50 ∠-89°   0.50 ∠-89°   0.50 ∠-91°   0.50 ∠-93°
AFC Tailgate     0.83 ∠-89°   0.83 ∠-89°   0.83 ∠-89°   0.83 ∠-91°   0.83 ∠-93°
Stage Loader     0.46 ∠-89°   0.46 ∠-89°   0.46 ∠-89°   0.46 ∠-91°   0.46 ∠-93°
Crusher          0.46 ∠-89°   0.46 ∠-89°   0.46 ∠-89°   0.46 ∠-91°   0.46 ∠-93°
The result of the sensitivity analysis shows that the frame contact resistance has little
effect on the ground-fault current sensed by the relay in the faulted circuit. The phasor of
the ground current sensed by the relay in the unfaulted circuits responded as expected:
as the resistance increased, the phase angle correspondingly increased.
Sensitivity Analysis 2. Effect of Ground Conductor Inductance
An analysis was also performed to determine the effect that varying the ground conductor
inductance has on the simulation results. Figure 12 shows where the ground conductor
inductance was varied for the sensitivity analysis. For simplicity only the tailgate circuit
is shown in the figure.
Fig. 12. Sensitivity analysis for ground conductor inductance.
The values of ground conductor inductance were arbitrarily doubled and tripled. The
simulations were performed for a Phase C-to-ground fault in the shearer circuit. The
results of the simulations are shown in Table 12.
Table 12. Sensitivity analysis for ground conductor inductance.

Current [A] sensed by ground-fault relays over various ground conductor inductance values
(fault at the shearer)

Relay Location   0.191 mH [1×]   0.392 mH [2×]   0.573 mH [3×]
Shearer          4.50 ∠143°      4.51 ∠143°      4.51 ∠143°
AFC Headgate 1   0.50 ∠-89°      0.50 ∠-86°      0.50 ∠-91°
AFC Headgate 2   0.50 ∠-89°      0.50 ∠-86°      0.50 ∠-91°
AFC Tailgate     0.83 ∠-89°      0.83 ∠-86°      0.83 ∠-91°
Stage Loader     0.46 ∠-89°      0.46 ∠-86°      0.46 ∠-91°
Crusher          0.46 ∠-89°      0.46 ∠-86°      0.46 ∠-91°
The result of this sensitivity analysis shows that the ground conductor inductance over a
reasonably defined range has little effect on the fault currents.
Sensitivity Analysis 3. Effect of Varying Faulted Phase
Finally, an analysis was performed to determine the effect that varying the faulted phase
has on the simulation results. Figure 13 shows the location of the phase-to-ground fault
that was varied in the sensitivity analysis.
Fig. 13. Sensitivity analysis for varying faulted phase.
For the sake of simplicity, the simulations were primarily performed with a Phase C-to-
ground fault. To verify the response of the model, Phase A-to-ground and Phase B-to-
ground faults were also tested. The sensitivity analysis was performed for a fault in the
shearer circuit. The results of the simulations are shown in Table 13.
Table 13. Sensitivity analysis for varying faulted phase.

Current [A] sensed by ground-fault relays for Phase A, B, and C faults
(fault at the shearer)

Relay Location   Phase A      Phase B      Phase C
Shearer          4.51 ∠143°   4.50 ∠143°   4.50 ∠143°
AFC Headgate 1   0.50 ∠-90°   0.50 ∠-91°   0.50 ∠-89°
AFC Headgate 2   0.50 ∠-90°   0.50 ∠-91°   0.50 ∠-89°
AFC Tailgate     0.83 ∠-90°   0.83 ∠-91°   0.83 ∠-89°
Stage Loader     0.46 ∠-90°   0.46 ∠-91°   0.46 ∠-89°
Crusher          0.46 ∠-90°   0.46 ∠-91°   0.46 ∠-89°
These results show that the phase in which the fault occurs has no significant effect on
the magnitude or phase angle of either the ground-fault current sensed by the relay in the
faulted circuit or the ground current sensed by relay in the unfaulted circuits.
4.3 Directional Relaying
As determined from the simulations, when a ground-fault occurs on a 4,160-V longwall
power system, both the magnitude and phase angle of the fault currents are affected. The
simulations show that the ground-fault current sensed by the relay in the faulted circuit
and the ground current sensed by the relay in the unfaulted circuits are out of phase with
respect to each other. A group of protective relays exist that can identify changes in
phasor quantities. Directional relays, also known as phase comparison relays, compare
the relative phase angles between two ac quantities and use this information as a trip
parameter (Horowitz and Phadke, 1995). Directional relays require two inputs: the phase
angle of the current phasor, which varies with the direction of the fault, and a reference,
or polarizing quantity, that is independent of the fault location. For ground-fault relays,
the polarizing quantity is almost always the zero-sequence voltage (Andrichak and Patel).
The zero-sequence voltage can be used as the polarizing quantity as it is always in the
same direction regardless of the fault location. The zero-sequence voltage can be
obtained across the open corner of a wye-grounded, broken delta voltage transformer.
The sum of the three line-to-neutral voltages Ea, Eb, and Ec is zero for balanced
conditions and for faults that do not involve ground (Horowitz and Phadke, 1995).
The simulations show that the capacitive charging current returning in the unfaulted
circuits has a phase angle that leads the zero-sequence voltage by almost 90°, while the
ground-fault current in the faulted circuit lags the zero-sequence voltage by around 140°.
A simplified one-line diagram of a ground-fault scenario showing the returning
capacitive charging current is shown in Figure 14. The ground-fault relay in the
unfaulted circuit will sense a ground current equal to the capacitive charging current.
[Figure: simplified one-line diagram of a ground-fault. The relay in the unfaulted circuit sees the returning capacitive charging current I_C, which leads the zero-sequence voltage 3E0 by about 89°; the relay in the faulted circuit sees the ground-fault current I_F, which lags 3E0 by about 143° as it returns to the NGR.]
Fig. 14. Phase angle comparison.
The scenario shown in Figure 14 is repeated in every simulated ground-fault event. As
determined by the sensitivity analyses, the phase angles of the fault currents polarized
against the system's zero-sequence voltage are unaffected by frame contact resistance,
ground-conductor inductance, and the phase that goes to ground.
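To make the trip decision concrete, the sketch below encodes this angle-based discrimination in a few lines of Python. The pick-up level, the angle window, and the phasor values are illustrative assumptions drawn from the simulation results above, not a commercial relay characteristic:

```python
import cmath

def directional_gf_trip(Ig, V0, pickup=0.125, window=(-180.0, -90.0)):
    """Trip only if the ground current magnitude exceeds the pick-up level
    AND its angle relative to the zero-sequence polarizing voltage falls
    in the lagging window (an assumed characteristic, for illustration)."""
    if abs(Ig) < pickup:
        return False
    rel_angle = cmath.phase(Ig / V0) * 180 / cmath.pi
    return window[0] <= rel_angle <= window[1]

deg = cmath.pi / 180
V0 = cmath.rect(1.0, 0.0)                    # polarizing reference, 1∠0°
I_faulted = cmath.rect(4.50, -143 * deg)     # lags V0 by 143°
I_unfaulted = cmath.rect(0.85, 89 * deg)     # leads V0 by ~90°

print(directional_gf_trip(I_faulted, V0))    # True  -> trip
print(directional_gf_trip(I_unfaulted, V0))  # False -> restrain (0.85 A exceeds
                                             # the pick-up, but the angle leads)
```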
4.4 Effect of Neutral Grounding Resistor Value
An analysis was performed to determine the effect that the value of the neutral grounding
resistor (NGR) has on the phase angle of the fault currents. For the analysis, the value of
the NGR was varied from having zero to infinite resistance. In other terms, the system
was modeled over the range of being solidly grounded to ungrounded. Figure 15 shows
the phase angle of the ground-fault current polarized against the system’s zero-sequence
voltage over a range of NGR values. The phase angle of the ground-fault current and the
corresponding NGR value are shown in increments of 10°.
[Figure: phasor diagram of the ground-fault current angle, polarized against the zero-sequence voltage (3E0), as the NGR is varied. The angle moves from 180° for a solidly grounded system (0 Ω) through intermediate NGR values (on the order of 120 Ω to 4.8 kΩ) toward 90° for an ungrounded system; the ground current in the unfaulted circuits remains near 90° leading.]
Fig. 15. Effect of NGR on the phase angle of the fault current.
It was discovered that over the NGR range of zero resistance to infinite resistance, the
ground-fault current sensed by the relay in a faulted circuit always lags the system's zero-
sequence voltage by 90° to 180°. When the system is ungrounded (NGR = ∞ Ω), the
ground-fault current lags the zero-sequence voltage by 90°. When the system is solidly
grounded (NGR = 0 Ω), the ground-fault current is 180° out of phase with the zero-
sequence voltage. Figure 15 shows that the ground-fault current sensed by the relay in
the faulted circuit is always in the same quadrant, while the ground current sensed by the
relay in the unfaulted circuits always leads the system's zero-sequence voltage by
approximately 90°.
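The trend in Figure 15 can be approximated with a single-line zero-sequence model in which the NGR and the system's total phase-to-ground capacitance are the only return paths. This is a rough sketch (it neglects the series impedance of the cables, which is why it lands a few degrees below the simulated 138° to 143° at 640 Ω); the 4.128 µF total capacitance is the value used later in Figure 17:

```python
import math

C_sys = 4.128e-6                       # total system capacitance (F)
Xc = 1 / (2 * math.pi * 60 * C_sys)    # ~643 ohm capacitive reactance

for R_ngr in (0.0, 120.0, 640.0, 1500.0, 4800.0, float("inf")):
    # Lag of the faulted-circuit ground current behind the zero-sequence
    # voltage: 180 deg when solidly grounded (resistive return path),
    # 90 deg when ungrounded (purely capacitive return path).
    lag = 90 + math.degrees(math.atan2(Xc, R_ngr))
    print(f"NGR = {R_ngr:>7.0f} ohm -> fault current lags by {lag:5.1f} deg")
```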
4.4.a Directional Relay Availability
Directional ground-fault relays are available in electromechanical, solid state, and digital
designs. There are two approaches to providing directionality to an overcurrent relay:
directional control and directional overcurrent (Horowitz and Phadke, 1995). The design
of a directional control relay is such that the overcurrent element will not operate until
after the directional element operates. Although this method is the most secure, it is not
satisfactory in underground coal mining applications because the two operating times add
in series, resulting in an additional delay. The other approach is the directional
overcurrent method. Directional overcurrent relays have independent contacts connected
in series with the circuit breaker trip coil. This allows both relays to begin operating
simultaneously. A benefit of the directional overcurrent scheme is that the operating time
of the directional unit is so small that it can be neglected (Horowitz and Phadke, 1995;
GE, 2002).
4.5 Raising the Pick-up Setting
The results of the simulations as reported in Table 10 provide evidence that ground-fault
relaying selectivity could easily be improved while maintaining the use of overcurrent
relays if the pick-up setting for these relays is raised to a value less than the minimum
ground-fault current sensed by a relay in a faulted circuit but greater than the maximum
ground current sensed by a relay in the unfaulted circuits. The simulations show that
there is an ample range available to select a pick-up setting that is between these two
values for all ground-fault events.
Simulations to determine the touch potential from raised frame potentials were performed
at various levels of body resistance. A resistor was inserted in parallel with the ground
conductor to represent a person’s body resistance. A line-to-ground fault was then
inserted. The simulations were performed to determine the amount of current that would
flow through the resistor. Figure 16 shows the location of the resistor that was inserted.
For simplicity only the tailgate circuit is shown in the figure.
[Figure: tailgate circuit of the model with a resistor representing body resistance inserted in parallel with the ground conductor.]
Fig. 16. Simulation of safety analysis.
During a ground fault, the frame of the equipment is elevated above ground potential for
the duration of the clearing time required by the protection system (Sottile and Novak,
2001). The current
that flows through the resistor is equivalent to the current that would flow through the
body of a victim if the victim were touching the frame and standing at ground potential.
Table 14 summarizes the simulations. The body current was recorded for every ground-
fault scenario. A body resistance of 500 Ω was used as this is the standard value used for
performing safety analysis (Sottile and Novak, 2001).
Table 14. Current through a 500 Ω body resistance.

Current [mA] through a 500 Ω resistance in parallel with the ground conductor
(rows: touch location; columns: fault location)

Touch Location   Shearer   Headgate 1   Headgate 2   Tailgate   Stage Loader   Crusher
Shearer          1.17      0.31         0.31         0.36       0.34           0.34
Headgate 1       0.32      1.03         0.29         0.33       0.31           0.31
Headgate 2       0.33      0.29         1.03         0.33       0.31           0.31
Tailgate         0.36      0.31         0.31         1.18       0.34           0.34
Stage Loader     0.35      0.31         0.31         0.39       1.11           0.33
Crusher          0.35      0.31         0.31         0.35       0.33           1.12
By comparing the results of the simulations shown in Table 14 to the value calculated
with Dalziel's fibrillation equation, it is determined that as long as the protection system
operates as designed, no risk of shock is posed from elevated frame potentials. The
maximum currents determined from the simulations would be only barely perceptible to
the touch. The simulations were also run using body resistances of 1.0 kΩ and 2.0 kΩ;
the resulting body currents were proportionally lower than the values shown in
Table 14. The simulations show that as far as touch
potentials are concerned, the ground-fault relay pick-up setting can be increased to a
reasonable level without compromising the safety of personnel.
An argument made in the defense of the low ground-fault relay pick-up setting on 4,160-
V longwall power systems is that the low pick-up setting reduces the magnitude of low-
level faults that can potentially persist in the system. A low-level fault is defined in this
thesis as a fault whose magnitude is below the relay’s pick-up setting. If the relay’s pick-
up setting is not exceeded, a low-level fault can remain in the system and continually
dissipate power as a function of the fault current and the fault resistance. This will
continue until the fault either clears itself or causes further degradation of the system
components and eventually exceeds the relay pick-up setting. Low-level faults can occur
during the initial breakdown of motor insulation, from cable splices that begin to fail, and
from the tracking of leakage current through a conductive material.
The remainder of this section compares the power dissipated through a fault resistance
whose value is selected to limit the ground-fault current to just below the ground-fault
relay’s pick-up setting. For simplicity, this current will be set at the relay’s pick-up
current. This scenario can occur as relay tolerances can change with age and use
(Horowitz and Phadke, 1995).
Figure 17 is a simplified diagram of a line-to-ground fault in a 4,160-V longwall power
system utilizing a 640 Ω NGR and a 0.125 A relay pick-up setting. The sum of the
system’s capacitance is inserted in parallel with the NGR. The fault resistance for this
scenario is solved using PSpice. It is determined that the fault resistance for this
configuration must be below 18.9 kΩ for a ground-fault current of ≥ 0.125 A to occur.
[Figure: single-line fault diagram. A 2,402-V line-to-neutral source drives a 0.125-A ground-fault current through a fault resistance R_F = 18,892 Ω, which returns through the 640-Ω NGR in parallel with the 4.128-µF system capacitance.]
Fig. 17. Fault diagram with system capacitance included.
A calculation was performed to determine the amount of power dissipated through the
fault resistance in this low-level ground-fault scenario. The calculation is as follows:
Parameter                  Value
Rated Voltage              4,160 V
NGR Limit                  3.75 A
Low-level Ground-fault     0.125 A

R_NGR = V_1φ / I_gf(max) = (4,160 / √3) / 3.75 = 640 Ω

R_f = (2,402∠0° V / 0.125 A) − (640 Ω ∥ 4.128 µF) = 18,895 Ω

P = I²R = (0.125 A)² × 18,895 Ω = 0.295 kW
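These numbers, and the 1.0-A case used later in this section, can be checked with a short script. It is a sketch that solves |V / (R_f + Z_NGR∥C)| = I_pickup for the real fault resistance; the small differences from the hand calculation above come from carrying the complex parallel impedance exactly:

```python
import math

V = 2402.0                 # line-to-neutral voltage (V)
R_ngr = 640.0              # neutral grounding resistor (ohm)
C = 4.128e-6               # total system capacitance (F)

Zc = 1 / (1j * 2 * math.pi * 60 * C)       # capacitive impedance
Zn = (R_ngr * Zc) / (R_ngr + Zc)           # NGR in parallel with capacitance

def low_level_fault(I_pickup):
    """Fault resistance that limits the ground-fault current to the pick-up
    level, and the power dissipated in that resistance."""
    # Solve |V / (Rf + Zn)| = I_pickup for the (real) fault resistance Rf.
    mag = V / I_pickup
    Rf = math.sqrt(mag**2 - Zn.imag**2) - Zn.real
    return Rf, I_pickup**2 * Rf

for I in (0.125, 1.0):
    Rf, P = low_level_fault(I)
    print(f"pick-up {I:5.3f} A: Rf = {Rf:8,.0f} ohm, P = {P/1000:.3f} kW")
# pick-up 0.125 A: Rf =   18,892 ohm, P = 0.295 kW
# pick-up 1.000 A: Rf =    2,059 ohm, P = 2.059 kW
```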
The calculations establish that 0.295 kW of power could dissipate through the fault
resistance of a low-level ground-fault in a 4,160-V longwall mining system with a 0.125
A pick-up setting. The same series of calculations were performed for a 995-V power
system using a single-line diagram similar to Figure 17. Figure 18 shows the results of
various system configurations, including a calculation performed for a 995-V system with
a NGR current limit of 15 A and a 6.0 A pick-up setting. A 6.0 A low-level ground-fault
was used for this 995-V system.
[Figure: bar chart of power (kW) dissipated by the fault resistance for four configurations: 995 V with a 25-A NGR current limit and 10-A low-level ground-fault; 995 V with a 15-A NGR current limit and 6-A low-level ground-fault; 4,160 V with a 3.75-A NGR current limit and 0.125-A low-level ground-fault; and 4,160 V with a 3.75-A NGR current limit and 1.00-A low-level ground-fault.]
Fig. 18. Power dissipated by the fault resistance during a ground-fault.
Figure 18 shows that more power can potentially dissipate through the fault resistance of
a low-level ground-fault in a 995-V system with a 6.0 A pick-up setting than could
potentially dissipate through the fault resistance of a low-level ground-fault in a 4,160-V
system with a 0.125 A pick-up setting. The calculations show that essentially the same amount of power
would dissipate through the fault resistance of a low-level ground-fault in a 4,160-V
system with a 1.0 A pick-up setting as would dissipate through the fault resistance of a
low-level ground-fault in a 995-V system with a 6.0 A pick-up setting.
Figure 19 shows the beneficial ramification of raising the ground-fault relay pick-up
setting to 1.0 A on a 4,160-V system. The graph shows the fault currents sensed by the
current transformers (CT) in the six separate fault scenarios. The elevated bars shown in
the graph are the ground-fault currents sensed by the CT’s in the faulted circuits, while
the lower bars are the ground current sensed by the CT’s in the unfaulted circuits.
Fig. 19. Suggested pick-up setting for a 4,160-V longwall mining system.
Figure 19 shows that if the ground-fault relay pick-up setting was raised to 1.0 A, only
the relay in the faulted circuit would sense a ground-fault current great enough to cause a
trip sequence, thereby improving the system’s selectivity.
4.6 Summary
A model of a 4,160-V longwall power system was developed in PSpice and its response
was analyzed for line-to-ground fault events. The results of the analysis show that the
protective relaying scheme currently employed on 4,160-V systems may not be selective.
The simulations show that selectivity may be defeated during ground-fault events as the
capacitive charging current returning through the unfaulted circuits exceeds the Federally
Regulated ground-fault relay pick-up setting of 0.125 A. A sensitivity analysis was
performed on the model to determine the effect of varying the frame contact resistance,
ground conductor impedance, and the phase that goes to ground. The sensitivity
analysis showed that these attributes do not significantly affect the results.
Two potential changes were identified that could improve ground-fault relaying
selectivity. These two methods were the application of a directional relaying scheme and
raising the ground-fault relay pick-up setting. Directional relays were found to be
applicable as the ground-fault current sensed by the relay in the faulted circuit lags the
system’s zero-sequence voltage while the ground current sensed by the relay in the
unfaulted circuits leads the system’s zero-sequence voltage. It was also determined that
relay selectivity could be improved by simply raising the relay pick-up setting on the
overcurrent relays to a value between the minimum ground-fault current sensed by the
relay in the faulted circuit and the maximum ground current sensed by the relay in the
unfaulted circuits. A safety analysis showed that raising the relay pick-up setting does
not increase the risk of shock from touch potential hazards.
Chapter 5. Conclusions
The continuing trend toward larger and more complex longwall mining systems has
resulted in a corresponding increase in the size of longwall components as well as the
standardization to high-voltage utilization. The increase in voltage over previous levels
has caused the industry to face complexities not experienced with the lower-voltage
systems.
In an effort to ensure safety, Federal Regulations have more stringent requirements for
high-voltage systems. Federal Regulations require that 4,160-V longwall power systems
use instantaneous overcurrent ground-fault relays inby the power center that operate
when the ground current is ≥0.125 A. This low ground-fault relay pick-up setting
increases the potential for the capacitive charging current that returns through the
unfaulted circuits during ground-fault events to cause spurious tripping. It is imperative
that spurious tripping be avoided and that a protection system be selective when clearing
faults. Selective relaying is essential in power system protection as it reduces the
troubleshooting necessary to locate a fault.
A typical outby 4,160-V longwall power system was modeled to test for selectivity. The
sizes of the components were chosen to represent an average size 4,160-V system. The
motors in the system were modeled as three wye-connected impedances. This method of
modeling provided sufficient accuracy for analysis of the system during ground-fault
events. The impedances of the motors were calculated with the assumption that the
motors are operating at rated capacities with typical power factors and efficiencies.
The simulations showed that ground-fault relaying selectivity is potentially lost during
ground-fault events as the capacitive charging current returning through the system in the
unfaulted circuits exceeds the Federal Regulation for the ground-fault relay pick-up setting.
As determined from the simulations, the minimum rms ground-fault current sensed by a
relay in a faulted circuit was 4.50 A while the maximum rms ground current sensed by a
relay in an unfaulted circuit was 0.85 A. It was also determined from the simulations that
the ground-fault current sensed by the relay in the faulted circuit always lags the system's
zero-sequence voltage by 138° to 143°. The ground current sensed by the relay in the
unfaulted circuits always leads the system's zero-sequence voltage by approximately 90°.
Two potential methods to improve ground-fault relaying selectivity were identified and
evaluated for their effectiveness. The two methods evaluated were the use of directional
ground-fault relay protection and raising the magnitude of the ground-fault relay pick-up
setting.
It was determined that directional relays have application to 4,160-V systems because,
when a ground-fault occurs, the ground-fault current sensed by the relay in the faulted
circuit lags the system's zero-sequence voltage while the ground current sensed by the relay in the
unfaulted circuits leads the system’s zero-sequence voltage. The system’s zero-sequence
voltage can be used as the polarizing quantity for a directional relaying scheme as the
phasor value of the system's zero-sequence voltage is independent of fault location.
It was also determined that ground-fault relaying selectivity could be improved by simply
raising the pick-up setting on the overcurrent ground-fault relays to a value between the
minimum ground-fault current sensed by the relay in the faulted circuit and the maximum
ground current sensed by a relay in the unfaulted circuits. A safety analysis showed that
increasing the ground-fault relay pick-up setting to a reasonable level would not
significantly increase the risk of shock from elevated frame potentials.
Calculations were also performed to determine the amount of power that is potentially
dissipated through the fault resistance of a low-level line-to-ground fault. During a low-
level ground-fault, a 4,160-V system with a 3.75 A NGR current limit and a 1.0 A pick-
up setting could dissipate 2.05 kW of power through the fault’s resistance. During an
equivalent low-level ground-fault, the currently permitted 995-V system with a 15 A
NGR current limit and a 6.0 A pick-up setting could dissipate 2.07 kW of power through
the fault’s resistance. Therefore, raising the ground-fault relay pick-up setting to 1.0 A
on a 4,160-V system would not cause any more power to dissipate during a low-level
ground-fault event than could potentially dissipate on a permitted 995-V system.
It was determined from the simulations that raising the ground-fault relay pick-up setting
to 1.0 A would improve ground-fault relaying selectivity on an average sized outby 4,160
V system. Raising the pick-up setting to 1.0 A is not a permanent solution, though, as the
continuing trend towards larger longwall mining systems will only exacerbate the issues
with ground-fault relaying selectivity. It was recently published that by March of 2005, a
longwall operator in the United States plans to increase its face width to 1,450 feet
(Hookham, 2004). The operator plans to use three 1,450 hp motors to power the armored
face conveyor. A system with similar component values was modeled, and it was
determined that the minimum rms ground-fault current sensed by a relay in a faulted
circuit would be 4.56 A and the maximum rms ground current sensed by a relay in an
unfaulted circuit would be 1.04 A. Therefore, the ground-fault relay pick-up setting for
this system would have to be raised above 1.0 A to improve relaying selectivity. As the
increase in longwall component size and panel dimensions will inevitably outpace the
rate at which the Federal Regulations governing the use of high-voltage longwalls are
updated, a directional relaying scheme should be considered as it is a practical long-term
solution to improving ground-fault relaying selectivity that is independent of longwall
component size and panel dimensions.
Suggested future research in this area would focus on confirming the accuracy of the
computer model that was used to determine a 4,160-V longwall mining system’s
response during a line-to-ground fault event. To verify the model’s response during a
line-to-ground fault event, field testing of an operating 4,160-V system would be
required. If properly recorded, the value of the ground-fault currents that occur during a
line-to-ground fault event on an operating 4,160-V system could be used to verify the
computer model. The ground-fault currents could be monitored at the system’s current
transformers using standard equipment. There are two possible methods to gather the
DEVELOPMENT AND IMPLEMENTATION OF A STANDARD METHODOLOGY
FOR RESPIRABLE COAL MINE DUST CHARACTERIZATION WITH
THERMOGRAVIMETRIC ANALYSIS
Meredith Lynne Scaggs
ACADEMIC ABSTRACT
The purpose of this thesis is to examine the potential of a novel method for analysis and
characterization of coal mine dust. Respirable dust has long been an industry concern due to the
association of overexposure with the development of occupational lung disease. Recent trends
of increased incidence of occupational lung disease in miners, such as silicosis and Coal Workers'
Pneumoconiosis, have shown there is a need for a greater understanding of the respirable fraction
of dust in underground coal mines. This study will examine the development of a comprehensive
standard methodology for characterization of respirable dust via thermogravimetric analysis
(TGA). This method was verified with laboratory-generated respirable dust samples analogous to
those commonly observed in underground coal mines.
Results of this study demonstrate the ability of the novel TGA method to characterize
dust efficiently and effectively. Analysis of the dust includes the determination of mass fractions
of coal and non-coal, as well as mass fractions of coal, carbonate, and non-carbonate minerals for
larger respirable dust samples. Characterization occurs through the removal of dust particulates
from the filter and analysis with TGA, which continuously measures change in mass with
specific temperature regions associated with chemical changes for specific types of dust
particulates. Results obtained from the verification samples reveal that this method can provide
powerful information that may help to increase the current understanding of the health risks
linked with exposure to certain types of dust, specifically those found in underground coal mines.
DEVELOPMENT AND IMPLEMENTATION OF A STANDARD METHODOLOGY
FOR RESPIRABLE COAL MINE DUST CHARACTERIZATION WITH
THERMOGRAVIMETRIC ANALYSIS
Meredith Lynne Scaggs
PUBLIC ABSTRACT
The purpose of this thesis is to examine the potential of a novel method for analysis and
characterization of coal mine dust. Respirable dust has long been an industry concern due to the
association of overexposure with the development of occupational lung disease. Increases in
lung disease over the past decade have shown there is a need for a greater understanding of the
inhalable dust in underground coal mines. This study will examine the development of a standard
method for characterization of inhalable dust found in coal mines. This method was tested with
laboratory-generated dust samples similar to those commonly observed in underground coal
mines.
Results of this study show the ability of the novel method to characterize dust efficiently
and effectively. This method categorizes the dust into fractions of coal and non-coal, as well as
fractions of coal, carbonate, and non-carbonate minerals for larger dust samples. Characterization
occurs through removing particles of dust and subjecting them to thermogravimetric analysis
(TGA). Using TGA, samples are heated in a controlled environment and the change in weight of
the samples is monitored as they burn or break down in specific temperature ranges. Results
obtained from the laboratory-generated samples reveal that this method can provide powerful
information that may help to increase the current understanding of the health risks linked with
exposure to certain types of dust, specifically those found in underground coal mines.
ACKNOWLEDGEMENTS
First and foremost, I would like to express my sincere gratitude to my advisor, Dr. Emily
Sarver, for all the support and guidance she has given over the course of my graduate career. I
truly appreciate all of the time and effort she has used to help me throughout this experience.
I would also like to thank my committee members, Dr. Kray Luxbacher and Dr. Nino
Ripepi for their support and helpful suggestions.
I would like to extend a special thanks to Dr. Cigdem Keles, without her patience and
expertise with TGA and data analysis, this would not have been possible.
Many thanks are extended to all of the miners I was able to meet while gathering
samples. Their hospitality and zeal for their jobs were very inspiring and taught me valuable
lessons about underground coal mining.
I would like to express my thanks to the Alpha Foundation for the Improvement of Mine
Safety and Health for providing the funding for this work.
Finally, I would like to thank my family and friends for supporting me through this
experience. I am particularly appreciative of my parents, Alan and Mary Beth Scaggs, my
brother, Carl Scaggs, my sister, Madeleine Chew, and my fiancé, John Witte, for their
encouragement over these past two years.
Chapter 1. Considerations for TGA of Respirable Coal Mine Dust Samples
Meredith Scaggs, Emily Sarver, Cigdem Keles
Paper peer reviewed and originally published in the proceedings of the 15th North American
Mine Ventilation Symposium, June 20-25, 2015. Blacksburg, Virginia, preprints no. 15-48
Abstract
Respirable dust in underground coal mines has long been associated with occupational
lung diseases, particularly coal workers’ pneumoconiosis (CWP) and silicosis. Regular dust
sampling is required for assessing occupational exposures, and compliance with federal
regulations is determined on the basis of total respirable dust concentration and crystalline silica
content by mass. In light of continued incidence of CWP amongst coal miners, additional
information is needed to determine what role specific dust characteristics might play in health
outcomes. While particle-level analysis is ideal, current time requirements and costs make this
simply unfeasible for large numbers of samples. However, opportunities do exist for gleaning
additional information from bulk analysis (i.e., beyond total mass and silica content) using
relatively quick and inexpensive methods. Thermogravimetric analysis (TGA) may be a
particularly attractive option. It involves precise sample weight measurement in a temperature
controlled environment, such that weight changes over specific temperature ranges can be
correlated to chemical changes of particular sample constituents. In principle, TGA offers the
ability to determine the coal and total mineral mass fractions in respirable dust samples. Such
analysis could conceivably be combined with standard methods currently used to measure total
mass and silica content. Under some circumstances, TGA might also be extended to provide
information on specific dust constituents of interest (such as calcite). In this paper, we consider
the benefits and challenges of TGA of respirable coal mine dust samples, and provide
preliminary results and observations from ongoing research on this topic.
Keywords: CWP, Occupational lung diseases, Thermogravimetric Analysis (TGA), Respirable dust, Silica.
Introduction
Over the past several decades, significant progress has been made toward improving
worker health and safety at coal mining operations in the US (Suarthana et al., 2011; NIOSH, 1974; WHO, 1999). However, respirable dust (i.e., particles less than about 5 µm in aerodynamic
diameter) is still a serious concern because exposures are associated with risks of occupational
lung diseases, namely Coal Workers’ Pneumoconiosis (CWP) and silicosis (USEPA, 2013).
These diseases can severely decrease quality of life by limiting lung function, and in some cases
may lead to progressive massive fibrosis (PMF), and can ultimately be fatal (USEPA, 2013;
Castranova and Vallyathan, 2000).
While federal regulation along with a variety of technological and operational
advancements have resulted in a significant decline of such diseases, incidence remains
unacceptably high – particularly in parts of Central Appalachia (Laney and Attfield, 2010; CDC,
2006; dos Santao et al., 2005). In some areas of this region, there even appears to be an increase
in the incidence of CWP and silicosis (Suarthana et al., 2011; Laney and Attfield, 2010; CDC,
2006; dos Santao et al., 2005). While the reason(s) for this have yet to be definitively determined,
some explanations point to unique mining conditions in this region. Indeed, these mines employ
a smaller workforce operating in thinner seams of coal (WHO, 1999; Laney and Attfield, 2010;
CDC, 2006; Schatzel, 2009). The reduced seam heights lead to mining of rock strata above and
below the coal (i.e. the roof and floor), which may increase total dust exposures as well as
exposures to specific types of particles based on their composition, size or shape. Moreover, the
mining methods and mine sizes may also contribute to unique dust exposures. Continuous miners
are generally employed with auxiliary support (e.g., roof bolting and shuttle car haulage), and
most jobs have the potential for dust generation. Also, due to relatively small crews, many
miners can perform a variety of jobs and thus work in a variety of conditions.
1.1. Current Sampling and Analysis Methods for Respirable Coal Mine Dusts
In May 2014, the Mine Safety and Health Administration (MSHA) released a new rule
regarding respirable coal mine dust exposures (Federal Register, 2014). By August 2016, the
permissible exposure limit (PEL) will be reduced from 2.0 to 1.5 mg/m3 in production areas of
mines; and from 1.0 to 0.5 mg/m3 in entries used for ventilation and for “Part 90” miners (i.e.,
individuals already diagnosed with CWP). Moreover, in mines where respirable dust is
comprised of greater than 5% quartz (by mass), the PEL is decreased to a mine-specific value in order to reduce health risks (Suarthana et al., 2011; Federal Register, 2014; 30 CFR Part 75). If a mine's respirable silica concentration is greater than 0.5 mg/m3, extended cuts with a continuous miner (i.e., production cuts greater than 20 feet prior to roof bolting) are also prohibited (30 CFR Part 75). To demonstrate compliance with the regulatory limits, personal dust monitoring is required for miners working in designated occupations, which are identified by their increased risk of high dust exposure, such as continuous miner or roof bolter operators (Federal Register, 2014; Colinet et al., 2010; Reed et al., 2008). Additionally, operators collect samples in designated areas, including areas near the working face that are known for high dust generation, to assess atmospheric concentrations and potential worker exposures (Federal Register, 2014). The
new dust rule requires that compliance monitoring now be conducted when production is at least
80% of full production levels (i.e., as opposed to the 50% threshold that was required previously)
(Federal Register, 2014).
Presently, dust monitoring involves collecting a full-shift sample with a permissible
pump (i.e., certified intrinsically safe), sampling tube, and Dorr-Oliver cyclone (nylon, cut point
of ~4 µm). Samples are collected onto polyvinyl chloride (PVC) filters of known weight housed
in pre-assembled cassettes (Colinet et al., 2010; Zefon, 2015). The pump is run at a flow rate of
1.7 L/min to mimic the rate of human respiration (Colinet et al., 2010; Zefon, 2015); it is turned
on when the miner enters the mine and left running until the miner returns to the surface. The
sample is then shipped to a certified lab for analysis.
Analysis of respirable dust samples currently includes two results: the total sample
weight, which can be converted to a mass concentration of exposure (mg/m3), and the mass
fraction of crystalline silica in the sample. The sample weight is determined gravimetrically (i.e.,
by difference between the filter weight before and after sample collection) (Colinet et al., 2010;
Zefon, 2015; Bartley and Feldman, 1998), and the silica fraction is determined by infrared
spectroscopy (IR) by either NIOSH Method 7603 or MSHA Method P7 (Schlecht and Key-
Schwartz, 2003; MSHA, 2014). For both methods, the PVC filters are ashed to remove organic
matter (i.e. coal dust and the filter) and unoxidized material is redeposited on a vinyl acrylic
copolymer filter, which can be scanned with IR (Schlecht and Key-Schwartz, 2003; MSHA,
2014).
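To make the gravimetric result concrete, the full-shift exposure concentration is simply the collected dust mass divided by the volume of air sampled. The short Python sketch below illustrates this arithmetic only; the filter weights and shift length are hypothetical, and this is not part of either standard method.

    # Hedged sketch: converting a filter weight gain to a full-shift
    # respirable dust concentration (mg/m3). Example values are hypothetical.
    def exposure_concentration(pre_mg, post_mg, flow_lpm=1.7, shift_min=480.0):
        """Mass concentration (mg/m3) from pre-/post-sampling filter weights."""
        dust_mg = post_mg - pre_mg              # gravimetric sample weight
        air_m3 = flow_lpm * shift_min / 1000.0  # sampled air volume (1000 L = 1 m3)
        return dust_mg / air_m3

    # e.g., a 1.2 mg weight gain over an 8-hour shift at 1.7 L/min:
    print(round(exposure_concentration(10.0, 11.2), 2))  # ~1.47 mg/m3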
As of February 1, 2016, compliance monitoring will also include use of the continuous
personal dust monitor (CPDM) for miners working in high-dust areas (Federal Register, 2014).
The CPDM is a wearable unit that allows quasi real-time monitoring of total respirable dust
exposures by measuring incremental changes in the weight of a filter as it collects dust over time.
The idea is that miners can track their exposures during their work and make timely decisions to
reduce their health risks. The CPDM does not allow for determination of silica content in
respirable dust, and so silica must still be measured on samples collected and analyzed as
described above. In order to provide more timely information regarding silica exposures, NIOSH
is currently researching methods for direct-on-filter analysis that could be used immediately
following sample collection (i.e., end of shift) (Colinet et al., 2010; Reed et al., 2008; Sellaro,
2014; Tuchman, 1992; Tuchman et al., 2008). While an ultimate goal would be real-time
measurement of silica, end-of-shift results would certainly be an improvement over current
methods.
1.2. Needs for Expanded Analysis
The field is indeed advancing toward faster capabilities for quantifying respirable coal
mine dust exposures by total concentration and silica content, the two focal points of current
regulation. But many other exposure aspects may be useful in understanding health risks and
effects, particularly in light of apparent differences in lung disease rates between various coal
mining regions (Suarthana et al., 2011; Castranova and Vallyathan, 2000; CDC, 2006).
Regarding the dust itself, characteristics such as particle shapes, sizes and chemistries may all be
important. For instance, particle size and shape may play a role in how well dust can
penetrate and become embedded in lung tissue (Federal Register, 2014; Mischler, 2014), and a
combination of size and chemistry may influence the relative reactivity of particles within the
respiratory system (Mischler, 2014; NIOSH, 1991). Ideally, many individual particles could be
analyzed to determine distributions of these characteristics. In reality, this is possible by methods
such as scanning electron microscopy with energy-dispersive x-ray (SEM-EDX) – but far from
feasible at large scale due to costs and time requirements (MSHA, 2014). However, there is
potential to gather more data from dust samples than is currently done, without having to
examine individual particles.
An objective of ongoing research by the authors is to develop efficient and relatively
inexpensive methods for expanded analysis of respirable coal mine dust samples. Currently, we
are focused on opportunities for using thermogravimetric analysis (TGA).
Thermogravimetric Analysis
TGA is used to monitor weight change of a sample as it is exposed to changing
temperature in a given atmosphere (Coats and Redfern, 1963). Weight change is generally
plotted as a function of temperature or time on a thermogram (Coats and Redfern, 1963; TA
Instruments, 2006), and this information can be interpreted to understand chemical changes in
the sample as it is heated. In some cases, TGA can be combined with additional analyses (e.g., to
characterize the volatiles or reaction products that are generated as a sample decomposes) (Coats
and Redfern, 1963; Cheng et al., 2010; Mu and Perlmutter, 1981; Hills, 1968; Gabor et al.,
1995). In the context of coal, TGA has long been used to conduct proximate analysis, in which
the goal is to determine the ash content of the coal (i.e., the non-combustible mineral fraction)
(ASTM, 1994; Mayoral et al., 2001; Li et al., 2009). TGA has also been used for rank
classification of coal samples (Mayoral et al., 2001).
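As an illustration of how a thermogram can be interpreted numerically, the Python sketch below computes the fractional weight loss over a chosen temperature window and the final residue fraction. The temperature breakpoints and data are hypothetical, intended only to mirror the proximate-analysis logic described above.

    # Hedged sketch: interpreting a thermogram (temperature vs. weight).
    import numpy as np

    def weight_loss_fraction(temp_c, weight_ug, t_lo, t_hi):
        """Fraction of initial sample weight lost between two temperatures."""
        temp_c, weight_ug = np.asarray(temp_c), np.asarray(weight_ug)
        w_lo = np.interp(t_lo, temp_c, weight_ug)  # weight entering the window
        w_hi = np.interp(t_hi, temp_c, weight_ug)  # weight leaving the window
        return (w_lo - w_hi) / weight_ug[0]

    # Hypothetical 300 ug sample with losses in two temperature regions
    temp = [25, 200, 425, 600, 950]
    weight = [300.0, 295.0, 290.0, 120.0, 80.0]
    print(weight_loss_fraction(temp, weight, 425, 950))  # loss in coal-oxidation range (0.70)
    print(weight[-1] / weight[0])                        # residue (mineral) fraction (~0.27)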
The TGA instrument is comprised of two key components: the furnace and the balance.
With tight control over the furnace chamber conditions (i.e., temperature and atmosphere) and a
highly sensitive balance, experiments can be conducted with very good precision – for instance,
allowing measurements of weight changes on the order of just a few μg. This ability has allowed
proximate coal analysis to be done on very small sample sizes (ASTM, 1994; Mayoral et al.,
2001). It also potentially provides an option for analysis of respirable dust samples from coal
mines, which are typically on the order of tens to hundreds of μg.
2.1. Considering TGA for Respirable Dust Samples
At present, we are investigating the efficacy of TGA to estimate the mass fractions of
coal (i.e., organic) and mineral (i.e., inorganic) content in respirable dust samples. For a very
basic estimate, TGA of dust samples can be treated as analogous to proximate analysis of bulk
coal samples: The coal content is oxidizable, and so is assumed to totally degrade (i.e., lose all of
its mass) during the TGA process; whereas the mineral content does not appreciably degrade or
react, and so the remaining residue at the end of the TGA experiment is taken as the total mineral
mass. Figure 1.1 illustrates hypothetical thermograms for this general example.
Figure 1.1. Hypothetical thermograms for (a) direct-on-filter and (b) dust only TGA of a
respirable coal mine dust sample. For the direct-on-filter conceptualization, the filter media is
assumed to decompose completely prior to coal oxidation.
In reality, the inorganic matter in a dust sample from a coal mine may include a number
of different minerals from different sources. Minerals such as silica, silicates, or carbonates may
be associated with shales or sandstones that make up roof or floor rock; and minerals such as
pyrite or chloride salts may be ingrained in the coal seam. Of these, only carbonates are expected
to react significantly within the same temperature range as coal. Carbonates can thermally
decompose to mineral oxides and carbon dioxide, with the conversion of calcite (CaCO3) to calcium oxide and carbon dioxide (CaO + CO2) being a common example (Sellaro, 2014; Cheng et al., 2010; Mu and Perlmutter, 1981; Hills, 1968; Gabor et al., 1995). Thus, a more accurate
estimate of coal and mineral fractions within a dust sample by TGA might necessitate separation
of coal oxidation from carbonate decomposition.
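The stoichiometry of this reaction offers a way to back-calculate carbonate mass from an observed weight loss: CaCO3 (100.09 g/mol) releases CO2 (44.01 g/mol), so roughly 44% of the calcite mass leaves as gas. The Python sketch below shows this arithmetic under the assumption that all weight loss in the carbonate-decomposition range is due to calcite.

    # Hedged sketch: implied calcite mass from a measured CO2 weight loss,
    # assuming CaCO3 -> CaO + CO2 is the only reaction in that range.
    M_CACO3, M_CO2 = 100.09, 44.01  # molar masses (g/mol)

    def calcite_mass_from_co2_loss(co2_loss_ug):
        """Calcite mass (ug) implied by a measured CO2 weight loss (ug)."""
        return co2_loss_ug * M_CACO3 / M_CO2

    # e.g., a 10 ug loss in the carbonate range implies ~22.7 ug of calcite
    print(round(calcite_mass_from_co2_loss(10.0), 1))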
The issue of carbonate content in coal mine dust samples is further complicated by “rock
dusting” activities. Rock dust is primarily composed of calcite and/or dolomite (CaMg(CO ) ),
3 2
and dusting is required in certain areas of mines to prevent propagation of coal dust explosions
(30 CFR Part 75). In areas with heavy rock dust applications, or when the rock dust product has a
high proportion of very fine particles, rock dust might contribute significantly to the total
respirable dust concentration (30 CFR Part 75). TGA of samples from such areas should
therefore consider calcite and/or dolomite, specifically; otherwise a simple proximate analysis
approach as described above may overestimate the coal dust fraction.
The potential for using TGA to specifically estimate rock dust mass in a sample may also
be of interest because it could allow operators to understand the influence of their rock dusting
programs on respirable dust concentrations in the mine environment. As dust exposure limits are
reduced with new regulation, understanding which activities are contributing dust is critical for
compliance efforts. While the main components of rock dust are not generally considered to
adversely affect lung health, regulatory dust limits are currently aimed at total dust concentration
(and silica mass content) – and so even innocuous dust particles are concerning.
Development of a TGA Method for Respirable Coal Mine Dust Samples
In principle, TGA of coal mine dust samples could be done as an intermediate step
between current standard methods for assessing the total weight of a sample and its mass fraction
of silica (i.e., NIOSH 7603 or MSHA P7) (Schlecht and Key-Schwartz, 2003; MSHA, 2014). As
illustrated in Figure 1.1, TGA might be done directly on the filter used to collect the dust sample,
or on dust that has been removed from a filter. In either case, due to very small sample masses, a
very sensitive TGA instrument is required.
For development of TGA method for respirable coal mine dust samples, we are using a
Q500 Thermogravimetric Analyzer (TA Instruments, New Castle, DE). The Q500 employs a
microbalance with 0.1 μg resolution, and its vertical furnace design reduces thermal influence
on the balance (Cheng et al., 2010; Colinet and Listak, 2012). The instrument is highly
programmable, such that users can create precise methods that may be run without interference.
Our instrument is also equipped with an autosampler, which provides the ability to run up to 16
separate samples in sequence. Platinum sample pans are used due to their inertness across a wide
temperature range and because they are easy to clean.
To date, our method development work has focused on both direct-on-filter and dust-only
TGA of respirable coal mine dust samples.
3.1. Direct-on-filter TGA
For a direct-on-filter method, the idea is simply to “ash” the entire sample filter in the
TGA instrument. As such, an understanding of the filter media behavior as it is heated, and any
potential interactions between it and the sample matrix, is needed. Ideally, the filter media:
decomposes in a separate temperature range from the sample matrix; is highly uniform with
respect to its ash content; and can be folded to fit in the TGA pans without significant mass loss.
Considering the relative weight of filters (i.e., tens of mg) versus a typical dust sample (i.e., tens
to hundreds of μg), decomposition of the filter at a different temperature than the coal (and other
sample components such as calcite) is particularly important. Moreover, compatibility of the
filter media with current dust sampling and analysis protocols should be considered.
Thus far, two filter media types have been evaluated: PVC and MCE (mixed cellulose
ester). Both filter types are available in the 37 mm size commonly used for dust sample
collection in underground coal mines, and both can be used for respirable dust sampling,
specifically (Danley and Schaefer, 2008; Zefon, 2012 and 2015). Table 1.1 provides a
comparison of key characteristics, with favorable characteristics denoted by a star.
Table 1.1. Comparison of PVC and MCE filter media characteristics

    PVC                          MCE
    Non-hygroscopic*             Hygroscopic
    Static charging possible     Low static charging*
    Some ash content             Virtually ashless*
    Pliable*                     Tears easily
PVC is currently used for respirable dust sampling in coal mines, and so is favorable
from the perspective of utilizing TGA as an intermediate step between current gravimetric (i.e.,
total dust sample weight) and silica content analyses. However, PVC filters generally have ash
content, which could complicate determination of mineral content in the dust sample matrix; and
they also are subject to static charging issues (Zefon, 2015). MCE, on the other hand, is
considered ashless and not susceptible to static buildup (Zefon, 2012). But the material is
relatively hygroscopic, meaning it can easily absorb moisture, and this is problematic from the
standpoint of current gravimetric analysis (i.e., accurately determining the dust sample weight is
difficult) (Zefon, 2012).
3.1.1. Summary of Experiments and Results
To test the suitability of PVC and MCE filters (37 mm, 5μm pore size) for direct-on-filter
TGA of respirable coal mine dust samples, preliminary experiments were conducted (see Keles
et al., 2015, for more details). Blank filters of each type (n=20) were ashed under a variety of
conditions to observe their behavior; and several samples of pulverized raw coal (with varying
mineral content) have also been ashed to simulate a dust sample that might be collected
underground. Figure 1.2 shows typical thermograms for PVC, MCE and coal dust TGA
experiments conducted in air (i.e., oxidizing environment). The main observations from these
experiments were:
• Coal oxidation occurs above about 425°C; at lower temperatures, some moisture
and volatiles are also lost.
• PVC filters weigh between about 15-18 mg. They decompose in two primary
stages (i.e., around 285°C, and then above about 450°C); the latter stage overlaps
significantly with coal oxidation. The weight change ratio between these two
stages of decomposition is not reproducible enough to predict the weight change
in the coal oxidation region with sufficient accuracy. The ash content of the PVC filters tested is
highly reproducible and accounts for about 0.13 ± 0.02% of total filter weight.
Static charging was not observed to be a significant issue.
• MCE filters weigh between about 35-37 mg. They decompose primarily below
425°C (i.e., losing about 98.5% of their weight), and the weight change ratio
between decomposition before 425°C and after is highly reproducible. MCE ash
content is also highly reproducible, and accounts for about 0.03 ± 0.01% of the
total filter weight. Filter pliability can be increased by misting the filters with high-purity water during folding.
Figure 1.2. Example thermograms for blank PVC and MCE filters (primary y-axis) and a raw
coal sample (secondary y-axis). The PVC filter has two regions of weight loss, which span
relatively wide temperature ranges, whereas the MCE filter loses most of its weight in one very
narrow region. Coal oxidation is significant at temperatures above about 425°C.
For the MCE filters, pre- and post-weighing the filter may not provide an accurate sample
weight due to moisture uptake, so the idea was to interpret the TGA results to determine the dry
sample weight (i.e., by using the known filter decomposition rate and ash content as previously determined).
For the dust on PVC filters, the coal and mineral fractions could be determined with good
accuracy (i.e., as compared to the known ash content of the coal sample used to generate the
dust). The coal and mineral fractions were determined using a simple proximate analysis
approach: the dust sample weight was found as the difference between pre- and post-collection
filter weight; the dust mineral weight was found as the difference between the final residue
weight and the known ash content of the filter; and the dust coal weight was found as the
difference between the dust sample weight and the dust mineral weight. Such analysis could
certainly be conducted between current gravimetric and silica analyses for respirable dust
samples; indeed, a sensitive TGA is not even needed for this, only the furnace and appropriate
microbalance that are already used. However, this approach does not allow for determination of
specific mineral components (e.g., calcite) of a dust sample.
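For clarity, the bookkeeping just described can be expressed in a few lines. The Python sketch below uses hypothetical weights; the filter ash fraction reflects the ~0.13% PVC value reported above.

    # Hedged sketch of the simple proximate-analysis bookkeeping for
    # direct-on-filter TGA with PVC; all weights are hypothetical (ug).
    def coal_and_mineral_ug(pre_filter_ug, post_filter_ug, residue_ug,
                            filter_ash_frac=0.0013):
        dust_ug = post_filter_ug - pre_filter_ug         # gravimetric dust weight
        filter_ash_ug = pre_filter_ug * filter_ash_frac  # ash contributed by filter
        mineral_ug = residue_ug - filter_ash_ug          # dust mineral weight
        coal_ug = dust_ug - mineral_ug                   # dust coal weight
        return coal_ug, mineral_ug

    # e.g., a 16 mg filter that collected 250 ug of dust, 100 ug final residue:
    coal, mineral = coal_and_mineral_ug(16000.0, 16250.0, 100.0)
    print(coal, mineral)  # ~170.8 ug coal, ~79.2 ug mineral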
Despite promising results from experiments on raw coal material and MCE separately,
results from TGA of dust on these filters showed that direct-on-filter analysis is likely not
possible. Figure 1.3 illustrates the reason for this. When the MCE begins to decompose just
below 200°C, it appears that the coal particles immediately oxidize as well. As can be seen in the
figure, the weight loss around this temperature associated with the dust-laden filter accounts for
more than the filter weight; it also accounts for loss of most of the dust itself. This result was not
initially expected, since in coal material-only experiments the primary weight loss did not occur
until temperatures above 425°C. However, the result makes sense when considering that,
although the furnace chamber temperature may only be around 200°C when the MCE filter
decomposes, the local temperature where this reaction is happening should be much greater, and
thus triggered spontaneous combustion of the coal particles. Considering the very fine size of the
particles, and hence their large surface area, this result is not so surprising in retrospect. This
explanation is supported by the small spike in furnace chamber temperature that can be seen
in Figure 1.3.
Figure 1.3. Thermograms (weight on the primary y-axis vs. time) for a blank MCE filter and an
MCE filter with dust generated from a raw coal sample; temperature is shown on the secondary
y-axis. The difference in initial weights is about 80μg, with the dust-laden filter being heavier
than the empty filter; the difference in filter weights after the significant decomposition just
above 200°C is about 30μg, with the dust-laden filter now being lighter than the empty filter.
This result indicates that the coal dust spontaneously combusted when the MCE filter
decomposed. The furnace chamber temperature spike indicates that the MCE decomposition did
indeed create significant heat.
In summary, direct-on-filter TGA using PVC filters is possible, but will not likely yield
results that provide insights beyond a basic ratio of oxidizable to nonoxidizable content in a dust
sample. Direct-on-filter TGA using MCE does not appear favorable at all, since the hygroscopic
nature of the filters makes dust sample weight difficult to determine directly, and sample
decomposition cannot be distinguished from filter decomposition during the TGA procedure.
Moreover, if determination of rock dust content in respirable coal mine dust samples is
important, the sample will likely need to be removed from filters prior to TGA. This is because,
similar to the effect that MCE filter decomposition has on spontaneous coal oxidation at
relatively low furnace temperatures, calcite and dolomite may thermally degrade earlier than
expected when in contact with the MCE material. Alternatively, a filter that is inert across the temperature range required to completely oxidize coal particles (e.g., glass fiber) might provide
an option for direct-on-filter TGA with the opportunity to estimate rock dust content. However,
this option could not be easily integrated between the current standard methods for gravimetric
and silica content analyses.
3.2. Dust-only TGA
To increase resolution of TGA results and allow for evaluation of specific components of a
respirable coal mine dust sample, particles may be removed from the filter on which they were
collected. In principle, dust removal can be done on any filter – including perhaps the small glass
fiber filters that are used in CPDMs. A procedure similar to that described in the sample
preparation sections of the NIOSH 7603 or MSHA P7 can be used; in these methods, it is
necessary to remove the silica-containing residue from a secondary filter following ashing of the
PVC sample collection filter. In short, the filter is submersed in a tube of isopropanol, which is
then briefly placed in an ultrasonic bath (or sonicator). The ultrasonic energy shakes the dust
particles from the filter; the filter can then be removed from the tube, and the isopropanol evaporated. The residue remaining in the tube contains the dust particles. A fundamental assumption
for a dust-only TGA method will of course be that the dust removed from the filter is
representative of the entire sample on the filter.
3.2.1. Preliminary Observations Regarding Feasibility of Dust Removal
Preliminary experiments are underway to investigate the feasibility of removing
respirable dust from PVC and MCE filters (37 mm, 5μm pore size) that are compatible with
approved dust sampling pumps for underground coal mines, and also the glass fiber filters that
are specifically manufactured for use with the CPDM. Based on the interference between filter
decomposition and coal dust oxidation observed during direct-on-filter TGA experiments, one
major goal of the current work is to determine how to maximize dust particle removal while
minimizing filter degradation that results in filter media particles being present in the removed
dust sample.
To date, several important observations have been made regarding MCE and PVC filters:
• Isopropanol is not an appropriate medium for conducting the ultrasonic dust
removal. In both cases, the filter media react with the isopropanol. Testing is
ongoing with deionized water, which appears promising.
• For blank filters, sonication times of 0.5-3.0 minutes appear to have similar
effects on filter degradation, meaning that similar amounts of filter residue result
from these times. The residue is on the order of tens of μg, which should
dramatically reduce the tendency for filter decomposition to spur dust
decomposition during TGA of removed dust samples. Sonication for longer
periods of time results in the filters breaking down significantly, and thus a
significant mass of filter residue may end up in dust samples removed from the
filters.
• TGA of residue from sonication of blank filters shows similar results to TGA of
the blank filters themselves. This indicates that filter particles present in removed
dust samples should behave similarly.
• Significant dust can be removed from filters. At present, it appears dust removal
from the CPDM filters is more efficient than from PVC and MCE filters. This is
likely due to the smaller surface area of the CPDM filters (i.e., 14 mm in
diameter), which allows a thicker layer of dust to accumulate vs. the 37 mm
filters.
While TGA experiments on dust removed from filters have not yet been completed, the
above observations provide some promise that a method can be developed.