Virginia Tech
following page shows a summary of these incidents. Three of these documented seal explosions resulted in fatalities: the Blacksville No. 1 mine, the Sago mine, and the Darby mine. It should be noted that the Blacksville No. 1 explosion, which occurred in 1992, took place during the closure of the mine site and the capping of the production shaft. Because the opening of the production shaft had been reduced to approximately 22 inches in diameter, the explosive pressure present at the time of the explosion was greatly increased. The production shaft had initially been partially capped, and the incident occurred during the installation of dewatering casings. The partial capping allowed methane to build up and reduced the amount of fresh air ventilating the shaft area below the cap. Sparks produced by welding on top of the cap ignited an explosion directly beneath the cap and the working personnel, resulting in the deaths of four miners (Rutherford, Painter, Urosek, Stephan, & Dupree Jr, 1993). In the case of the 2006 Sago mine explosion, ten seals (constructed 22 days prior to the incident) were destroyed in an explosion involving approximately 400,000 cubic feet of methane gas. While the cause of the explosion was determined to be lightning strikes in the area, the seals within the mine were designed to withstand explosive forces of only 20 psi, far below the actual explosive force produced. The newly constructed seals had allowed methane to build to explosive levels behind them, and the subsequent explosion resulted in the deaths of 12 miners (Gates, et al., 2006). Less than five months later, another five miners lost their lives in a similar explosion at the Darby mine. The three seals that failed in the Darby explosion had been constructed approximately two months prior to the explosion and were again built to withstand 20 psi explosive pressures. Prior to the explosion, metal roof straps were being cut in the vicinity of the three seals. These straps had originally been used to provide roof support during seal construction and had yet to be removed from the area. An acetylene cylinder and cutting torch were being used to cut the metal straps, but the investigation found that mine personnel were not continuously monitoring methane levels in the area. The torch was determined to be the ignition source of the explosion, although the explosion occurred behind one of the mine seals (Light, et al., 2007).
Table 2-1. Explosion history in U.S. underground coal mines related to mine seals (starting in 1986)

Mine | Location | Date Discovered | General Size of Sealed Area | Seal Type | Damage from Explosion | Cause of Explosive Mix | Ignition Source | Estimated Explosion Pressure | Source
---|---|---|---|---|---|---|---|---|---
Roadfork No. 1 | Pike County, KY | Oct. 7, 1986 | Several room-and-pillar panels | 16 inches thick (masonry blocks) | 4 destroyed and 4 damaged seals | Recently sealed area | Spark from roof fall | Unknown | (South, 1986)
Blacksville No. 1 | Monongalia County, WV | Mar. 19, 1992 | Production shaft area | Shaft cap (steel) | Shaft cap destroyed | Recently sealed area | Welding activities | 6900 kPa (1000 psi) | (Rutherford, Painter, Urosek, Stephan, & Dupree Jr, 1993)
Oak Grove | Jefferson County, AL | 1994 | Several square miles | Unknown | 3 destroyed seals | Leakage | Unknown | Unknown | (Zipf, Sapko, & Brune, 2007)
Mary Lee No. 1 | Walker County, AL | April, 1994 | Several square miles | Unknown | 1 destroyed and 2 damaged seals | Leakage | Lightning | 34 kPa (5 psi) | (Checca & Zuchelli, 1995)
Gary No. 50 | Wyoming County, WV | Jun. 16, 1995 | Several square miles | 4 feet thick (Tekseal) | 1 damaged seal | Leakage or roof fall | Lightning | 35-48 kPa (5-7 psi) | (Sumpter, et al., 1995)
Oasis | Boone County, WV | May 15, 1996 | Several square miles | 2.3 feet thick (Micon 550) | 3 destroyed and 1 damaged seal | Leakage or roof fall | Lightning | Less than 138 kPa (20 psi) | (Ross Jr & Shultz, 1996)
Oasis | Boone County, WV | Jun. 22, 1996 | Several square miles | 2.3 feet thick (Micon 550) | Unknown | Leakage or roof fall | Lightning | Unknown | (Ross Jr & Shultz, 1996)
Oak Grove | Jefferson County, AL | Jul. 9, 1997 | Several square miles | 6 feet thick (Tekseal) | 5 destroyed seals | Leakage | Lightning | Exceeded 138 kPa (20 psi) | (Scott & Stephan, 1997)
Big Ridge | Saline County, IL | Feb. 1, 2002 | Several square miles | 4 feet thick (Fosroc) | 1 destroyed seal | Recently sealed area | Unknown | Unknown | (Kattenbraker, 2002)
Sago | Upshur County, WV | Jan. 2, 2006 | 1 room-and-pillar panel | 40 inches thick (Omega blocks) | 10 destroyed seals | Recently sealed area | Lightning | Exceeded 642 kPa (93 psi) | (Gates, et al., 2006)
Darby | Harlan County, KY | May 20, 2006 | 1 room-and-pillar panel | 16 inches thick (Omega blocks) | 3 destroyed seals | Recently sealed area | Oxygen/acetylene torch | Exceeded 152 kPa (22 psi) | (Light, et al., 2007)
Pleasant Hill¹ | Randolph County, WV | Jul. 1, 2012 | Unknown | Unknown | Water traps blown out from seals | Recently sealed area | Unknown | Unknown | (Mine Safety and Health Administration, 2012)

¹ Ongoing investigation; full report unavailable.
2.1.4 Early History of Seal Standards
The earliest seal regulation in the United States came with the approval of an amendment to the Mineral Leasing Act of 1920 on April 30, 1921. This amendment (Sec. 104. (a)) required that "all connections with adjacent mines, if not used for haulage, escapeways, exits, or airways, shall be sealed with stoppings which shall be fireproof and built to withstand a pressure of 50 pounds per square inch (345 kPa) on either side…". At the time, the law's chief concern was to prevent an explosion in one mine from propagating into a neighboring mine. The 50 psi standard written into the law was determined by the "general opinion of men experienced in mine-explosion investigations." In 1931, George Rice, along with the Bureau of Mines and the Bureau of Standards, examined typical concrete seals used in underground coal mines. These typical seals were 2 feet thick and constructed of reinforced concrete anchored into the roof and ribs of the mine. The "typical seals" were tested over a wide range of heights and widths while keeping the thickness-to-width ratio similar. The tests also evaluated the use of coal buttresses for the seals (Rice, Greenwald, Howarth, & Avins, 1931).
For nearly 50 years, 50 psi seals and Rice's work were accepted practice in the mining industry. In 1969, the Federal Coal Mine Health and Safety Act was approved, requiring that abandoned areas of a coal mine be either ventilated or sealed with explosion-proof bulkheads. However, as of 1969, no one had adequately defined "explosion-proof" or determined what type of forces would be exerted on a bulkhead during an explosion. In 1971, D.W. Mitchell, of the Pittsburgh Mining and Safety Research Center (Bureau of Mines), examined the forces that could be expected from explosions behind mine seals, aimed at developing a design standard for this explosive force, and examined the effect of seal leakage. Mitchell concluded, based on test explosion results from the Bruceton Experimental Mine in Pittsburgh and from international testing, that explosive pressures seldom exceed 20 psi (Mitchell, 1971). However, this conclusion was based on the assumption that the explosion was limited to the amount of explosive atmosphere on the active side of the seal. Mitchell's assumption did not consider the containment of an explosion within the sealed area. In addition to recommending 20 psi seals, Mitchell also looked into the leakage of methane through seal material into the active mine and the potential hazards that could result. Again, Mitchell did not consider the effect of air leaking into the sealed area to form an explosive mix behind the seal (Zipf, Sapko, & Brune, 2007).
Testing on different types of seals and seal materials continued after 1971, but it wasn't until 1992 that the Code of Federal Regulations contained a definitive design specification for explosion-proof seals. In 1991, the U.S. Bureau of Mines reviewed the design and testing of seals made from concrete blocks and a cementitious foam to meet the 20 psi standard, and N.B. Greninger and a team from the Bureau of Mines formally approved designs for concrete block seals and cementitious foam seals (Greninger, Weiss, Luzik, & Stephan, 1991). Later, in 1997, C.R. Stephan reported on additional types of seals (Omega 384 blocks, wooden crib blocks, and Micon 550) that also passed the 20 psi strength requirements (Stephan & Schultz, 1997).
2.1.5 MINER Act and New Seal Standards
The 20 psi seal strength requirement remained in place until 2006, when the Sago and Darby mines experienced a combined total of 17 fatalities. The cause of both disasters was determined to be the buildup of an explosive atmosphere behind recently built seals combined with an ignition source (lightning and an oxygen/acetylene torch, respectively). When the explosions occurred, the 20 psi seals failed, allowing the explosions to propagate into the mines. In both cases, the failed seals were built to approved 20 psi standards, while the explosive forces behind the sealed areas were estimated at 93 psi at the Sago mine and 22 psi at the Darby mine. Following these two incidents, MSHA acknowledged that explosive magnitudes greater than 20 psi can develop in sealed areas due to methane or coal dust explosions (Gates, et al., 2006) (Light, et al., 2007). Two months after the Darby explosion, MSHA posted Program Information Bulletin (PIB) No. P06-16. This bulletin formally increased the minimum seal strength requirement to 50 psi. The same bulletin also required new alternative seals to be designed and certified by a professional engineer. On May 22, 2007, MSHA published Emergency Temporary Standards (ETS) concerning the sealing of abandoned mine areas. These standards were based on NIOSH recommendations, mine explosion investigations, in-mine seal evaluations, and other reports, and established a three-tiered approach for minimum seal strength based on explosive overpressure: 50 psi, 120 psi, and greater than 120 psi (Kallu, 2009). On April 18, 2008, MSHA published its final rule on sealing abandoned mine areas, which can be found in Title 30 of the Code of Federal Regulations, Part 75, Section 335(a) (30 CFR §75.335(a)).
The three-tiered approach to seal strength found in 30 CFR §75.335(a) also distinguishes between general sealed areas and longwall crosscut seals. In sealed areas that are monitored and maintained inert, general seals must withstand a minimum overpressure of 50 psi maintained for four seconds and then instantaneously released; for longwall crosscut seals, this overpressure must be maintained for 0.1 seconds. Most commonly, however, the sealed area is not monitored and does not remain inert. In these cases, the seals must be built to withstand a minimum overpressure of 120 psi for four seconds for general seals and 0.1 seconds for crosscut seals. There are three additional circumstances in which seals must be designed to withstand overpressures greater than 120 psi: the sealed area is likely to contain a homogeneous mixture of methane between 4.5 and 17.0% and oxygen exceeding 17.0%; pressure piling could result in overpressures greater than 120 psi; or other conditions are encountered, such as the likelihood of a detonation in the area to be sealed (Mine Safety and Health Administration, 2011).
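The tiered requirements above reduce to a short decision rule. The sketch below is a hypothetical illustration of that logic for general seals only; the function and parameter names are invented for clarity, and the sketch is not a substitute for the regulatory text.

```python
# Illustrative summary of the three-tiered minimum overpressure logic in
# 30 CFR 75.335(a) for general seals. Names are hypothetical.

def required_overpressure(monitored_and_inert: bool,
                          explosive_mix_likely: bool = False,
                          pressure_piling_possible: bool = False,
                          detonation_possible: bool = False) -> str:
    """Return the minimum design overpressure tier for a general seal."""
    # Third tier: any condition that could push pressures past 120 psi,
    # e.g. a homogeneous 4.5-17.0% methane mix with oxygen above 17.0%.
    if explosive_mix_likely or pressure_piling_possible or detonation_possible:
        return "greater than 120 psi"
    # First tier: the sealed area is monitored and maintained inert.
    if monitored_and_inert:
        return "50 psi"
    # Second tier: the common unmonitored, non-inert case.
    return "120 psi"

print(required_overpressure(monitored_and_inert=True))   # 50 psi
print(required_overpressure(monitored_and_inert=False))  # 120 psi
```

The crosscut-seal distinction affects only the hold duration (0.1 seconds rather than four), so it is omitted from this sketch.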
2.1.6 Current Approved Seals
Currently, there are 20 MSHA-approved mine seals that have been submitted and accepted for both 50 psi and 120 psi pressures. The MSHA approval process requires manufacturers of seal materials to provide not only the physical properties of the material, but also the construction specifications, quality control procedures, and full testing designs and results for the submitted seals (30 CFR §75.335(b)). A list of the currently approved mine seals in the U.S. can be seen below in Table 2-2.
Table 2-2. Approved 50 psi and 120 psi seals by the Mine Safety and Health Administration

Manufacturer | Seal Type | Maximum Entry Dimensions (height by width)
---|---|---
Overpressure of 50 psi | |
Strata | Plug Seal | 16' by 40'
Minova | Main Line Tekseal® | 30' by 30'
MICON | Gob Seal | 20' by 28'
MICON | Main Line Seal | 20' by 28'
JennChem | Gob Isolation J-Seal | 30' by 30'
Overpressure of 120 psi | |
Strata | Plug Seal | 16' by 100'
Orica | Main Line Tekseal® | 30' by 30'
BHP Billiton | Main Line Plug Seal | 20' by 26'
Precision Mine Repair | 8x40 Concrete Seal | 8' by 40'
Minova | Gob Isolation Tekseal® | 30' by 30'
MICON | Mainline Hybrid Seal | 20' by 28'
Precision Mine Repair | Concrete Seal | 6' by 40'
Precision Mine Repair | Concrete Seal | 10' by 40'
Precision Mine Repair | Concrete Seal | 12' by 40'
Minova | Main Line Tekseal® | 30' by 40'
MICON | Mainline Hybrid II Seal | 20' by 28'
MICON | Gob Isolation Hybrid II Seal | 20' by 28'
MICON | Mainline Hybrid III Seal | 20' by 28'
Strata | StrataCrete Seal | 12' by 40'
JennChem | Mainline J-Seal | 30' by 30'
Of the approved mine seals, 70% involve some form of pumpable cement or shotcrete to support the structural integrity of the seal. Pumping of both high-density cement and aerated cellular cement can produce integrity issues after the original mixing, due to the velocity of the pump and shearing effects. These issues can appear in the form of voids, microstructural fractures, and density changes (Narayanan & Ramamurthy, 2000) (Ramamurthy, Nambiar, & Ranjani, 2009) (Rio, Rodriguez, Nabulsi, & Alvarez, 2011). Factors such as temperature and pumping distance can also affect the predictability of the flow of cement (Rio, Rodriguez, Nabulsi, & Alvarez, 2011). Among the factors that affect the rheology, or flow, of "soft solids" are the mixer type, the mixing sequence, the mixing duration, temperature, distance pumped, and composition of the mix (Ferraris, de Larrard, & Martys, 2001). Together, these factors make the variability and potential for structural issues in seals made with pumpable cement fairly high.
2.2 Non-Destructive Testing Methods
2.2.1 NDT assessment of concrete structures
Non-destructive testing (NDT) is a term generally applied to the evaluation of a structure or material without intrusive measures. While visual inspections have long been commonplace in evaluating the condition of concrete structures, NDT techniques have become the preferred method for evaluating the condition of the material beneath the surface of a structure. One of the unique qualities of the NDT field is that many of the techniques used in the evaluation of concrete structures originate from other disciplines: health physics, medicine, geophysics, laser technology, nuclear power, and process control (Mix, 1987). One of the first uses of an NDT method to examine the integrity of concrete was the invention of the Schmidt hammer by Swiss engineer Ernst Schmidt. The Schmidt hammer is used to evaluate the surface hardness of cement structures but struggles to evaluate the cement type or content (Bungey & Millard, 1996), two factors important to the integrity of the structure. Other factors that limit the Schmidt hammer's ability to evaluate the strength of concrete are surface smoothness, carbonation, and moisture condition (Cantor, 1984). While the Schmidt hammer is far from a robust NDT technique for evaluating cement and concrete structures, it was one of the first patented NDT techniques for concrete (United States of America Patent No. US 2664743 A, 1951).
2.2.2 NDT methods
From the mid-1940s to today, there have been many advancements in the field, and new NDT methods have become commonly used in the evaluation of concrete structures, along with other civil structures such as pipes, coatings, and welds (Cantor, 1984). Halmshaw separates NDT methods into five major groups: radiology, ultrasonics, magnetic, electrical, and penetrant. Within each of these groups there are many different testing methods that can be used for a wide variety of structures (Halmshaw, 1987).
2.2.2.1 Radiology
In terms of testing the integrity and condition of concrete structures, radiology has developed into three different methods: X-ray radiography, gamma ray radiography, and gamma ray radiometry (Bungey & Millard, 1996). X-ray radiography, an NDT method most commonly associated with the medical field (Mix, 1987), has been used in laboratory tests primarily to examine the internal structure and condition of concrete, but has rarely been used in field tests due to the high risk of backscatter radiation from X-rays reflected off the surface. Gamma ray radiography is similar to X-ray radiography in that an internal picture of the structure is created by the straight-line passage of rays through the structure and onto a photographic layer. Any void space or high-density particle within the material will be visible on the photographic layer, or radiograph (Halmshaw, 1987). Gamma ray radiometry measures the backscatter of gamma radiation as it passes from one side of the structure to the other. As the gamma rays pass through the concrete, some are absorbed, some pass through completely, and others are scattered. The backscatter is the measure of the amount of radiation scattered by the structure and can be used to measure the thickness and density of concrete structures (Bungey & Millard, 1996).
2.2.2.2 Ultrasonic
Ultrasonic waves are commonly used to evaluate the uniformity of structures and to estimate strength (Malhotra, 1984). Ultrasonic waves (greater than 20 kHz) are electronically generated and applied to the sample. The time of travel and reflective behavior of the waves as they travel through the structure are measured using a circuit consisting of a pulser/receiver connected through cables to a transmitting transducer placed on the surface of the object in question. A receiving transducer is then placed on the same surface, connected back to the pulser/receiver through another series of cables, and the signal is recorded by a data system (Schmerr Jr. & Song, 2007). The measured velocities of these waves depend primarily on the elastic properties of the material; in concrete they typically range between 3.5 and 4.8 km/s (Bungey & Millard, 1996). Areas within the material that contain fractures and discontinuities often reflect some of the ultrasonic energy back to the receiver, resulting in a quicker travel time than waves reflected from the opposing side of the sample. Small voids and reinforcement material with elastic properties different from those of the concrete structure can also be detected using a pulsed ultrasonic NDT method (Halmshaw, 1987) (Schickert & Krause, 2010).
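As a minimal illustration of the pulse-velocity measurement described above, the sketch below converts a known path length and measured transit time into a velocity and checks it against the typical 3.5-4.8 km/s range for sound concrete. The specimen dimensions and transit time are hypothetical values chosen for the example.

```python
# Hypothetical ultrasonic pulse-velocity check: velocity is simply the
# known path length divided by the measured transit time.

def pulse_velocity_km_s(path_length_m: float, transit_time_us: float) -> float:
    """Convert a path length (m) and transit time (microseconds) to km/s."""
    return path_length_m / (transit_time_us * 1e-6) / 1000.0

velocity = pulse_velocity_km_s(0.30, 68.0)  # 0.30 m specimen, 68 us transit
print(f"{velocity:.2f} km/s")               # 4.41 km/s
print(3.5 <= velocity <= 4.8)               # True: within the typical range
```

A markedly lower velocity over the same path would suggest cracking, voids, or low-quality concrete along the ray path.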
2.2.2.3 Magnetic
Magnetic NDT methods are primarily focused on evaluating materials that possess large amounts of iron, nickel, or cobalt (ferromagnetic materials), which are strongly attracted to one another when magnetized. When a specimen containing a large amount of ferromagnetic material is magnetized, both surface and subsurface flaws can be observed through distortion of the magnetic flux field. These fields can be detected by magnetic tape and field-sensitive detector probes (Halmshaw, 1987). Eddy current and flux leakage are the two main magnetic NDT methods. Eddy current testing uses alternating magnetic fields to induce eddy currents in the specimen; any flaw that affects the conductivity of the structure can then be detected. Flux leakage testing uses either permanent magnets or DC electromagnetic fields to create flux fields; discontinuities or cracks in the structure cause leakage of the flux, which can be detected. Both dry and wet magnetic particles are also used to detect structural issues and flaws. By applying these particles to ferromagnetic structures, one can observe surface cracks based on the presence of the particles remaining in cracks after their removal from the surface of the structure (Mix, 1987). Typically, magnetic NDT methods are used to identify the location and condition of metal used in reinforced concrete structures (Malhotra, 1984).
2.2.2.4 Electrical
Eddy current monitoring is a cross-over technique that applies to both magnetic and electrical NDT methods. As previously mentioned, the eddy currents generated by passing alternating current through coils on the surface of the structure can be affected by many structural variables. These variables include flaws, the size of the specimen, the electrical conductivity of the structure, and its magnetic permeability. Other electrical methods include the measurement of electrical resistivity (which can indicate cracks, porosity, sample dimensions, and the lattice structure of the material), electrostatic field generation (for detection of cracks in porcelain coatings), and triboelectric testing (for detection of variation in metal composition based on the voltage produced by friction effects between two metals) (Halmshaw, 1987). In terms of concrete evaluation, electrical NDT methods can be used to determine concrete thickness, the location and condition of metal reinforcements, and the moisture content of the structure (Malhotra, 1984).
2.2.2.5 Penetrate
One of the oldest NDT techniques, penetrant flaw detection is also one of the easiest methods for detecting surface-breaking discontinuities. The earliest example of penetrant flaw detection was referred to as the oil and whiting technique. Oil would be applied to the surface of a specimen and allowed to soak in. After the excess oil was removed from the surface, calcium carbonate powder would be applied to the surface of the structure. Any surface cracks or discontinuities would become visible as oil migrated into the powder, or whiting, leaving a reduction in whiteness on the surface of the cracked area (Halmshaw, 1987). In 1941, fluorescent and visible dyes were added to the penetrant by Robert and Joseph Switzer, greatly improving the technique (Mix, 1987). Today, oils have widely been replaced with fluorescent penetrants, which become visible under ultraviolet (UV) light (DiMambro, Ashbaugh, Nelson, & Spencer, 2007). Penetrant testing can be used on a wide range of materials, typically metals, alloys, ceramics, and plastics. The method has a reputation for being unreliable, but this is frequently attributed to improper pre-cleaning processes (Halmshaw, 1987).
2.2.3 Other methods
Another electromagnetic NDT method, ground penetrating radar (GPR), or electromagnetic reflection, can also be used to evaluate concrete structures. However, unlike magnetic NDT methods, GPR cannot be used to investigate ferromagnetic materials. Electromagnetic pulses are emitted from a transmitter antenna and then recorded by a receiver antenna. As the electromagnetic energy travels through the structure and comes in contact with an interface, part of the energy is transmitted and part is reflected. Flaws are typically detected by comparing the permittivity of one material to another; flaws such as cracks and voids contain air pockets with permittivity values different from those of the concrete. GPR can be used to determine the thickness of concrete structures and the location of reinforcement material and void spaces, as well as to measure material properties such as humidity and air content (Hugenschmidt, 2010). Because water is a good absorber of electromagnetic energy, GPR is also well suited for determining the water content of concrete structures (Cantor, 1984).
As stresses are applied to certain structures, elastic acoustic waves are discretely produced within the structure; hence this NDT method is referred to as the acoustic emission method. These acoustic events can be measured on the surface of the structure by transducers, which can be used to locate regional cracks or sliding planes within the structure and to predict failure of the structure if high stresses are present. Similar to the study of earthquakes, the acoustic energy produced by these structures can range from 0.001 to 10 Hz and can be continuously monitored (Halmshaw, 1987). One consideration with the acoustic emission NDT method is that a structure subjected to a specific load will often produce acoustic energy but will then cease emitting energy until that load is exceeded, even if the structure is unloaded and the original stress is reapplied. This phenomenon is referred to as the "Kaiser effect" and makes acoustic emission an ideal NDT method for determining and predicting failure criteria of structures (Mix, 1987). For concrete structures, the Kaiser effect has been observed over unloading durations of approximately two hours, and it has been predicted that over longer periods the autogenic "healing" of concrete structures may negate the Kaiser effect (Bungey & Millard, 1996).
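The Kaiser effect described above amounts to a simple rule: emission resumes only when the previous maximum load is exceeded. A minimal sketch of that rule, with hypothetical load values:

```python
# Minimal model of the Kaiser effect: a specimen emits acoustic energy
# only while the applied load exceeds every load it has seen before.

def kaiser_emissions(load_history):
    """Return a True/False flag per load step: does this step emit?"""
    previous_max = 0.0
    flags = []
    for load in load_history:
        flags.append(load > previous_max)
        previous_max = max(previous_max, load)
    return flags

# Load, unload, reload to the same level (silent), then exceed it.
print(kaiser_emissions([10, 20, 15, 20, 25]))
# [True, True, False, False, True]
```

The third and fourth steps are silent even though stress is reapplied, which is exactly what makes the effect useful for inferring the largest load a structure has previously carried.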
Another NDT method specific to concrete and cement structures is the measurement of air permeability through the structure. While the main property measured is the permeability of the structure, other properties, such as microcracks and porosity, can also be determined (Hansen, Ottosen, & Peterson, 1987). Permeability is determined (usually through laboratory tests) by injecting an inert gas such as nitrogen at a steady flow rate into the sample and measuring the pressure differential and flow rate of the gas. Choinska, Khelidj, Chatzigeorgiou, and Pijaudier-Cabot found that the air permeability of concrete samples decreases with the original loading of stresses on the samples; however, as micro-cracking begins to take place in the sample, the permeability increases, and it increases further after the sample is unloaded. Temperature has also been seen to affect the permeability of concrete: due to the thermal expansion of air within the pore space of the structure, the permeability increases as the temperature of a sample increases (Choinska, Khelidj, Chatzigeorgiou, & Pijaudier-Cabot, 2007). Permeability of concrete structures has also been used to characterize the moisture condition of the sample (Abbas, Carcasses, & Ollivier, 1999) as well as the additive components that might be part of a cementitious mix, such as fly ash, silica fume, limestone filler, and granulated blast furnace slag (Hui-sheng, Bi-wan, & Xiao-chen, 2009).
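A laboratory test of the kind described above is often reduced with the compressible-gas form of Darcy's law. The sketch below is a hypothetical reduction: the specimen geometry, pressures, and flow rate are invented, and the k = 2·Q·P_out·μ·L / (A·(P_in² − P_out²)) form is one common convention, not necessarily the one used in the cited studies.

```python
# Hypothetical steady-state nitrogen permeability reduction using the
# compressible-flow form of Darcy's law:
#     k = 2 * Q * P_out * mu * L / (A * (P_in**2 - P_out**2))

def gas_permeability_m2(flow_m3_s, p_in_pa, p_out_pa,
                        viscosity_pa_s, length_m, area_m2):
    """Intrinsic permeability (m^2) from a steady gas-injection test."""
    return (2.0 * flow_m3_s * p_out_pa * viscosity_pa_s * length_m
            / (area_m2 * (p_in_pa**2 - p_out_pa**2)))

k = gas_permeability_m2(flow_m3_s=1.0e-6,        # measured outlet flow
                        p_in_pa=3.0e5,           # 3 bar injection pressure
                        p_out_pa=1.0e5,          # atmospheric outlet
                        viscosity_pa_s=1.76e-5,  # nitrogen at ~20 C
                        length_m=0.05,           # 50 mm specimen
                        area_m2=7.85e-3)         # 100 mm diameter face
print(f"{k:.2e} m^2")
```

Tracking how k shifts between loaded, micro-cracked, and unloaded states is what lets permeability serve as a damage indicator in the studies cited above.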
2.3 Impact-Echo Sonic Waves
2.3.1 Theory
Like the ultrasonic NDT method, the impact-echo NDT method relies on the movement of energy waves through a structure. The impact-echo method was developed in the mid-1980s by what is now the National Institute of Standards and Technology (NIST), specifically as an NDT method for concrete. This method evaluates the vibrational response of the concrete structure as a physical impact is applied to the surface. Waves propagate through the structure after impact (usually with a hammer or metal device) and are reflected off the boundaries between the top and bottom of the sample. As multiple reflections occur, a resonance phenomenon develops that, through the resulting frequency spectrum of the sample, can be used to determine the thickness of the sample (Abraham & Popovics, 2010). The frequency response of the sample is usually measured by accelerometers or geophones that record the vibrations of the sample in the form of voltage. A Fourier transform (see next sub-section) is then needed to produce the frequency spectrum of the resonance in the sample. The basic layout and a sample frequency spectrum of the impact-echo test can be seen below in Figure 2-3.
Figure 2-3. General layout and frequency response of solid (left) and voided (right) concrete samples using
impact-echo NDT
Impact-echo has many applications as an NDT method for concrete structures, including determining the thickness of the structure, internal defect detection, and void detection (Abraham & Popovics, 2010). The impact-echo method has also been used to evaluate the loss of contact between metal reinforcement and the concrete, and the condition of the reinforcement material. The biggest difference between the ultrasonic method and impact-echo, besides the instrument used for the energy source, is that ultrasonics only provides information on properties that exist along the ray path traveled by the wave. Because impact-echo looks at frequency responses, the method can be used to evaluate the entire structure. The disadvantage of this approach is that impact-echo NDT methods have difficulty identifying the exact locations of defects and voids (Malhotra, 1984). This problem, however, can be addressed by taking multiple measurements with multiple receivers on the surface of the sample (Abraham & Popovics, 2010).
2.3.2 Impact-Echo and FFT
In the impact-echo method, the impact on the surface of the structure creates both P and S waves, although the P waves are the primary focus of the NDT method. The displacement of the P waves is larger than that of the S waves; therefore, the P waves are more likely to reflect off boundaries within the structure and create the resonance phenomenon (Cheng & Sansalone, 1993). The geophone or transducer records the observed displacement as a time-domain signal (voltage measured over time). The most significant contribution to the impact-echo NDT method came in 1986, when Carino, Sansalone, and Hsu observed that flaw detection in concrete structures was possible by transforming the time-domain signal to the frequency domain (amplitude measured over frequency) using a fast Fourier transform (FFT). From the observed frequency spectra of lab and field samples, Carino, Sansalone, and Hsu developed the equation seen below (equation 2-1) to determine the approximate thickness between the surface and the flaw within the structure creating the reflection (Carino, Sansalone, & Hsu, 1986).
T = C_p / (2f); (2-1)

where T is the depth of the reflection (bottom of structure or flaw),
C_p is the P wave speed through the thickness of the concrete structure, and
f is the observed frequency of the P wave reflection.
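A worked use of equation 2-1 is shown below; the wave speed and resonance peak frequencies are hypothetical values for illustration.

```python
# Equation 2-1: depth to the reflecting interface, T = C_p / (2 f).
# The wave speed and peak frequencies below are hypothetical.

def reflection_depth_m(p_wave_speed_m_s: float, peak_frequency_hz: float) -> float:
    """Depth (m) to the bottom of the structure or to an internal flaw."""
    return p_wave_speed_m_s / (2.0 * peak_frequency_hz)

# A 4000 m/s P wave resonating at 8 kHz implies a 0.25 m thick section;
# a shallower flaw shows up as a higher-frequency peak.
print(reflection_depth_m(4000.0, 8000.0))   # 0.25
print(reflection_depth_m(4000.0, 16000.0))  # 0.125
```

This inverse relationship is why a spectrum peak that appears above the expected full-thickness frequency signals a reflector, such as a void, partway through the section.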
The use of FFT analysis in the impact-echo NDT method has been standard since 1986 and has been applied in both laboratory and field tests to observe delaminations in concrete structures (McCann & Forde, 2001), to correlate the frequency spectrum with the strength characteristics of concrete (Cho, 2003), and even to assess the corrosion damage of rebar in reinforced concrete structures (Liang & Su, 2001). It has also been suggested that impact-echo may determine the porosity and water content of structures (Carino, 2001). Comparing NDT methods, Krause, et al. found that the impact-echo method showed similar ability to detect flaws within the subsurface of concrete structures, as well as the thickness of the structure itself; the other NDT methods they examined included radar and ultrasonics (the latter using six different processing techniques) (Krause, et al., 1997). Overall, the impact-echo NDT method, particularly with the development of FFT analysis, provides a cheap, efficient, and fairly accurate way to locate boundaries within a concrete structure, as well as to evaluate other physical properties necessary for structural integrity.
2.3.3 Fourier transform
A Fourier analysis is often referred to as "frequency analysis" and is the mathematical science of representing any given function as a superposition of sinusoids, each possessing a distinct frequency. A sinusoid is a linear combination of the functions cos 2πsx and sin 2πsx, where x is a real variable and s is a nonnegative, real constant, or the frequency of the sinusoid. The general equation for most Fourier analyses can be seen below in equation 2-2.
f(x) = Σ_(s∈F_f) (A_s(f) cos 2πsx + B_s(f) sin 2πsx); (2-2)

where F_f is a naturally occurring set, and
A_s(f) and B_s(f) are the coefficients of the function f
The equation above represents the most reduced, general function of a Fourier analysis (Stade, 2005). In order to take a series of data and evaluate its frequency spectrum, a Fourier transform must take place, of which there are many. Primarily, a form of discrete Fourier transform (DFT) is used to take data and continually produce the corresponding frequency spectrum of the data. This is called a fast Fourier transform (FFT). DFT analysis and FFT analysis produce the same results, but with the advancements in computational power in recent years, the FFT can reduce computational time by a factor of 200 when the number of data points is only 1024. Because of this, FFT is primarily used for larger data sets or continuous data (Walker, 1996). By taking the basic equation 2-2 and re-expressing the function in exponential form using equations 2-3 and 2-4, it can eventually be reduced to the final Fourier transform (f̂(s)) equation seen below in equation 2-5 (Stade, 2005).
cos 2πsx = (e^(j2πsx) + e^(−j2πsx)) / 2 (2-3)

sin 2πsx = (e^(j2πsx) − e^(−j2πsx)) / (2j) (2-4)

where e is the base of the natural logarithm, and
j is the imaginary number √−1

f̂(s) = ∫_(−∞)^(∞) f(x) e^(−j2πsx) dx (2-5)
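The DFT/FFT relationship described above can be exercised numerically; NumPy's fft module implements the FFT discussed here, and the sketch below (with an arbitrary 500 Hz test signal, a hypothetical stand-in for recorded data) recovers the sinusoid's frequency from the spectrum.

```python
import numpy as np

# Hypothetical test signal: a 500 Hz sinusoid sampled at 8 kHz for 1024 points.
fs = 8000.0
t = np.arange(1024) / fs
signal = np.sin(2 * np.pi * 500.0 * t)

# The FFT produces the same spectrum a direct DFT would, at far lower cost.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

print(freqs[np.argmax(spectrum)])  # 500.0
```

With 1024 points the FFT needs on the order of N log N operations rather than the N² of a direct DFT, which is the speed advantage Walker quantifies.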
FFT analysis has been used in a wide array of fields, from mathematics to finance, and even in vibration analysis of mechanical structures. A series of displacement, velocity, and acceleration transducers has been used to evaluate the vibrations of parts to help with the prediction of mechanical failure (Ramierz, 1985). Chakrabarti, in 1987, rewrote the FFT equation to better apply to wave energy spectral density. This equation, 2-6, can be seen below and serves as an analog to the total energy of the elastic waves through concrete as part of the impact-echo method. To evaluate the entire spectrum (S(w)) of wave energy, equation 2-7 is derived. The resulting spectrum is used to evaluate energy density along different frequencies for the data set (Rahman, 2011).
E = (1/2) ρg ∫_(−∞)^(∞) |η(t)|² dt (2-6)

where E is the total energy of the wave (per unit surface area),
ρ is the density,
g is the acceleration due to gravity, and
η(t) is the wave elevation
S(w) = (1/T_s) |Σ_(n=1)^(N) η(nΔt) e^(j2πf(nΔt)) Δt|² (2-7)

where T_s is the total data length,
N is a subsection of the total data points, and
Δt is a constant time increment over N
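A discrete version of equation 2-7 can be sketched as follows; the wave-elevation record is synthetic, and the spectrum is evaluated at the nonnegative DFT frequencies (the sign of the complex exponent does not affect the squared magnitude).

```python
import numpy as np

def energy_spectrum(eta, dt):
    """Discrete energy-density spectrum following equation 2-7:
    S(f) = (1 / T_s) * |sum_n eta(n*dt) * exp(j*2*pi*f*(n*dt)) * dt|^2."""
    total_time = len(eta) * dt            # T_s, the total data length
    transform = np.fft.rfft(eta) * dt     # the bracketed sum at each frequency
    freqs = np.fft.rfftfreq(len(eta), d=dt)
    return freqs, np.abs(transform) ** 2 / total_time

# Synthetic wave-elevation record: a 2 Hz sine sampled at 100 Hz for 5 seconds.
dt = 0.01
t = np.arange(500) * dt
freqs, spectrum = energy_spectrum(np.sin(2 * np.pi * 2.0 * t), dt)
print(freqs[np.argmax(spectrum)])  # 2.0
```

The energy density concentrates at the frequency of the underlying wave, which is how such a spectrum is read for the impact-echo analog described above.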
2.4 Tracer Gases
2.4.1 Support of Ventilation Characterization
The ventilation design and support of underground mining activities is perhaps the most
important operation that takes place in an underground mine. While the initial design of these airways is
important, constant surveys are necessary to ensure the quantity and quality of air in the mine is up to
mandatory requirements. These surveys typically address the quantity, pressure, temperature, and mixture
of gases present in the mine, using a variety of methods. Quantity surveys are typically completed by
measuring the cross-sectional area of mine airways, and then corresponding velocity moving through the
airway using anemometers, pitot-static tubes, or velometers (Roberts, 1960). Pressure surveys are
completed by using a combination of pitot tubes and pressure gages, or barometers, and are done to
determine the pressure drop in airways due to friction, shock, and increase in kinetic energy (Hall, 1981).
Temperature surveys take place in order to determine the density of the air, the humidity, and also the
cooling power of the ventilation system. Both dry and wet bulb (dry temperature plus the evaporative rate
of air) temperatures are measured in underground mines by using sling psychrometers or whirling
hygrometers (Hartman, Mutmansky, Ramani, & Wang, 1997). Air quality surveys typically concern the
composition of the air underground, especially methane, carbon dioxide, carbon monoxide, and other gases
and dust. The quantification of these gases can be done underground using portable devices such as stain
tube chemical sensors or infrared sensors, but are typically done by taking samples underground and
transporting them to a laboratory station or portable gas chromatograph (Timko & Derick). Methane is one
underground gas that must be monitored almost continuously as it is the most commonly occurring
combustible gas found in underground mines. The monitoring is done by using methanometers that can
accurately monitor methane level to ยฑ0.1% (Hall, 1981).
Tracer gases are a technique used to determine ventilation characteristics, specifically the
quantity of air, without having to measure the cross-sectional area around the airway, which has an
inherent error in the measurement. By releasing a known, non-reactive chemical gas with no background
presence in the mine, no toxicity, combustibility, or adverse health effects, one can measure the small
quantities of the tracer present (less than ppm) to calculate the quantity of air present in the
mine (Hartman, Mutmansky, Ramani, & Wang, 1997). Tracer gases have been used over the last half-
century to more accurately map the flow and quantity of air moving in underground mines. The origins of
tracer studies in mines began with simple observations of chemical smoke (stannic chloride, titanium
tetrachloride, and pyrosulphuric acid) or dust to visually detect and quantify the movement of airflow in
underground mines. These early methods were limited to slow moving airways and were soon replaced by
introducing non-naturally occurring chemicals (nitrous oxide) to the airways and quantifying the amount
of chemicals downstream of the release point using analytical chemistry techniques (infra-red analysis)
(Roberts, 1960). Sulfur hexafluoride (SF ) quickly replaced nitrous oxide and other chemicals due to the
6
ease of analysis to measure low concentrations and ease of transportation. Other chemical tracers were
difficult to detect at lower concentrations, and while radioactive tracers were easier to detect at low
concentrations, the transportation and handling of radioactive tracers posed health risks to workers and
surveyors in the mine (Thimons & Kissell, 1974). In recent years perfluorinated tracers (PFTs), such as
perfluoromethylcyclohexane (PMCH), have been used in place of or in conjunction with SF6 to survey
mine ventilation networks (Jong, 2014).
There are two commonly used tracer gas release methods for ventilation analysis in underground mines: a tracer continuously released and monitored in the airway, or a known quantity of the tracer released once and monitored downstream. The advantage of the first method is that once mixing and equilibrium are reached, a single sample can be taken to determine the quantity of air at the sampling station; the second method requires much less tracer to be purchased and released, but does require either continuous or extremely frequent sampling to determine the airflow (Thimons & Kissell, 1974). The equations for determining airflow (Q) (m3/s) using the constant tracer release method and the single release method can be seen below in equations 2-8 and 2-9, respectively.
Q = Q_g / C (2-8)

Q = Q_g / ∫_(τ0)^(τf) C_τ dτ or Q = Q_g / (C_avg (τ_f − τ_0)) (2-9)

where Q_g is the feed rate of the tracer (m3/s),
C is the concentration of the tracer gas (m3/m3),
τ_0 is the time at which the tracer is first measurable (min),
τ_f is the time at which the tracer is no longer measurable (min),
C_τ is the concentration at time τ (m3/m3), and
C_avg is the average concentration taken over the time (τ_f − τ_0) (m3/m3)
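As a worked example of equations 2-8 and 2-9, the sketch below uses invented survey numbers; the minutes-to-seconds conversion is added here as an assumption so that Q comes out in m3/s, and the equation 2-9 integral is approximated by its C_avg form.

```python
def airflow_constant_release(feed_rate, concentration):
    """Equation 2-8: Q = Q_g / C, for a continuously released tracer.
    feed_rate in m3/s; concentration as a volume fraction (m3/m3)."""
    return feed_rate / concentration

def airflow_single_release(tracer_volume, c_avg, t0_min, tf_min):
    """Equation 2-9 with the integral approximated as C_avg * (tf - t0).
    tracer_volume in m3; times in minutes, converted to seconds for m3/s."""
    return tracer_volume / (c_avg * (tf_min - t0_min) * 60.0)

# Hypothetical survey: SF6 fed at 1e-5 m3/s reaches a 0.5 ppm plateau.
print(airflow_constant_release(1e-5, 0.5e-6))   # ~20 m3/s

# Hypothetical slug test: 0.01 m3 released, 0.2 ppm average over 25 minutes.
print(airflow_single_release(0.01, 0.2e-6, 5.0, 30.0))
```

Note that neither calculation requires the cross-sectional area of the airway, which is the measurement error the tracer methods avoid.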
2.4.2 Sulfur Hexafluoride (SF6)
As previously mentioned, since the early 1970s SF6 has been the mining industry's tracer gas of choice. A decade earlier, SF6 was primarily being used for atmospheric tracer studies (Turk, Edmonds, & Mark, 1967) and eventually was determined to be a viable substitute for carbon tetrachloride (CCl4) as a fresh and oceanic water tracer (Bullister, Wisegarver, & Menzia, 2002). SF6 has also been used in ventilation studies of buildings and fume hoods, with the ductwork of the homes acting similarly to airways in underground mines (Drivas, Simmonds, & Shair, 1972). Originally developed as an electrical insulator for circuit breakers, cables, mini-power stations, and transformers due to the banning of polychlorinated biphenyls, SF6 is an ideal tracer due to its physical properties. SF6 is an inorganic,
nonflammable, odorless, colorless, and nontoxic gas, typically described as inert. SF6 is capable of being detected at low concentration levels due to its nature as a good electron scavenger and its high breakdown strength. SF6, due to the shielding of the sulfur atom by the six fluorine atoms, is impeded from having kinetic reactions with water, alkali hydroxides, ammonia, or strong acids, making it a fairly unreactive gas (Nakajima, Zemva, & Tressaud, 2000).
In the mining industry, SF6 has been used in both coal and metal/non-metal underground mines to look at airflow patterns, leakage rates, and diffusion rates, and even to confirm physical survey tools, as Stokes, Kennedy, and Hardcastle proved by calculating the volume of a single stope in Ontario, Canada by quantifying the amount of airflow through the stope and the average residence time, both observed by continuous SF6 monitoring (Stokes, Kennedy, & Hardcastle, 1987). The 1974 U.S. Bureau of Mines report on the gaseous tracer in ventilation surveys using SF6 was one of the first documented reports of SF6 successfully being used and endorsed by a government body in the U.S. The report showed how releasing SF6 in the Bureau's Safety and Research Mine in Bruceton, PA and monitoring the concentration could be used to determine the airflow moving through the airways. The report also documented a field test conducted in an underground limestone mine where air velocity was measured using SF6 tracer techniques and compared to traditional smoke tests and anemometers. The tracer gas technique compared favorably (Thimons, Bielicki, & Kissell, 1974). SF6, as a tracer, has been used to monitor leakage through and around permanent mine stoppings (seals) at lower levels (less than 20 ft3/min) than observed before (Matta, Maksimovic, & Kissell, 1978). By sampling for SF6 across different areas of an airway, it is also possible to determine how well air is being mixed or if there are any stagnant or eddy zones located along the airway (Kissell & Bielicki, 1974). SF6 has been an invaluable tool in the mining industry over the last 40 years for its ease of use, sensitivity, ability to provide information in traditionally inaccessible regions of the mine, and the amount of information that can come from monitoring SF6 concentrations.
2.4.3 Perfluorinated Tracer Compounds (PFTs)
While SF6 has been the standard mine-related tracer gas since the early 1970s, another group of tracers has become more commonplace in structural ventilation studies: perfluorocarbon tracers (PFTs) (Leaderer, Schaap, & Dietz, 1985). Perfluorocarbon tracers have been predominantly used in atmospheric tracer studies, where a small amount of tracer is released in the atmosphere and monitored to help confirm atmospheric dispersion models that have been created to simulate air pollutant behavior (Ferber, et al., 1980). Perfluorocarbon tracers are stable, non-toxic, organic compounds that typically consist of an alkane group of six carbon atoms, surrounded by a combination of fluorine atoms and more carbon atoms in the form of trifluoromethyl groups (Kirsch, 2004). One of the advantages of PFTs compared to SF6 is that, due to the ever increasing sensitivity of tracer detection and the natural background abundance of tracers, most PFTs have a much lower background than SF6. For example, SF6 is approximately 250 times more abundant in the atmosphere than perfluoromethylcyclohexane (PMCH) (C7F14) (Ferber, et al., 1980). PMCH, along with many other PFT tracers, is liquid at standard temperature and pressure, yet volatile. To use this property as an advantage, Brookhaven National Laboratory (BNL) developed passive release sources that house a small amount of liquid PFT, which then becomes a vapor and is slowly released into the ventilation network through a permeable silicone rubber plug. This produces a constant, temperature-dependent release of the PFT into the network. Using multiple tracers, BNL was able to map complex ventilation networks found in modern HVAC (heating, ventilation, and air-conditioning) systems (Dietz, Goodrich, Cote, & Wieser, 1986). It is worth noting, as Sherman did, that while PFTs are extremely useful and applicable, there is a certain amount of uncertainty and error that comes with using integrated PFTs for building air flow calculations, compared to real-time measurement systems (Sherman, 1989).
There has been virtually no widespread use of PFTs to assist in mine ventilation surveys, but novel work has recently been completed by a research group at Virginia Tech, which used PMCH along with SF6 to characterize the airflow around a longwall panel, across the face, and through the gob of a western U.S. underground coal mine (Jong, 2014). Also, BNL and the New York City Police Department recently completed an airflow study of the New York subway system using PFTs (Frazier, 2013). Based on the subway study, and the series of building ventilation studies BNL has conducted, it is relatively safe to assume there is room for the use of PFTs in underground mine ventilation studies.

Another interesting use of PFTs is in the field of carbon sequestration and CO2 leakage monitoring. In recent studies, PFTs have been injected along with CO2 in sequestration studies in coal seams (Ripepi, 2009), saline aquifers (Pruess, et al., 2005), and depleted oil reservoirs (Wells A. W., et al., 2007). In many of these studies PFTs are monitored at offset wells near the CO2 injection well, but soil testing is also done to monitor for PFTs that would indicate potential CO2 leaks through the overburden. This movement of PFTs over long distances and through solid layers of material indicates the potential for these tracers to move through solid structures, similar to SF6 through underground mine seals (Matta, Maksimovic, & Kissell, 1978).
2.4.4 Basic Chromatography Techniques
The sampling of tracer gases from mine airways is an important component of a tracer gas analysis, but the actual detection and quantification of the tracers require the use of analytical chemistry in order to both separate the desired tracer from the rest of the compounds present in the air sample and quantify the amount of tracer present. Both of these operations are made possible using an analytical technique known as gas chromatography. While the foundation for the field began in the mid-1800s with observations from the Prussian doctor Friedrich Runge, who observed the separation of different compounds on filter paper (Szabandvary, 1966), modern gas chromatography (GC) took root in 1952 when Martin and James separated and quantified ammonia from methylamines using what was referred to as gas-liquid partition chromatography (Martin, James, & Smith, 1952). The rudimentary yet revolutionary device used by Martin and James, which involved a homemade microcolumn packed with Celite (silica, SiO2), a micrometer burette, and a titration cell to separate the compounds, has been replaced with housed instruments that contain the hardware and software to separate and quantify compounds that can be injected both manually and automatically. Compounds can be identified by the order in which they are separated in the columns used for GC, and then quantified using the detector systems used.
The basic set-up for a modern GC instrument consists of three major regions: the injector port,
column oven, and the detector. Seen in Figure 2-4 below, the basic layout of a gas chromatograph
involves injecting a small (less than a milliliter) amount of sample into the heated injector port, which
vaporizes the sample. The carrier gas, a high purity, inert gas, is used not only to transport the sample
through the chromatograph, but also serves as a matrix for the detector to measure the compounds of the
sample. As the vapors of the sample travel with the carrier gas through the column of the chromatograph,
certain compounds begin to interact with the stationary phase found within the column (McNair & Miller,
1997).
Figure 2-4. Typical gas chromatograph layout as described by McNair and Miller
The two main types of columns used are packed columns and open tubular (or capillary) columns.
Most of the GC industry has begun to transition to open tubular columns, but the function of the two
columns is the same: use the various types of stationary phases in the columns to help separate the desired
compounds. Packed columns were the original GC columns used through the early 1980s and the first to become commercially available (Poole, 2012). These columns are typically made of 0.25 to 0.125 inch stainless steel tubing, four to ten feet in length, and, as the name suggests, packed with various "solid supports" or particles that serve as the stationary phase for the column (McNair & Miller, 1997). Open tubular
columns are much smaller in diameter than packed columns (ranging from 530 down to 100 µm) and longer (around 30 meters), and are made by drawing fused silica into long, thin-walled columns (Poole, 2012). Inside these columns the stationary phase is applied to the inner surface, with various thicknesses, to coat the inner
wall of the open tubular columns (Grob & Barry, 2004). The stationary phase for open tubular columns
can be either liquid or solid and is the primary separation force behind GC. As shown in Figure 2-5, as the
sample moves through the column, based on the stationary phase and the types of compounds present in
the sample, different compounds absorb, or partition, into the stationary phase in the column, where after
a moment or two the compounds will be released back into the mobile phase (or carrier gas) area of the
column (McNair & Miller, 1997). The absorption is due in part to the chemical nature of the compound
and stationary phase, but also relies on the flow rate of the mobile phase and temperature of the column,
which can be programmed to change as the analysis continues (Chromedia, 2014). An open tubular
column coated with aluminum oxide (Al2O3) as the stationary phase is used in separating and identifying SF6 and PFT compounds. This column and phase have been useful in previous Virginia Tech tracer gas studies (Jong, 2014) (Patterson, 2011).
Figure 2-5. Visual representation of the separation of compounds from a sample in an open tubular column
The third region of importance in the chromatograph is the detector. There are three main detector
types commonly found in GC โ thermal conductivity (TCD), flame ionization (FID), and electron
capture (ECD). It is in the detectors that the separated compounds produce some form of electrical
response that can be recorded by the data system of the chromatograph. The response of the detector is
then reported in the form of magnitude, or peaks, of the signal compared to the background noise. The resulting graph is referred to as a chromatogram and consists of a baseline and a series of peaks, each representing a different compound and the amount present, although the magnitude of the peak per amount of compound (the response factor) varies from compound to compound and with the reference gas (carrier gas). Typically the carrier gases (helium or hydrogen) have high thermal conductivity (watts per meter kelvin) values, and the presence of the analytes in the carrier reduces this value, producing a response on the data
collection system for the TCD. The thermal conductivity is measured by either using heated filaments or
thermistors in a Wheatstone bridge in most TCDs (Sevcik, 1976). FID is one of the most widely used
detectors but is limited to organic compounds due to the nature of the detector. The FID functions by running the sample through an ignited flame source and measuring the resulting ions (Jorgnsen & Stamoudis, 1990).
The ions from the combusted sample create a signal in an electrode stationed above the flame (Harvey,
2014). For the tracer gas studies at Virginia Tech and analysis that requires the detection of
electronegative functional groups such as fluorine, chloride, and bromine groups, the ECD is typically
considered the best detector. The ECD detector houses a radioactive source (63Ni) that emits beta particles
into the make-up gas stream coming out of the column (nitrogen). The beta particles and N2 then react to form N2+ with two free electrons (Hill & McMinn, 1992). What is eventually created by the constant flow of gas and emission of beta particles is an "electron cloud." Prior to the separated compounds
entering this cloud, the ECD response is measured by a cathode. When electronegative compounds enter
the cloud, the available electrons become attached and leave the cloud with the compounds. This
produces a reduction in the cloudโs signal, or negative signal, that is then related to the presence of certain
compounds (McNair & Miller, 1997).
Regardless of the detector, the response is reported by the data system in the units of peak area
counts. These units reflect the response of the detector to the type of compound present and the amount
present. In order to determine the concentration of a specific compound within a sample, it is necessary to
develop and build a calibration curve. By injecting known concentrations of the compound in question,
one can construct a graph plotting the peak area response versus the known concentration injected. A
curve, typically linear, can then be applied to the plot to determine an equation capable of calculating the
concentration (typically in ppm or ppb) of a compound, based on the peak area counts reported by the
data system and detector (Thompson, 1977). It is important that the sample points of interest fall within
the range of points used for creating the calibration curve. Although most curves behave linearly over a
small range of peak area counts, the curve begins to follow a power or quadratic function as the range of points increases. Because of this, it is important that sample points lie along the interpolated portion of the calibration curve, and not the extrapolated function (McNair & Miller, 1997).
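The calibration procedure above can be sketched in a few lines; the standard concentrations and peak areas below are invented for illustration, and the range check enforces the warning against extrapolating beyond the calibrated points.

```python
import numpy as np

# Hypothetical calibration standards: known concentrations (ppb) and the
# peak area counts each produced on the detector.
known_ppb = np.array([10.0, 50.0, 100.0, 250.0, 500.0])
peak_areas = np.array([820.0, 4100.0, 8150.0, 20400.0, 41000.0])

# Least-squares linear calibration curve: area = slope * ppb + intercept.
slope, intercept = np.polyfit(known_ppb, peak_areas, 1)

def area_to_ppb(area):
    """Convert a sample's peak area to concentration, interpolating only."""
    ppb = (area - intercept) / slope
    if not known_ppb.min() <= ppb <= known_ppb.max():
        raise ValueError("sample falls outside the calibration range")
    return ppb

print(area_to_ppb(16300.0))  # roughly 199 ppb on this invented curve
```

In practice the curve is rebuilt whenever the instrument conditions change, since the response factor depends on the detector, the compound, and the carrier gas.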
2.4.5 Basics of Mass Spectrometry
Many of the fundamentals from gas chromatography are also carried over to mass spectrometry,
which is often called gas chromatography mass spectrometry (GC-MS). The injection of samples and
separation using a packed or open tubular column in a heated zone remain the same as in a typical GC
analysis, however the separated compounds travel to the mass spectrometer (MS) portion of the
instrument rather than a detector. Once entering the MS the analytes are ionized. The compounds are then
detected and identified by the mass analyzer (Niessen, 2001). The basic layout of a GC-MS instrument
can be seen below in Figure 2-6. All of the components of the MS are under a high vacuum, due to the fact that gas continuously flows into the MS and must also be removed at a rate that maintains the desired operating pressure (Sparkman, Penton, & Kitson, 2011). There are many forms of ionization techniques and mass analyzers used in GC-MS, but the overall result, and the biggest advantage of GC-MS over GC, is that compounds can be successfully identified by their mass spectra (Niessen, 2001). The
identification of compounds in GC is based on the order in which the responses, or peaks, appear on the
chromatogram. By sampling known compounds and looking at the retention time (the time from injection
to peak) one can identify unknown compounds by their retention times. There are, however, many compounds that can potentially share retention times (McNair & Miller, 1997), which is why GC-MS provides a large advantage over typical GC.
Figure 2-6. Typical GC-MS layout
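The retention-time identification described above, and the ambiguity that GC-MS resolves, can be sketched as follows; the table of retention times is hypothetical and would in practice be built by running known standards on the same column and temperature program.

```python
# Hypothetical retention times (minutes) measured from known standards.
RETENTION_TIMES = {"SF6": 1.8, "PMCH": 6.4}

def identify(peak_rt, tolerance=0.1):
    """Return every known compound whose retention time matches the peak.

    More than one match, or a co-eluting unknown, is exactly the ambiguity
    that mass-spectral identification in GC-MS removes."""
    return [name for name, rt in RETENTION_TIMES.items()
            if abs(rt - peak_rt) <= tolerance]

print(identify(6.35))  # ['PMCH']
print(identify(3.0))   # [] -> an unidentified peak
```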
In the ion source region there are three basic types of ionization that can take place: electron
ionization; chemical ionization; and negative chemical ionization. Ionization takes place because for each
molecule of the same compound, ionized under the same conditions, the same pattern and quantity of ions
will be formed. This provides a โfingerprintโ unique to each compound by which the compound can be
identified and quantified (McMaster & McMaster, 1998). Electron ionization is an ionization technique
that exposes the sample analytes to a stream of electrons from heated tungsten or rhenium filaments in the
source. The stream of electrons contains enough energy that, when coming in contact with a neutral
charge compound, the electrons interact with the valence electrons of the sample and remove one to
create a positively charged ion (Chromedia, 2014). Due to this interaction and removal of electrons, this
ionization method is sometimes referred to as electron impact. Chemical ionization relies on the
interaction between the analytes' molecules and a reagent gas. Reagent gases used in chemical ionization vary, but the most common types are methane, ammonia, and isobutane. As in electron ionization, the reagent gas is bombarded with electrons. The ions created from the reagent gas then go on to ionize the analytes (Niessen, 2001). This ionization is referred to as "soft" ionization rather than "hard," as the ionization takes place through the analytes' interaction with ions rather than through impact by electrons, as in electron ionization (Chromedia, 2014). Chemical ionization can produce either positive or negative ions.
When negative ions are created, the process is then referred to as negative chemical ionization or electron
capture negative ionization (Sparkman, Penton, & Kitson, 2011). Electron ionization is considered the
most reproducible of the methods, while chemical ionization is more likely to produce the molecular ion
(molecular weight of the compound, plus a single electron) rather than fragments, and negative chemical
ionization is more efficient and sensitive than chemical ionization, but with poor reproducibility
(University of Kentucky, 2014).
While the ionization, and the creation of ionization fragments, is an important step in GC-MS, the quantification and identification of the compounds in the sample take place in the mass analyzer. To move the fragments into the mass analyzer, a repelling plate located in the ion source is given a charge of the same sign as the ions. This plate propels the fragments through a series of electronic focusing lenses into the mass analyzer, which is under a higher, secondary vacuum than the ion source region of the MS (McMaster & McMaster, 1998). There are a few different types of mass analyzers used, but the most widely used type is the quadrupole mass filter. As the name implies, this analyzer consists of four poles, two parallel in the x-axis and two in the y-axis (assuming the z-axis is the path of the ionized fragments moving through the MS), with superimposed direct and alternating currents in the form of an electrical field created at radio frequencies. The alternating currents and the mass-to-charge ratio (m/z) value dictate which fragments are allowed to enter the detector. If a specific m/z is selected in the quadrupole, only the corresponding fragments with the same m/z value will remain in the ion beam (Sparkman, Penton, & Kitson, 2011). Due to the small number of ions available in the MS, the ion stream, after being filtered by the mass analyzer, enters a continuous-dynode electron multiplier to increase the number of electrons entering the detector (Niessen, 2001). The detector typically used for GC-MS is a microchannel plate, which is a circular plate consisting of a series of hollow tubes. The electrons from the electron multiplier enter these tubes, which continue to multiply the number of electrons and create an electrical output that is then digitized and recorded (Sparkman, Penton, & Kitson, 2011).
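The m/z filtering role of the quadrupole can be mimicked in software on hypothetical scan data; the data structure below is illustrative only, though m/z 127 (the SF5+ fragment) is a genuine electron-ionization fragment of SF6.

```python
# Hypothetical GC-MS scans: (retention time in minutes, {m/z: intensity}).
scans = [
    (1.80, {127: 900.0, 89: 120.0}),   # SF5+ at m/z 127, from SF6
    (6.40, {350: 450.0, 69: 300.0}),   # a later-eluting compound
]

def extracted_ion_chromatogram(scans, target_mz):
    """Keep only one m/z per scan, much as a quadrupole set to a single m/z
    passes only the matching fragments through to the detector."""
    return [(rt, spectrum.get(target_mz, 0.0)) for rt, spectrum in scans]

print(extracted_ion_chromatogram(scans, 127))
# [(1.8, 900.0), (6.4, 0.0)]
```

Filtering on a characteristic fragment in this way is what lets GC-MS distinguish compounds that would share a retention time in a GC-only analysis.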
For the GC-MS analysis of sulfur hexafluoride and perfluorinated compounds, there are many possible configurations of instrumentation capable of separating the SF6 and PFTs from other compounds. SF6 separation and identification has been successful, and repeatedly documented, using ECD detectors (Harnisch & Borchers, 1996) (Harnisch & Eisenhauer, 1998) for GC and quadrupole mass analyzers (Sausers, Ellis, & Christophorou, 1986). For tracer gas studies conducted with the use of PFTs, analysis can also be completed using open tubular columns and an ECD in a GC instrument (Dietz & Cote, 1982) (Cooke, Simmonds, Nickless, & Makepeace, 2001). Recently, the sensitivity of PFT analysis has been greatly improved with the use of GC-MS, specifically with negative chemical ionization (Straume, Dietz, Koffi, & Nodop, 1998) (Simmonds, et al., 2002). Negative chemical ionization and GC-MS have been able to quantify PFTs at concentrations approximately ten times lower than traditional ECD methods (16 femtograms) (Begley, Foulger, & Simmonds, 1988) (Galdiga & Greibrokk, 2000).
Chapter 3: Assessment of Sonic Waves and Tracer Gases as Non-
Destructive Testing Methods to Evaluate the Condition and
Integrity of In-Situ Underground Mine Seals
*Note: The following chapter was published as part of the pre-prints of the 2014 Society of Mining,
Metallurgy, and Exploration (SME) Annual Conference held February 23-26th in Salt Lake City, UT, and
also presented there. This chapter is listed as Preprint 14-048 with authors K. T. Brashear, K. Luxbacher,
E. Westman, C. Harwood, B. Lusk, and W. Weitzel.
3.1 Abstract
Since the MINER Act of 2006, the minimum static load of in-situ underground mine seals has
been increased from 20-psi to either 50-psi if monitoring is conducted or 120-psi if left unmonitored.
These minimum strength requirements in seals must be designed, built, and maintained throughout the
lifetime of the seal. Due to this, it has become necessary to assess the effectiveness of non-destructive
testing (NDT) technologies to determine seal integrity, which in this case, are explored using sonic waves
and tracer gases. Through both small and large scale testing, two NDT methods will be evaluated for their
abilities to determine integrity of the seal: a sonic wave technique to observe a change in wave velocity to
identify faults within the seal material, and a tracer gas As a NDT method, tracer gases may be used as a
potential indicator of a connection between both sides of the seal material through a series of faults and
cracks within the material itself. This paper reviews the history of underground mine seals and discusses
the overall assessment of sonic waves and tracer gases to serve as NDT methods for estimating the
integrity of these seals.
3.2 Introduction
According to the U.S. Energy Information Administrationโs 2011 Annual Energy Review,
approximately 32% of all coal mined in the United States came from an underground coal mine. This
same report also estimated that nearly 58% of all recoverable coal reserves in the United States are
located underground (U.S. Energy Information Administration, 2012). This trend indicates a shift towards
more underground coal mines in the U.S. Why is this fact important? As a larger percentage of coal
reserves begin to move underground, better technologies are going to be required to effectively and safely
produce coal. One of the primary concerns for safety in underground U.S. coal mines is the
implementation of high-strength underground mine seals. Generally speaking, there are two primary roles
for underground mine seals: ventilation and safety. In order to mitigate the ventilation requirements for the active mining portion of an underground coal mine, which grow as the overall size of the active mining area increases, seals are used to separate the active mining areas from previously mined areas.
Inactive areas are sectioned off by constructing seals at the areas of converging airways (McPherson,
1993). According to a 2007 report, there are over 14,000 active mine seals in the U.S., in both room-and-
pillar and longwall coal mines (Zipf, Sapko, & Brune, 2007). Recent regulations have increased the required compressive strength of the material used in underground mine seals (Mine Safety and Health Administration, 2011), making it important for operators to comply with and maintain these standards without disturbing the integrity of the seal. The following paper will detail and comment on two
prospective methods that may be used to evaluate the condition of these seals, without damaging the
structures.
3.3 Background
The history of underground mine seals in the U.S. falls into three distinct eras based on recommended and required strength: 50-psi, 20-psi, and 120-psi. The first
24
|
Virginia Tech
|
regulations concerning underground mine seals in the U.S. appeared in the Mineral Leasing Act of 1920.
As written, the amendment (Sec. 104(a)) requires that all inactive areas of the mine be sealed with
explosion-proof and fire-proof stoppings. These stoppings were required to withstand a pressure of 50-psi
on either side of the stopping. The 50-psi strength standard came from "the general opinion of men experienced in mine explosion investigations" rather than any laboratory tests or reported field
measurements. At the time, seals were typically around two feet thick and made of reinforced concrete anchored into the roof, floor, and ribs of the mine (Rice, Greenwald, Howarth, & Avins, 1931). The 50-psi standard for seal strength remained unchanged until 1969, when a
more detailed definition of "explosion-proof" was necessary as part of the Federal Coal Mine Health and
Safety Act. Testing was conducted by B.W. Mitchell of the U.S. Bureau of Mines at the Pittsburgh Mine and Safety Research Center, who determined that pressures caused by explosions rarely exceed 20-psi on a mine seal. However, a few inaccurate assumptions prevented Mitchell from realistically representing an explosion caused by the mixing and confinement of an explosive atmosphere behind a mine seal (Zipf, Sapko, & Brune, 2007). Testing continued on seal materials, but it was not until 1992 that a
firm set of design criteria was installed in the Code of Federal Regulations. In 1991, the Bureau of Mines examined the designs of both pumpable cementitious foam seals and concrete blocks, both of which met the 20-psi requirements (Greninger, Weiss, Luzik, & Stephan, 1991). Several years later, another
report was published commenting on three additional seal designs that met 20-psi strength requirements
(Stephan & Schultz, 1997).
Despite the increase in design criteria, the 20-psi seal standard remained in place until 2007, when new requirements followed the MINER Act, enacted as a direct result of the Sago and Darby mine incidents (Zipf, Sapko, & Brune, 2007). At both mines, the accumulation of an explosive atmosphere behind newly constructed mine seals, combined with an ignition source, caused explosions within a five-month span. At the Sago Mine, 12 miners were killed when the explosive atmosphere behind the mine seal was ignited by lightning strikes that entered the sealed area through cables, bolts, or the overlying strata (Gates, et al., 2006). At the Darby Mine, five miners were killed when welding near the surface of a recently constructed mine seal ignited the atmosphere behind the seal (Light, et al., 2007). The seals used at both mines were 20-psi concrete-block designs; the explosive forces behind the sealed areas destroyed a total of 13 seals. Because of these incidents in early 2006, new seal strength requirements were developed. The Sago explosion was back-calculated to have generated an explosive force of 93-psi, and the Darby explosion 22-psi.
Because of this, the new requirements for unmonitored mine seals were divided into a three-tiered
approach, as laid out in 30 CFR §75.335(a): 50-psi seals, 120-psi seals, and greater than 120-psi seals
(Kallu, 2009).
The minimum pressure required by the new standards is 50-psi: in monitored sealed areas, where the potentially explosive atmosphere can be observed, seals must be designed to withstand this pressure for 4.0 seconds followed by an instantaneous release. In longwall mines, if the seal is used as a crosscut seal (constructed with the retreating longwall face in the crosscut nearest the gob area in the headgate (Zipf, Sapko, & Brune, 2007)), the 50-psi pressure only needs to be maintained for 0.1 seconds. If the sealed area remains unmonitored, the seal must meet the 120-psi standard: 120-psi of pressure is applied to the seal for 4.0 seconds and then released instantaneously, and to pass, the seal must not fail under those conditions. Again, if the unmonitored seal is also a crosscut seal, the pressure must only be held for 0.1 seconds. There are three circumstances where seals must be designed to strengths greater
than 120-psi: the sealed area is likely to contain a homogenous mixture of methane between 4.5 and
17.0% and oxygen exceeding 17.0%; pressure piling could result in overpressures greater than 120-psi; or
other conditions are encountered, such as the likelihood of a detonation in the area to be sealed (Mine
Safety and Health Administration, 2011). These new seal requirements are not only more sophisticated,
but more stringent than at any other point in the history of coal mining in the U.S. As previously
mentioned, due to these regulations, certain tests need to be conducted to ensure that the active seals in
place are meeting the condition and strength requirements set by law. The concept explored in this
paper is the idea of using non-destructive testing (NDT) methods to evaluate the condition of the seal
without damaging the material. Traditionally, NDT methods consist of liquid penetration, ultrasonics,
magnetics, radiography, etc. (PetroMin Pipeliner, 2011). The small scale experiments explored in this
paper use two unique methods: sonic wave frequencies and tracer gases.
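For reference, the tiered strength requirements summarized in the background above can be expressed as a short decision sketch. This is an illustration only; the function and its boolean inputs are hypothetical simplifications of 30 CFR §75.335(a), not part of any MSHA tool:

```python
def required_seal_rating_psi(monitored,
                             explosive_mix_likely=False,
                             pressure_piling_possible=False,
                             detonation_likely=False):
    """Return the minimum design-overpressure tier for an underground
    coal mine seal, following the three tiers summarized in the text."""
    # Any of these conditions forces a design greater than 120-psi:
    #  - a homogeneous methane mixture of 4.5-17.0% with oxygen over 17.0%
    #  - pressure piling that could produce overpressures above 120-psi
    #  - other conditions, such as a likely detonation in the sealed area
    if explosive_mix_likely or pressure_piling_possible or detonation_likely:
        return ">120"
    # Monitored sealed areas may use the 50-psi minimum;
    # unmonitored areas must meet the 120-psi standard.
    return "50" if monitored else "120"

print(required_seal_rating_psi(monitored=True))   # "50"
print(required_seal_rating_psi(monitored=False))  # "120"
```

The crosscut-seal duration rules (4.0 versus 0.1 seconds of applied pressure) affect the test protocol rather than the tier, so they are omitted from this sketch.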
3.4 Sonic Wave Experiments
The general idea behind the sonic wave method is that, because mass and the ability to prevent the propagation of explosions are major components of seal-strength design, the frequency band of each sample of seal material can indicate the general condition of the material. The sonic wave experiments were conducted at the Rock Mechanics Laboratory of Virginia Tech (VT) on a series of samples prepared by the University of Kentucky (UK). The specimens consisted of three different states applied to two different types of seal material from different manufacturers. Each full, intact sample was approximately 14" x 14" x 12", poured over the summer months of 2013, and given adequate curing time before transportation to VT. For each manufacturer, one specimen was created without any faults, another with a series of void spaces (ping pong balls) placed throughout the sample, and a final one with a metal sheet placed at an angle through the sample to represent the wire mesh or rebar commonly used in seal construction. The sets of samples can be seen below in Table 3-1.
Table 3-1. Sonic wave specimens used in small scale experiments at VT
Sample ID Seal Material Manufacturer Sample Description Sample State
SSA A Intact, full size Control
SSB A Intact, full size Voids
SSC A Intact, full size Plate
SSD B Intact, full size Control
SSE B Intact, full size Voids
SSF B Intact, partial size (60% full) Plate
The test design of the small-scale sonic wave experiments involved a single geophone placed in
the center of each sample. A lubricating gel and electrical tape were used to provide sufficient contact
between the geophone and the surface of the specimen and to keep the geophone in place during
experimentation. An energy source was then applied to the surface of the specimen at eight different
contact points around the geophone. The contact points were evenly spaced around the geophone in a
circle with a 2 inch radius. Each of the different materials required a different energy source. In the case
of the heavier, denser seal material, manufacturer A, a Schmidt Hammer hardness tool was used as the
energy source to propagate energy through the sample. The resulting voltage change detected by the
geophone was monitored and converted into frequency using the Fourier Transform function in National
Instrumentโs LabView software. In order to better resolve the resulting frequencies, the lighter, less-dense
seal material, manufacturer B, was observed with a lower energy source applied to the specimen. For this
material, the Schmidt Hammer was replaced with a rubber hammer dropped from a height of
approximately four inches. For sample SSF, because the sample was only 60% full, the specimen was
turned on its side to allow for the thickness of the sample to be the same as the other samples. However,
by rotating the sample and keeping the radius of contact points the same, the number of contact points
was reduced from eight to five for sample SSF.
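The voltage-to-frequency step described above (performed in LabView with its Fourier Transform function) can be reproduced with a plain FFT. The sketch below uses a synthetic strike signal; the 10 kHz sampling rate and decaying 500 Hz pulse are placeholder assumptions, not the laboratory settings:

```python
import numpy as np

def frequency_band(voltage_trace, sample_rate_hz):
    """Convert a geophone voltage record into a one-sided magnitude
    spectrum, analogous to the Fourier Transform step in LabView."""
    n = len(voltage_trace)
    spectrum = np.abs(np.fft.rfft(voltage_trace)) / n   # magnitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate_hz)  # frequency axis, Hz
    return freqs, spectrum

# Synthetic "strike": a decaying 500 Hz pulse sampled at 10 kHz.
sample_rate = 10_000
t = np.arange(0, 0.1, 1.0 / sample_rate)
trace = np.sin(2 * np.pi * 500 * t) * np.exp(-40 * t)
freqs, spec = frequency_band(trace, sample_rate)
print(freqs[np.argmax(spec)])  # dominant frequency, near 500 Hz
```

In practice, the detected band would shift with the density and condition of the seal material rather than with a known input tone.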
In order to determine the reproducibility of the energy source and monitoring from the geophone,
the energy source was applied to each contact point eight times, for a total of 64 data records for each
specimen. Although the energy sources were relatively consistent, each strike differed enough that a percent-difference analysis between two frequency spectra was not practical as a quantitative comparison tool for the NDT application. To solve this issue, the records were correlated in order to compare two frequency ranges to one another: the more similar the qualitative shape of two records, the higher the resulting correlation value.
Another consideration in the analysis was whether to compile the records into an average frequency range for each contact point and for each specimen. To determine if using averages produced better results, all data records for each manufacturer's specimens were compared to their respective averages using correlation. For all specimens, from both manufacturers, the use of averages increased the correlation values by approximately 5%. Therefore, for the comparison of specimens, specifically within each manufacturer group, the cumulative average frequency band (derived from the average band from each contact point) was used.
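The averaging and correlation steps described above can be sketched numerically. The arrays below are synthetic stand-ins for the 64 records per specimen, not laboratory data:

```python
import numpy as np

def average_spectrum(records):
    """Average the repeated strike spectra recorded for one specimen."""
    return np.mean(np.asarray(records), axis=0)

def spectrum_correlation(spec_a, spec_b):
    """Pearson correlation between two frequency bands; similar
    signal shapes yield values near 1.0."""
    return float(np.corrcoef(spec_a, spec_b)[0, 1])

# Synthetic stand-ins: 64 noisy repeats of two different spectral shapes.
rng = np.random.default_rng(0)
shape_a = np.exp(-np.linspace(0, 5, 200))  # one spectral shape
shape_b = shape_a[::-1]                    # a deliberately different shape
avg_a = average_spectrum([shape_a + 0.02 * rng.standard_normal(200)
                          for _ in range(64)])
avg_b = average_spectrum([shape_b + 0.02 * rng.standard_normal(200)
                          for _ in range(64)])
print(spectrum_correlation(avg_a, avg_a))  # ~1.0, identical bands
print(spectrum_correlation(avg_a, avg_b))  # much lower, different shapes
```

Averaging before correlating suppresses strike-to-strike noise, which is consistent with the roughly 5% improvement in correlation values reported above.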
The results from the analysis described above can be seen in Figures 3-1 and 3-2 below. As these figures show, the distinction between the control specimens, SSA and SSD, and the void specimens, SSB and SSE, was significantly more pronounced in the manufacturer A material than in the manufacturer B material. For manufacturer A, the correlation between SSA and SSC was 0.994, indicating almost no difference between the two samples. This is most likely attributable to the similarity in densities between the two materials, which also explains why the correlations of SSB with SSA and with SSC were both below 0.50. The lower correlations involving the void-space specimen are most likely caused by the density contrast between the air in the voids and the seal material/steel plate.
The comparison among SSD, SSE, and SSF produced different results. The lowest correlation among all of the manufacturer B comparisons, 0.892, involved the control sample SSD, while the void sample SSE and the plate sample SSF showed the highest correlation of the group. This contradicts the findings for the manufacturer A samples. The most likely cause was the overall smaller size of specimen SSF. As the manufacturer B material oxidizes, it becomes more brittle and soft. In full-scale mine seals, this is mitigated by wrapping the seal in a plastic liner, which was not available for the small-scale experiments. Because sample SSF was generally smaller, a larger percentage of its volume had oxidized, making it more similar to the void sample SSE than to the controlled, less oxidized sample SSD. Similarly, this rationale explains why the correlation between SSD and SSE was significantly closer to the correlation between SSD and SSF than were their counterparts from manufacturer A. Another factor that
affected the correlation differences between manufacturers A and B is the natural density of each material. Manufacturer A provided better distinction between the control and voided samples because the density contrast between its material and air is significantly greater than the contrast between air and manufacturer B's material. These density values, along with values for other materials, can be seen below in Table 3-2.
Figure 3-1. Average frequency bands for manufacturer A small-scale samples, and the corresponding correlations between sample sets.
Figure 3-2. Average frequency bands for manufacturer B small-scale samples, and the corresponding correlations between sample sets.
Table 3-2. Density of seal materials and other materials present in small scale sonic wave experiments
Material Density (lb/ft3)
Manufacturer A 298.6
Manufacturer B 55.32
Air 0.0811
Steel 488.0
Water 62.43
Overall, the small-scale single-geophone experiments had some associated error, mostly due to the natural oxidation of manufacturer B's seal material, but they consistently showed a difference in the frequency band between the different specimen types. Of note, the correlation between the control
specimen and the specimen with void spaces was consistently the lowest, and should be the easiest
integrity issue to detect using a single geophone. However, the small-scale experiment did not exceed a
thickness of 14 inches, making it necessary to develop a series of full-scale experiments to test the
effective thickness detectable. Future experiments, discussed later, will be developed to evaluate the
effective thickness and effectiveness of the single geophone frequency method on larger sample sizes.
3.5 Tracer Gas Experiments
The concept of using a tracer gas as an NDT method is that an increase in the flow of gas through the seal might indicate faulting or an increase in pore space in the material, which may become an integrity issue. Tracer gases are non-toxic gases, not naturally occurring, that can be easily detected using trace analysis methods such as gas chromatography (Patterson, 2011). For the tracer gas experiments, all testing and analysis was completed at VT in both the Ventilation Laboratory and the Subsurface Atmosphere Laboratory. This group has recently utilized both sulfur hexafluoride (SF6) and perfluoromethylcyclohexane (PMCH) as tracer gases (Patterson, 2011); therefore, one of the first experiments aimed to determine whether either tracer gas was capable of moving through samples
of the seal material. The original experiment was set to measure the mass change of two samples that
were enclosed and surrounded by each tracer, leaving one exposed surface to the atmosphere. A
cylindrical sample of seal material was surrounded by PVC piping to provide a container around the base
and side of the sample, leaving the top exposed. A sampling port was built into the side of the container
so the tracer gases could be applied inside. A second sampling port was created on the top of the sample
by boring out a shallow, small diameter core in the sample and covering the opening with a silicone
septum and epoxy. Two of these vessels were created: one for SF6 and another for PMCH. An example of one of these vessels can be seen below in Figure 3-3.
Figure 3-3. Tracer gas small scale experiment vessel used to determine which gas will move through the seal
material sample. Photo by author, 2013
Because the original experiment was designed to inject a mass of each tracer into the container and measure the mass change, a large amount of tracer, 0.20 grams, had to be applied to the vessel. The silicone septum was installed to allow for syringe sampling and gas chromatography analysis of the space within the seal. This analysis confirmed the presence of the tracer gas within the seal, indicating that the tracer did permeate through the seal material. However, equipment error made it impossible to accurately and consistently measure the mass of the vessel, so the original mass change experiment had to be abandoned in favor of another analysis. The gas selection experiment was therefore changed into a trace analysis experiment, in which the concentrations of the tracers were measured both in the container, where the tracers were injected, and at the core sampling port of each sample. The difficulty with conducting a trace analysis experiment on these vessels was that, because the experiment was originally designed to measure mass change of the tracer through the seal material, a considerably large amount of tracer had been applied within the vessel. With such a large amount of tracer present (on the scale of grams rather than picograms), a trace analysis could produce faulty results by overloading the column or detector used in gas chromatography-based trace analysis (McNair & Miller, 1997).
Below, the results of the trace analysis can be seen in Figures 3-4 and 3-5. The SF6 tracer samples taken from both the container and the core of the vessel were acquired with a syringe in 2.5-µL volumes and then injected into the gas chromatograph. The PMCH samples, because of the compound's increased response to the electron capture detector (ECD) used in the gas chromatograph, were injected in 1-µL amounts. Figure 3-4, the SF6 analysis, shows a very consistent decrease in the amount of tracer within the core of the sample. It also demonstrates that within a single day of applying the tracer to the outside of the sample, the tracer moved approximately two to three inches through the seal material. Over the next two weeks, the amount continued to decrease, but the presence of the tracer was still easily detectable. Figure 3-5, the PMCH analysis, shows the concentration of the PMCH tracer within the core of the vessel. The PMCH results, while less consistent than the SF6 results, show the same general decrease in the concentration of the tracer within the core and a detectable presence of the tracer at least two weeks after the original application. The most likely cause of variation in the PMCH results is the small sample size. For gas samples, the smallest sample size that produced consistent results is approximately 5-µL. Because the needle of the syringe contains a head space of about 0.5-µL, smaller samples are easily affected by the error this head space introduces. Regardless of the potential error and variation in the samples, both tracer gases showed the potential for movement through the seal material, even when obvious structural defects were not present.
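The head-space effect described above can be put in rough numbers with simple dilution arithmetic. This is a sketch that assumes the ~0.5-µL needle head space simply dilutes the nominal injection volume:

```python
def headspace_relative_error(sample_volume_ul, headspace_ul=0.5):
    """Approximate relative error when a fixed needle head space
    dilutes the nominal syringe sample volume."""
    return headspace_ul / sample_volume_ul

# Relative error for the injection volumes mentioned in the text:
for volume in (1.0, 2.5, 5.0, 10.0):
    print(f"{volume:>4} uL sample -> ~{headspace_relative_error(volume):.0%}")
```

Under this assumption, a 1-µL injection carries roughly a 50% potential error while a 5-µL injection carries roughly 10%, which is consistent with the observation that samples below about 5-µL produced inconsistent results.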
Figure 3-4. Relative concentration of SF6 in the core of the seal material
Figure 3-5. Relative concentration of PMCH in the core of the seal material
The final small-scale tracer gas experiment was conducted after the gas selection experiments to determine whether a significantly reduced amount of tracer would still penetrate the seal material. For this experiment, another cylinder of seal material was drilled to create a hollow core, this time to the center of the sample. Inside this core, a PMCH passive release source (PPRS) developed by researchers at
VT for the passive release of the tracer from a small, easy-to-deploy canister was placed. The PPRS container is a single-piece aluminum shell, completely enclosed with the exception of one end. A small amount of liquid PMCH is injected into the shell, which is then closed with a silicone rubber cap (the design for the PPRS was originally developed by Brookhaven National Laboratory (Dietz, Goodrich, Cote, & Wieser, 1986) and modified at VT by Edmund Jong (Jong, 2014)). As the PMCH vaporizes in the container, the gas saturates the silicone cap and is then released at a consistent, linear rate of approximately 0.0005 grams/day. Once the PPRS is placed in the core of the seal material, the core is capped with a bromobutyl/chlorobutyl rubber septum (one of the only rubbers not permeable to PMCH). This leaves the seal material itself as the only pathway for the PMCH. The seal sample is then enclosed in a PVC container with a sampling port to gather samples of the PMCH concentration that has left the core through the seal material. A trace analysis of the samples was conducted on the GC-ECD as in the other experiments, using a larger sample size, 10-µL, to avoid the error associated with smaller sample sizes. Figure 3-6 shows the container prior to being sealed.
Figure 3-6. Tracer gas small scale experiment vessel used to monitor small release of PMCH through seal
material. Photo by author, 2013
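The steady ~0.0005 g/day release quoted above lends itself to a rough mass-balance estimate of the concentration inside the sealed container. In this sketch, the 2-liter free volume is an assumed placeholder, not the actual vessel dimension:

```python
# Rough mass balance for PMCH accumulating in a closed container.
RELEASE_G_PER_DAY = 0.0005   # PPRS linear release rate (from the text)
MOLAR_MASS_PMCH = 350.05     # g/mol for C7F14
MOLAR_VOLUME_L = 24.45       # L/mol of ideal gas at 25 C and 1 atm

def container_ppm(days, container_volume_l):
    """Volume fraction (ppm) of PMCH in a closed container after
    `days` of release, ignoring losses, sorption, and leakage."""
    moles_released = RELEASE_G_PER_DAY * days / MOLAR_MASS_PMCH
    gas_volume_l = moles_released * MOLAR_VOLUME_L
    return gas_volume_l / container_volume_l * 1e6

# Assumed (hypothetical) 2-liter free volume inside the PVC container:
print(round(container_ppm(10, 2.0), 1))  # ppm after ten days of release
```

Inverting the same relation against the peak concentration reported later (~0.28% after about ten days) would imply a free volume on the order of 0.1 liters, plausible for a container that closely fits the sample cylinder; this is an inference, not a measured value.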
The results from the PPRS experiment can be seen below in Figure 3-7. One difference between this experiment and the other trace analysis experiments is the development of a calibration curve. In order to determine exact concentrations of PMCH, a calibration curve is developed by plotting known concentrations of PMCH against the peak area responses generated by the gas chromatograph. The results in Figure 3-7 were created by taking each peak area response from the gas chromatograph and determining its concentration from the equation derived from the calibration curve in Figure 3-8.
Figure 3-7. Concentration of PMCH released from the PPRS that move through the seal material to occupy
the atmosphere of the vessel
Figure 3-8. Calibration curve used to determine the concentration of PMCH for each peak area count
reported by the GC
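The calibration step amounts to fitting peak area against known concentration and inverting the fit for unknown samples. The sketch below uses made-up calibration points; the real curve in Figure 3-8 was built from the laboratory's own standards:

```python
import numpy as np

# Hypothetical calibration standards: known PMCH concentrations (ppb)
# and the peak-area counts a GC-ECD might report for them.
known_ppb = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])
peak_area = np.array([2.1e4, 1.0e5, 2.0e5, 1.0e6, 2.0e6])

# Fit peak_area = slope * concentration + intercept, then invert it.
slope, intercept = np.polyfit(known_ppb, peak_area, 1)

def area_to_ppb(area):
    """Convert a GC peak-area count to concentration via the fit."""
    return (area - intercept) / slope

print(round(float(area_to_ppb(5.0e5)), 1))  # ~250 ppb
```

Samples whose peak areas fall outside the fitted range, like the post-75-hour points discussed below, can still be converted by extrapolating the same line, at the cost of additional uncertainty.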
As seen in Figure 3-7, there is a strong correlation between the hours of release and the concentration of PMCH within the PVC container. After only four hours, the concentration of PMCH had already reached approximately 30.7 ppb, and it peaked at almost 2,800,000 ppb in the atmosphere within the vessel after about ten days. This demonstrates that even after only a few days and a small amount of PMCH released from inside the seal material, the atmosphere inside the PVC container reached nearly 0.28% pure PMCH. Note that data points after approximately 75 hours fall outside of the calibration curve. However, due to the high RSD seen in Figure 3-8, and the goal of observing a general trend, data points after 75 hours were extrapolated using the equation obtained from the calibration curve. Collectively, both
Chapter 4: Use of Perfluoromethylcyclohexane (PMCH) as a Novel
Non-Destructive Testing (NDT) Method to Evaluate In-Situ
Underground Mine Seals
*Note: Contents from this chapter were submitted (along with related small-scale work) for publication in the International Journal of Mining and Mineral Engineering under the title "Assessing the Use of Perfluoromethylcyclohexane as a Novel Non-Destructive Testing Method to Evaluate In-Situ Underground Mine Seals" by Kyle Brashear.
4.1 Background
Non-destructive testing (NDT) technologies are important evaluation tools used to interpret
integrity issues in structures throughout the world. Structural integrity is difficult to measure in-situ and
can compromise the safety and function of many built structures. A 1990 National Science Foundation
(NSF) project found that 42% of U.S. bridges were inadequate for their current needs, mostly due to the
age and degradation of the concrete used during construction of these bridges. Similar integrity issues
have been reported in numerous structures throughout the U.S. (Chong, Scalzi, & Dillon, 1990). The
purpose of an NDT method is to "detect and locate the anomalies within an optically opaque medium through appropriate imaging techniques." In the case of concrete and similar structures, NDT methods are
often used to examine bodies for voids, cracks, delaminations, and deterioration zones (Buyukozturk,
1998).
In underground coal operations, concrete-like structures are utilized to isolate certain portions of
the mine. These structures, known as seals, are used to minimize the volume of workings requiring
ventilation, reduce maintenance and inspection requirements, and prevent the propagation of explosions from the sealed areas to the working areas. By definition, seals, as opposed to stoppings (another form of underground ventilation and safety control), must be explosion-proof (McPherson M. J., 1993)
and withstand explosive pressures of 50 or 120 psig (Title 30 Code of Federal Regulations Part 75.335-8).
One of the most widely used seal materials employed in underground mines is pumpable cement which
can be mixed on the surface or in the mine, and then pumped into a form to create the seal. These seals can measure up to 30 feet tall and 100 feet wide, with thicknesses ranging from a few feet up to 12 feet (Mine Safety and Health Administration, 2014). While the U.S. Mine Safety and Health Administration (MSHA) has a rigorous application and approval process for the strength and quality of material to be used in underground mine seals (30 CFR § 75.335(b)), there are currently no guidelines on how to monitor the actual seal material once it has been installed. Implementation of NDT methods can allow for evaluation
of seals post installation.
Perfluorocarbon tracer (PFT) studies are experiments conducted typically to quantify and map
ventilation patterns in buildings and structures. These objectives are completed by monitoring the
movement of an anthropogenic inert gas that is introduced into the airflow (Sandberg & Blomqvist,
1985). Since the early 1980s, PFT studies have almost exclusively been used to map the movement of air
in large openings (hallways, ventilation ducts, mine entries, etc.) (D'Ottavio, Senum, & Dietz, 1988).
Currently, little to no work has been done on the movement of PFTs though solid, porous media, such as
concrete and pumpable mine seals. However, PFT studies have been performed to measure the
breakthrough of geologically sequestered CO2 in brine-bearing sandstones in Texas (Phelps, McCallum, Cole, Kharaka, & Hovorka, 2006) and to monitor CO2 leakage in a sequestration and storage project in
the San Juan Basin (Wells, Diehl, Strazisar, Wilson, & Stanko, 2013). These two projects, and many
more, show that it is possible to monitor perfluorocarbons that have moved long distances and through
solid media such as sandstone, shale, and soil.
One such PFT that has been used in recent years as a tracer gas in geological (Phelps, McCallum,
Cole, Kharaka, & Hovorka, 2006) and ventilation (Sandberg & Blomqvist, 1985) based studies is
perfluoromethylcyclohexane (PMCH). PMCH is a non-toxic, inert compound that is liquid at room temperature and does not occur naturally. PMCH has a boiling point of 52°C but is volatile enough to evaporate at standard room temperature and pressure. The vapor pressure of the compound ranges from 3.2 psi to 19.6 psi (22.1 and 135 kPa), depending on the temperature (20°C to 60°C, respectively). PMCH has
a high density in its liquid state, 1.99 g/ml, which is about twice the density of water (Rowlinson &
Thacker, 1957). While PMCH and other PFTs have not been well documented for their use in mine
environments, they have been used in tunneling studies, including the airflow mapping of the New York
City subway system as part of the Subway-Surface Air Flow Exchange (S-SAFE) project (Brookhaven
National Laboratory, 2013). The use of PMCH, specifically, as a mine-related tracer is limited to a single
field study conducted by Jong in 2013 where PMCH was used simultaneously with sulfur hexafluoride to
characterize the ventilation around a longwall panel in a Western underground coal mine (Jong, 2014).
The following paper documents the novel use of PMCH as an NDT method to examine and comment on
the integrity of underground mine seals through two experiments: large-scale pipes containing controlled and faulted samples, and a full-scale free-standing seal.
4.2 Virginia large-scale experiment design
At an underground limestone mine in Virginia, an experiment was conducted to study the travel
distance through different types and conditions of pumpable seal material using PMCH released from a
permeation plug passive release source (PPRS). The experiment was conducted at a working mine site to
simulate conditions similar to an underground coal mine. However, the flow across the exposed face of
the seal material in the Virginia mine was significantly lower, due to the lower ventilation requirements of
underground limestone mines compared to underground coal. The experimental apparatus at the Virginia
mine consisted of four, 12 foot-long (3.6 meters), 8.0 inch (20.3 centimeter) diameter PVC (Polyvinyl
chloride) pipes laid in a dead-end crosscut (previously used for equipment storage) in the main entry of
the mine. This crosscut was located approximately 200 feet from the portal. The 12 foot length of these
pipes were designed to represent an approximate thickness of a typical pumpable mine seal used in
underground coal mines. As with previous small-scale experiments conducted by this author exploring
PFTs and pumpable mine seal material (Brashear, et al., 2014), two mine seal manufacturers were used to
provide a comparison.
Two pipes were filled with material provided from an international manufacturer of seal material,
and two more from a second manufacturer. Due to the small amount of material needed to fill the pipes
(approximately 33.5 cubic feet of material), mixing was done on the surface of the mine (with both hand
mixers and a portable 12 cubic foot capacity cement mixer), then buckets of the mix were poured into
pipes. In order to explore whether faulting or discontinuities in the seal material affected the flow of
PMCH through the material, one pipe from each manufacturer was made with an engineered fault. These
faults were created by filling the pipe halfway and allowing the material to cure while the pipe lay horizontal. Once the bottom half of the pipe had dried and hardened, another batch of mix was prepared and the remaining half of the pipe was filled. After the second curing period, the faulted pipe was ready, with a discontinuity running along the length of
the pipe. A summary table of experimental samples can be seen below in Table 4-1. Figure 4-1 shows
research associates assisting in the construction of the pipe samples.
Table 4-1. Summary of labeling and condition of the large-scale pipe samples
Sample Number Material Used Condition of the Sample
1 Manufacturer A Control
2 Manufacturer A Fault
3 Manufacturer B Control
4 Manufacturer B Fault
Figure 4-1. Filling of one of the pipes used in the large-scale experiment in Virginia. Photo by author, 2013
Upon the final completion and curing of all four pipe samples, the samples were transported to
the mine site and placed in the previously described area of the mine. Once on the mine floor, sampling
ports were drilled into all four pipes at 1.5-foot (0.46-meter) intervals. These ports were drilled with a ¼-inch (0.64 cm) diameter diamond-tipped drill bit. The ports were drilled to the approximate center of the
sample (4 inches, excluding the thickness of the PVC pipe). Seven total ports were drilled into each
sample. In each hole, a 5 to 6 inch long piece of abrasion resistant Tygonยฎ tubing was placed and sealed
in place with a quick drying epoxy resin. The top of the tubing was covered with a Supelco
Thermogreenโข LB-2 septa to separate the atmosphere within the tubing and the seal material from the
mine atmosphere. Upon completion of the sampling ports installation, the last step in construction of the
four large-scale mine seal pipes was to apply the tracer to one end of the sample. This was achieved by
placing three PPRS vessels into one side of the pipe, and then sealing the face with PVC cement and
appropriate PVC cap. The other side of the sample remained open, exposed to the mine environment and
atmosphere. Figure 4-2 shows a diagram of the experimental design and the actual sample in place at the
mine.
Figure 4-2. Experimental layout of the large-scale samples: (A) schematic of the tracer release and sampling
ports, (B) pipes in-situ, and (C) the sampling port. Photo by author, 2013
36
|
Virginia Tech
|
The test sampling procedures consisted of collecting the atmosphere within the cored seal
material and Tygon® tubing. These samples were collected with 10.0 milliliter vacutainers. Vacutainers
typically ship with at least a partial vacuum, and they were further evacuated in the laboratory to ensure
consistency and minimal sample dilution. Each sample was introduced to the container with a double-ended
needle: one end of the needle was inserted through the septum cap on the Tygon® tube, and the prepared
vacutainer was then applied to the other end to pull the atmosphere from within the tube into the
vacutainer. Samples were collected in separate vacutainers from each of the seven ports on each of the
four sample pipes over a series of six trips to the mine. The duration of the experiment, from enclosing the
PPRSs within the samples to the final collection date, was seven weeks. With six sample dates, seven
ports per pipe, and four pipes, a total of 168 vacutainer samples were collected from the large-scale
experiment and returned to the laboratory.
4.3 Virginia large-scale experiment results
Once the large-scale sampling was completed, method development began using a Shimadzu
2010 GC-MS (gas chromatograph-mass spectrometer) to confirm the presence of PMCH within the
samples and to quantify the amount of PMCH within each of the vacutainer samples. The method file
developed and used throughout the large scale experiments is shown below in Table 4-2.
Table 4-2. Summary of GC-MS method file used for large-scale samples
PMCH GC-MS Method File Conditions
Column length 30 m
Inner diameter 0.25 mm
Film thickness 5 µm
Stationary phase HP-PLOT Al2O3
Linear velocity 45 cm/s
Total flow 72.4 mL/min
Column temperature/time 185 °C (isothermal), 3.5 min
Carrier gas Helium
Injector port temperature 150 °C
Ion source temperature 200 °C
Interface temperature 185 °C
Sample volume 50 µL
SIM 350 m/z
Event time 0.15 min
The 168 samples were analyzed under the method displayed in Table 4-2, in triplicate, and the
average value was reported. Analyzing the samples in triplicate and reporting the relative standard
deviation (RSD) allows for monitoring of precision when manually injecting a sample. While some
samples were below detection limits for PMCH, 127 of the 168 samples contained PMCH, with peak
area counts (the electrical response from the GC integrated over time) ranging from 65 to 9,626,870.
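The triplicate-injection bookkeeping described above can be sketched as follows; the peak-area values are hypothetical placeholders, not measurements from the study.

```python
import statistics

def mean_and_rsd(peak_areas):
    """Return the mean and relative standard deviation (%) of replicate injections."""
    m = statistics.mean(peak_areas)
    return m, 100.0 * statistics.stdev(peak_areas) / m

# Hypothetical triplicate peak-area counts for one vacutainer sample
mean_area, rsd_percent = mean_and_rsd([152340, 149870, 155010])
```

A high RSD flags an injection set whose manual-injection precision is suspect and may warrant re-analysis.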
37
|
Virginia Tech
|
Figure 4-3. Mass spectrum result from PMCH standard run using 2010 GC-MS and method file in Table 4-2
After the method file had been properly optimized, the calibration curve and PMCH confirmation
process commenced. The confirmation was achieved by injecting diluted (to approximately 100 ppm in
a hexane base) samples of technical grade PMCH. The resulting mass spectrum, seen in Figure 4-3,
confirmed a large spike at the mass-to-charge ratio (m/z) of 350, which matched the response of the PMCH
samples. This sample of PMCH was also used for the calibration curve developed to determine the
relationship between the known concentration of PMCH in a sample and the response seen from the GC-MS.
Based on the observed range of peak area counts (65 to 9,626,870) a calibration curve was
developed. Data points used for the calibration curve were determined by preparing standards with a
known concentration of PMCH and analyzing these standards in the GC-MS via the same method as the
mine pipe samples. The graph of those data points can be seen below in Figure 4-4. Concentration is
reported in ppb (parts per billion by volume). A correlation coefficient of 0.9972 indicates a strong
relationship between the observed data and the equation seen in Equation 4-1. By using Equation 4-1, the
concentration of PMCH in each of the large-scale samples can be determined.
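As a hedged sketch of how such a calibration works (the concentrations and peak areas below are illustrative stand-ins, not the actual calibration data or Equation 4-1):

```python
import numpy as np

# Hypothetical calibration standards: PMCH concentration (ppb) vs. GC-MS peak area
conc_ppb = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])
peak_area = np.array([950.0, 9600.0, 96500.0, 963000.0, 9626870.0])

# Least-squares line: peak_area ~= slope * concentration + intercept
slope, intercept = np.polyfit(conc_ppb, peak_area, 1)
r = np.corrcoef(conc_ppb, peak_area)[0, 1]   # strength of the linear relationship

def concentration_from_area(area):
    """Invert the calibration line to estimate an unknown sample's concentration."""
    return (area - intercept) / slope
```

Inverting the fitted line is what allows each measured peak area to be converted back into a concentration.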
Figure 4-4. Calibration curve for the large-scale samples
38
|
Virginia Tech
|
4.4 Kentucky full-scale experiment design
The second set of experiments conducted with the PMCH tracer gases took place at the
University of Kentucky underground research area in an underground quarry in Georgetown, KY. On site
two large, free-standing, full-scale mine seals were constructed with Manufacturer B seal material,
following standard seal construction procedures as closely as possible. These seals had dimensions of
approximately 20 feet by 12 feet, with a height of six feet (6.1 meters by 3.7 meters by 1.8 meters). One
of the seals was chosen for the full-scale tracer experiment. In this seal, a ½ inch diameter diamond-tipped
drill bit was used to core holes into the top of the sample. Each of five locations on the top of the sample
contained two cores, one drilled to a depth of three feet (0.9 meters) and an adjacent core drilled to one
and a half feet (0.46 meters). Within these cores, different lengths of Tygon® tubing were installed to the
bottom of the hole and then run to the top of the sample, where they were sealed with a quick-drying foam
epoxy. The tops of these tubes, like those in the large-scale samples, were covered by septa to keep the
atmosphere within the tube separate from the mine atmosphere. At the center of these five locations, a
single core was drilled to a depth of three feet. In this hole, three PPRS were wrapped in coarse dry-wall
tape and then placed at the bottom. The dry-wall tape was used to provide some air pockets and pore
space around the release sources. After the three PPRS were placed into the core, the remaining space
above the sources was backfilled with newly mixed seal material to seal the sources within the
approximate center of the full-scale sample. Figure
4-7 shows the basic layout of the sample, the release point, and the 10 sampling locations.
Figure 4-7. Layout of the Kentucky full-scale experiment seal
After the curing of the backfilled seal material over the PPRS, sampling took place following
procedures similar to those used for the large-scale samples. Empty vacutainers were used to collect the
atmospheric sample from within the Tygon® tubing. Samples were taken 23 days after the PPRSs were
installed and again 82 days after the original sealing of the PPRS. Because of the length of the Tygon®
tubing compared to that used in the large-scale samples, an aspirator was used to move the "pocket" of air
at the bottom of the tubing and in the surrounding material up to the intersection of the septum and the
sampling needle, where it could be collected by the vacutainer. A sample was taken from both the three
foot and one and a half foot depths at each of the five sampling locations (A through E) on each sampling
day. On the second sample collection date, three extra vacutainer samples were collected above the seal
itself to look for trace PMCH in the atmosphere above the seal.
41
|
Virginia Tech
|
Figure 4-8. Model of approximate PMCH concentrations (in peak area counts) found within the full scale seal
(Note: the left side of the model is oriented towards the center of the mine entry)
4.6 Discussion
The large-scale experiments were designed to determine multiple factors: if PMCH tracer gas
(released at approximately 1.5 mg/day) can move through solid seal material over distances similar to the
thickness of in-situ underground seals (12 feet), and whether the tracer movement is accelerated and/or the concentration of the
tracer is higher as it moves through known discontinuities. By the last two sample dates, all four of the
samples had traces of PMCH at 10.5 feet away from the release source. Given enough time, it is possible
that the entire volume of the seal material would become saturated and PMCH would release, fairly
linearly, from the exposed face of the seal material. The second component of the large-scale experiments
indicated a general increase in the concentration of PMCH moving down the length of the faulted samples
for both manufacturers when compared to the control samples. This result indicates that PMCH can be
utilized to demonstrate an increased fracture network or faulting within in-situ mine seals that may
indicate potential integrity issues. The primary concern with the large-scale experiment was that the
observed PMCH may have been traveling along the boundary between the PVC pipe and seal material.
To address that possibility, the second set of experiments, in a full-scale, freestanding seal, was
designed. The results of the second set of experiments show that PMCH can move large distances
(between four and nine feet) in seal material with no boundary interface and minimal pressure differential.
The full-scale experiments also demonstrated that the turbulence of the airflow across the exposed face of
a seal (containing a PMCH release vessel) can potentially affect the direction of flow within the seal. The
air samples from the second full-scale sampling date also support the theory that, given enough time, a
mine seal with a tracer release source within it may eventually reach some equilibrium and begin
releasing fairly constant concentrations into the surrounding atmosphere. In summary, both the large-scale
and full-scale experiments have provided significant data and results confirming the movement of tracers
through well-mixed seal material samples and suggesting that an increase in that movement can potentially
be seen in faulted or damaged material.
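The gradual saturation described above can be illustrated with a one-dimensional diffusion sketch; the diffusivity, grid, and boundary conditions below are assumptions for illustration only, not measured properties of either manufacturer's material.

```python
# 1-D explicit finite-difference diffusion of a tracer through a 12 ft column
n, length_ft = 25, 12.0
dx = length_ft / (n - 1)
d_coeff = 0.05                      # ft^2/day, assumed effective diffusivity
dt = 0.4 * dx * dx / d_coeff        # time step satisfying the stability limit r < 0.5

conc = [0.0] * n
conc[0] = 1.0                       # release source held at the sealed end

for _ in range(int(49 / dt)):       # roughly the seven-week experiment duration
    new = conc[:]
    for i in range(1, n - 1):
        new[i] = conc[i] + d_coeff * dt / (dx * dx) * (
            conc[i - 1] - 2 * conc[i] + conc[i + 1])
    new[0], new[-1] = 1.0, 0.0      # source maintained; open face flushed by mine air
    conc = new
```

The concentration profile decays monotonically away from the source; at steady state the flux out of the open face becomes constant, consistent with the fairly constant release speculated on above.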
4.7 Acknowledgement
This publication was developed under Contract No. 200-2012-52497, awarded by the National
Institute for Occupational Safety and Health (NIOSH). The findings and conclusions in this report are
those of the authors and do not reflect the official policies of the Department of Health and Human
Services; nor does mention of trade names, commercial practices, or organizations imply endorsement by
the US Government.
Chapter 5: Technical Note: Use of the Sonic Wave Impact-Echo
Non-Destructive Testing (NDT) Method on Mine Seals in a
Kentucky Underground Limestone Mine
5.1 Background
In underground mines, specifically underground coal mines, as mining progresses through the coal seam,
mined-out or abandoned areas must be isolated from the working sections of the mine. This practice
minimizes ventilation requirements in active areas of the mine and separates active areas from areas likely
to contain explosive atmospheres. The structures used for this purpose are called seals and are required to
be explosion proof to prevent the propagation of an explosion, if one were to occur in the sealed regions of
the mine, to the working areas. For years, seals were constructed by building two or more barriers,
typically made with cement blocks or timbers that covered the entire cross sectional area of the mine
airway, with five to 10 meters of spacing between the barriers. The space between these barriers was
filled with inert material, and sometimes grouting was placed in the strata around the seal to improve
structural integrity (McPherson, 1993). Currently, one of the most popular seal construction techniques
involves using pumpable cement to fill the area between the two barriers, or filling wooden molds or
flexible bags with the cement. The pumpable cement can be mixed in suitable areas, whether in the main
entries or even from the surface of the mine, and then pumped to the seal site to form a tight seal along
the top, bottom and ribs of the airway (United States of America Patent No. US5401120 A, 1993).
Following the Sago and Darby mine disasters of 2006, NIOSH (National Institute for
Occupational Safety and Health) made formal recommendations to increase the explosive pressure
strength of seals installed in underground coal mines from 20 psig (pounds per square inch gauge) to either
50 psig or 120 psig, depending on whether the atmosphere behind the seal is monitored. The
recommendations made by NIOSH were eventually incorporated into Title 30 Code of Federal
Regulations (30 CFR ยง 75.335-338) as part of the Mine Improvement and New Emergency Response Act
(MINER Act) of 2006. While it is legally required for all manufacturers of seals and seal material to
submit applications to the Mine Safety and Health Administration (MSHA) and pass simulated explosion
testing before becoming approved (Zipf Jr., Sapko, & Brune, 2007), there are as yet no requirements
for testing the integrity and retained strength of these seals after they are installed. The practice of
examining concrete and concrete-like structures without damaging the structures is known as Non-
Destructive Testing (NDT) or Evaluation (NDE) and is used to examine the internal condition of
structures beneath the exterior surface, even when only a single surface is accessible (Krause, et al.,
1997).
There are many different techniques and methods used to produce information about the physical
properties and condition of civil structures, including sonic/ultrasonic methods, electromagnetic methods,
electronic methods, and radiography methods (McCann & Forde, 2001). This paper will specifically
discuss the use of sonic waves, at low frequencies (<400 Hz), to assess the condition of both large-scale
and full-scale mine seals using an impact-echo method. The impact-echo method involves the use of an
impact-based energy source being applied to the surface of the structure in question and recorded by
velocity or vibrating transducers (or other form of frequency measurement) (Davis & Dunn, 1974). The
resulting signal then goes through either a Fourier transform or frequency response function to generate a
frequency range that can be used to observe resonant frequencies of flaws in the structure. The general
design of the NDT equipment and example frequency spectrums can be seen below in Figure 5-1
(McCann & Forde, 2001).
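The transform step described above can be sketched in a few lines; the sampling rate, resonance, and noise level are assumed values for a synthetic geophone trace, not data from this study.

```python
import numpy as np

fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)      # one second of record
rng = np.random.default_rng(0)
# Synthetic geophone trace: a 50 Hz resonance buried in broadband noise
trace = np.sin(2 * np.pi * 50.0 * t) + 0.2 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(trace))          # Fourier transform magnitude
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

band = freqs < 400.0                           # frequency range used in the study
peak_hz = freqs[band][np.argmax(spectrum[band])]
```

The dominant peak recovered from the spectrum identifies the resonant frequency, which is the quantity an impact-echo comparison relies on.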
Figure 5-1. Example from McCann and Forde demonstrating the impact-echo system
5.2 Experimental Design
At an underground quarry in Georgetown, KY, an underground research area has been prepared
for the University of Kentucky (UK) Mining Engineering Department. The mine itself is an underground
mine supplying aggregates and asphalt to the northern Lexington, KY area. The area of the mine used by
researchers from UK contains electrical power, storage units, shock tubes for explosive testing, and other
research equipment useful for mining-related experimentation. The Georgetown mine houses the two
types of seal material samples used for the NDT sonic wave experiments: both large-scale and full-scale
mine seals. The large-scale seals are approximately 64 ft³ cubes of seal material with various features, mixing constraints,
and engineered integrity issues. A total of 14 large-scale seals were designed and poured using two
different seal material manufacturers. Apart from the large-scale seal cubes, two full-scale seals were
created using the material from a single manufacturer. These full-scale samples are free-standing seals
constructed in the rough dimensions of a typical mine seal (20 feet by 6 feet, with a thickness of 12 feet).
Table 5-1 below shows an inventory of the large and full-scale seals housed at the Georgetown mine.
Table 5-1. Seal material samples present at the Georgetown mine
Sample Name   Manufacturer   UCS (psi)   Feature 1       Feature 2                          State          Mix Ratio
Large A       B              842         Thermocouples   Fractures                          Consistent     Improper
Large B       B              858         Thermocouples   Regular                            Inconsistent   Improper
Large C       B              1302        Thermocouples   Fractures                          Desiccated     Correct
Large D       B              4212        Thermocouples   Regular                            Consistent     Improper
Large E       B              942/792     Thermocouples   Voids on rear                      Desiccated     Improper
Large F       B              1439        Thermocouples   Small voids                        Consistent     Improper
Large G       B              703         Thermocouples   Regular                            Consistent     Improper
Large H       A              N/A         Control         Regular                            Consistent     Correct
Large I       A              N/A         N/A             Voids/Styrofoam/trash & debris     Consistent     Correct
Large J       A              N/A         Rebar           Regular                            Consistent     Correct
Large K       A              N/A         Rebar           Voids/Styrofoam/trash & debris     Consistent     Correct
Large L       B              731         Control         Regular                            Consistent     Correct
Large M       B              742         N/A             High density anomaly (limestone)   Consistent     Correct
Large N       B              704         N/A             Small and large voids              Consistent     Correct
Full 1        B              975         Control         Regular                            Consistent     Correct
Full 2        B              726         Control         Regular                            Consistent     Correct
Two sets of experiments were planned for the large-scale and full-scale samples at the mine. The
large-scale experiments attempted to scale-up similar experiments conducted by Virginia Tech (VT) using
small samples (1 ft³ cubes) of varying materials and integrity issues (voids, faults, etc.). These small-scale
experiments utilized a single geophone and energy source in an attempt to use the sonic waves produced
by the source, and the frequencies recorded by the geophone, as an impact-echo NDT method. To prepare
for the large-scale experiments, the tops of the large samples A-N (excluding the dissected samples of C
and E) were ground down with a cement grinder to provide a smooth surface on which to place the
geophone. The grinding covered a small area, roughly five square inches, and removed only a small volume
from the top of the sample. While it is possible internal damage was created in some of the samples, no
cracks or faulting could be seen on the surface of the samples. It can be assumed that if any damage was
caused by the grinder, it was applied to the samples consistently, and the samples can still be used for
comparison. Figure 5-2 shows the grinding process atop one of the large scale samples. The geophone
was applied to the surface of the seal material samples, using a silicone gel to provide a good contact and
interface between the geophone and seal material. For each of the 12 samples for the large-scale
experiments, a base reading was taken with the geophone in place to determine the background voltages
and frequencies detected by the geophone in the mine environment. Then an energy source was applied
approximately 10 inches away from the geophone, using the same distance scale used in the small scale
experiments. Five total energy impacts were applied and recorded for each sample.
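The per-sample reduction implied above (five impact records condensed to one frequency range, with a base reading available) might look like the following; the spectra are random placeholders, and the background-subtraction step is one plausible use of the base reading rather than a procedure stated in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
bins = 40                                      # hypothetical 0-400 Hz in 10 Hz bins
background = 0.1 * rng.random(bins)            # base reading with no impact applied
impacts = background + rng.random((5, bins))   # five recorded impact spectra

# Average the five impact records and remove the background reading
avg_response = impacts.mean(axis=0) - background
```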
Figure 5-2. Grinding smooth surface for geophone placement on top of the large scale samples. Photo by
author, 2014
The full scale experimentation conducted in the Georgetown mine involved only one of the full-
scale samples (Full 1). The goal of the full-scale experiment was to determine the ideal distance for which
the geophone and energy source should be placed from one another. To do this, the geophone was placed
on one of the vertical sides of the seal, approximately three feet from the edge. This provided
approximately 17 feet on the other side of the geophone to provide the energy source at every 1.5 foot
interval. First, the background voltage and frequencies from the geophone were recorded to get a baseline.
Then, starting at 1.5 feet away from the geophone, an energy source was applied, and the response was
recorded by the geophone. This process was repeated three times at each location, ranging from 1.5 feet
from the geophone to 15 feet away. Data from a total of 10 locations were collected to compare to one
another and to the background baseline collected earlier. Figure 5-3 shows researchers participating in the full-scale
experiments.
Figure 5-3. Researchers from VT and UK holding the geophone in place and applying the energy source on
the full scale seal. Photo by author, 2014
5.3 Large Scale Results
The large-scale experiments were divided into groups based on the seal material manufacturer.
The five frequency ranges collected at each sample were then averaged and graphed to see if samples
with different integrity issues can be distinguished from control samples by reviewing the frequency
ranges. First, looking at the correctly mixed Manufacturer B material, samples L, M, and N were
compared. The resulting comparison can be seen in Figure 5-4. It can be determined that with the correct
mixes, the sample containing large void spaces (deflated dodgeballs) can be distinguished from the other
two samples with a high peak around 50 Hz, most likely due to the movement of the energy through air
pockets in the void spaces. Secondly, sample L was compared with two regular, improperly mixed
samples with different UCS values of 4212 and 703 psi for samples D and G, respectively. This
comparison can be seen in Figure 5-5. Even with the difference in UCS values, the shape of the improper
mixes was nearly identical to the correct mix, although the amplitude of the frequency range was smaller.
The final Manufacturer B comparison looked at potential differences between fractures in sample A, an
inconsistent but regular seal in sample B, and sample F, which contained some small voids. Figure 5-6
shows the frequency range of these samples, and it is noticeable that the fractured sample behaved very
similarly to the background noise, and produced no distinguishable peaks, while the small voids and
regular, improperly mixed sample are fairly similar. Out of the Manufacturer B comparison, only the
large-voided sample that was correctly mixed was noticeably different when compared to the other samples.
Figure 5-6. Fractured samples compared to small voids and a regular sample of Manufacturer B material, all
improperly mixed
The second seal material compared was manufactured by Manufacturer A. Figure 5-7 shows
the comparison of regular seal samples with voided samples, with rebar then introduced into both sample
types to create a total of four samples (H, I, J, and K). The frequency ranges for each of these samples can
be seen in Figure 5-7. Even with the introduction of rebar to the samples (similar to the rebar that will be
installed in the ribs of an in-situ underground mine seal to help hold it in place) there was minimal
difference between the samples. It is worth noting that during the data collection of all the samples, roof
bolting was taking place in a cross-cut near the samples. Specifically, the Manufacturer A samples were
located closest to the bolting. This activity produced substantial background noise and vibrations in the
samples, which seemed to reduce the effects of the energy source being applied to the samples.
Figure 5-7. Manufacturer A material frequency ranges for regular samples, voided samples, and rebarred
samples
5.4 Full Scale Results
The full-scale experiment collected three sets of frequency ranges for each distance interval. The
average of these ranges at each interval was then graphed to determine how the energy source and
geophone response is affected by the distance between the two in a seal with a thickness of 12 feet. Each
frequency range recorded by the geophones was averaged, and the resulting average frequency was used
to represent the response for each energy source and each of the 10 locations. The average frequency was
graphed between 0 and 500 Hz for each of the 10 distances. Figure 5-8 shows these average frequencies
for distances of 1.5, 3, 4.5, 6, and 7.5 feet, while Figure 5-9 shows the remaining distances (9, 10.5, 12,
13.5, and 15 feet). While Figures 5-8 and 5-9 show the frequency range of interest for the full-scale
experiment, it is also necessary to comment on the variance of amplitudes at different distances. This
variance is most likely due to frequency changes through the seal material as the distance between the
energy source and geophone increases, and to the random error introduced by having to hold the geophone
in place by hand. To simulate the single available face of seals found in functioning mines, the geophone was held
along the vertical face. To reduce some of the error found in the full-scale experiments it may become
necessary to develop a device to hold the geophone in place without drilling into or anchoring it into the
seal face. Another option might be to install a geophone in the face of the seal during construction and
allow curing to occur and hold the geophone in place.
Figure 5-8. Frequency ranges for the full scale sample showing distances of 1.5 to 7.5 feet
Figure 5-9. Frequency ranges for the full scale sample showing distances of 9 to 15 feet
In addition to Figures 5-8 and 5-9, the sum of the amplitudes of each distance's frequency range
was graphed against the distance. It was expected that, as the distance increased, the sum of the amplitudes
would decrease. This expectation was due to the commonly observed logarithmic decay of elastic energy
with distance, similar to the Richter scale (Boore, 1988). The resulting amplitude versus distance curve
can be seen in Figure 5-10, along with the expected shape of the curve (general shape, not actual values).
While the amplitude of the energy source was expected to decrease significantly as the experiment
progressed and the distance between the source and geophone increased, the experiment resulted in a
fairly consistent energy distribution. While this was not expected, it potentially indicates that the geophone
response is fairly independent of the distance to the energy source, and that a fairly reliable frequency
range can be created at any location along the seal's face.
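The expected decay curve in Figure 5-10 can be sketched with a simple geometric-spreading model; the 1/distance form, the constants, and the near-constant "observed" values are illustrative assumptions for shape only, not fitted or measured data.

```python
# Illustrative fall-off of summed amplitude with source-geophone distance
def expected_amplitude(distance_ft, a0=100.0):
    """Amplitude decaying roughly as 1/distance (geometric spreading)."""
    return a0 / distance_ft

distances = [1.5 * i for i in range(1, 11)]           # 1.5 ft through 15 ft
expected = [expected_amplitude(d) for d in distances]
observed = [62.0] * 10    # hypothetical near-constant response actually measured
```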
Figure 5-10. Expected and observed response curves of the amplitude of the frequency ranges versus the
distance between the geophone and energy source
5.5 Discussion
Both the full-scale and large-scale experiments conducted at the Georgetown mine fell short of some of
the expected results based on previous small-scale experiments led by researchers at Virginia Tech. Some
of the shortcomings could have been related to the roof bolting and maintenance procedures taking place
around the samples on the day data collection occurred. This background noise made it difficult for the
energy source to provide a unique and distinguishable presence in the frequency response range. For eight
of the 12 samples, the correlation between the background reading originally taken with no induced
energy source and the average frequency range for the sample was greater than 0.95, indicating nearly
identical shapes in the frequency ranges and making distinguishing
features difficult to identify. Specifically, of the Manufacturer A samples, all four samples corresponded
to the background frequency with correlation values greater than 0.85. It is believed that the noise
commonly associated with mine activity was responsible for some of these high correlation values and
some of the inconclusive findings. This interference is likely to occur at nearly every underground mine
making the single sonic wave NDT method difficult to use outside of laboratory conditions experienced
by the small-scale samples. The full-scale experiment did show that the energy traveling through the
full-scale seal seemed to be independent of the distance between the energy source and the geophone. It is
also worth considering expanding the observed frequency range of the geophone to look at frequencies
higher than 400 Hz. If a resolution can be reached where frequency ranges can be used to distinguish
differences between seal material types and conditions, the distance between the energy source
and geophone will have little effect. Overall, the large-scale experiments did not reproduce some of the
results seen in smaller samples in laboratory settings, but the full-scale experiment did show that a
constant and large amount of energy can be applied to full-scale mine seals and measured by a single
geophone.
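The correlation screen described in the discussion can be sketched as follows; the spectra are synthetic stand-ins for the background and averaged sample readings, and the 0.95 threshold is taken from the text above.

```python
import numpy as np

rng = np.random.default_rng(2)
background = rng.random(40)                            # base reading, no energy source
sample_avg = 1.1 * background + 0.01 * rng.random(40)  # nearly identical shape

r = float(np.corrcoef(background, sample_avg)[0, 1])
distinguishable = r < 0.95   # flag only samples whose response departs from background
```

A correlation above the threshold means the impact response is essentially indistinguishable from ambient mine noise, which is the failure mode reported for eight of the 12 samples.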
Chapter 6: Technical Note: Modeling the Movement of
Perfluoromethylcyclohexane (PMCH) through Underground Mine
Seal Material with PFC3D and Avizo®
6.1 Abstract
With the MINER Act requirement of seal strength in underground coal mines of 50 psig in
monitored and 120 psig in unmonitored areas, a series of Non-Destructive Testing (NDT) methods are
being developed to assess the integrity of these seals. One of the NDT methods being researched, and the
purpose of this paper, is the use of tracer gases to monitor the integrity of the in-situ mine seals used in
underground coal mines. There have been some doubts raised about the ability of these high-density tracer
gases to move large distances through mine seals. Initially, tracer gases were introduced with the
assumption that they would travel through discontinuities, and their presence on the active side of the seal
would indicate a compromise in the integrity of the seal. However, multi-scale testing indicated that two
different seal materials are actually permeable to tracer gases. The following paper briefly describes the
modeling of tracer particles (perfluoromethylcyclohexane) using PFC3D (discrete element modeling)
software and Avizo® 3-D visualization software to observe the interaction of the tracer gas particles and
the seal material and assess the permeability of intact seals to tracer gases.
6.2 Introduction
In the United States, there are over 14,000 mine seals installed in active U.S. coal mines, with
more being installed each day (Zipf, Sapko, & Brune, 2007). Due to the increasing number of seals and
the recently strengthened design criterion of the seals, it has become increasingly important to actively
monitor the condition of the seal itself, as the seals are expected to last the duration of mine life. The idea
of looking at the structural integrity of an object without damaging or affecting the integrity of the object
is a process known as non-destructive testing (NDT). One of the first uses of NDT testing technology
applied to concrete-based structures was in 1960 and involved the use of beta emissions and measurement
of the background scattering through concrete structures (United States of America Patent No. 2939012,
1960). In the years since, there has been a wide array of other technologies used, including visual
examination, liquid penetration, magnetic, radiography, ultrasonics, and eddy currents (AP Energy
Business Publications). Tracer gases have been almost exclusively used in surveying the ventilation of
buildings, mines, and other airways (D'Ottavio, Senum, & Dietz, 1988). Tracer gases are not naturally
occurring, non-toxic, and capable of being detected at small amounts (parts per billion). While tracer
gases have not specifically been used for NDT studies, certain tracers (mostly perfluorocarbons) have
been used to monitor CO2 leakage through brine-bearing sandstone formations (Phelps, McCallum, Cole,
Kharaka, & Hovorka, 2006). This idea of using tracers to monitor gas movement through solid media is
the premise for a NIOSH (National Institute for Occupational Safety and Health) sponsored research
project currently being performed at Virginia Tech (VT). Research at VT has included both small and
large scale testing with promising results. The results from the modeling software and simulations
described in the paper will be used to assist assessing the feasibility of using a tracer gas (specifically
perfluoromethylcyclohexane (PMCH) seen in Figure 6-1) as a novel, tracer-based NDT method.
The PFC3D (particle flow code) model documented in the paper involves the use of discrete
element modeling (DEM). The theoretical foundation of discrete element modeling of particles traces back to Isaac Newton's laws of motion (1687), but the method itself was established in 1971 by P.A. Cundall, who used it to model and study the mechanics of jointed rock masses (Zhao, Nezami, & Hashash, 2006). Cundall first developed a numerical code, written in Fortran, to model the deformation of two-dimensional blocks. He then created multiple Fortran versions of the code, including SDEM and CRACK, to model the fracturing of blocks
53
|
Virginia Tech
|
under loading (Jing & Stephansson, 2007). Itasca established its FLAC and PFC software in 1986 and
1994, respectively (History of Itasca). PFC models the dynamic behavior of particles and the interactions between them. Using PFC, groups of particles can be modeled as a uniform body: particles are grouped and assigned properties including density, porosity, shear strength, compressive strength, contact or parallel bond strength, and frictional characteristics. The movement of particles is then modeled through the application of gravity or of a defined force acting in a specified field or direction.
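The cycle that DEM codes such as PFC repeat at each time step (resolve contact forces between overlapping particles, then integrate Newton's second law for each particle) can be illustrated with a minimal one-dimensional sketch. The stiffness, mass, and time-step values in the usage line are illustrative assumptions, not PFC defaults.

```python
def dem_step(positions, velocities, radii, mass, k_n, g, dt):
    """One explicit DEM time step for a 1-D column of spherical particles.

    positions/velocities/radii are lists ordered bottom-to-top; a linear
    spring of stiffness k_n acts wherever neighboring spheres overlap.
    """
    n = len(positions)
    forces = [mass * g for _ in range(n)]          # body force (gravity)
    for i in range(n - 1):                         # contact detection
        overlap = (radii[i] + radii[i + 1]) - (positions[i + 1] - positions[i])
        if overlap > 0:                            # spheres touch: repulsive normal force
            f = k_n * overlap
            forces[i] -= f                         # lower particle pushed down
            forces[i + 1] += f                     # upper particle pushed up
    for i in range(n):                             # integrate Newton's second law
        velocities[i] += forces[i] / mass * dt
        positions[i] += velocities[i] * dt
    return positions, velocities

# Two overlapping spheres, gravity off: they repel symmetrically.
p, v = dem_step([0.0, 1.0], [0.0, 0.0], [0.6, 0.6], 1.0, 10.0, 0.0, 0.1)
```

The same cycle, extended to three dimensions with bond and friction terms, is what PFC iterates hundreds of thousands of times in the simulations described below.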
Traditional microscopy, whether optical or electron, produces two-dimensional images of a specimen's surface features or of thin slices of the sample. X-ray tomography (micro-CT) can produce three-dimensional images of structures by collecting a series of two-dimensional X-ray images: the specimen is rotated to capture many X-ray projections around a single slice, from which a three-dimensional image can be generated. The images recorded by the X-ray beam and detector measure the amount of X-ray absorption and scatter within the slice of the specimen. From this absorption and scatter, inferences can be made about the material beneath the surface of the structure, such as density, material type, and size (SkyScan N.V., 2005).
Avizo® is a three-dimensional visualization software package developed by FEI. Specific to this project, Avizo® Fire was used for the seal material model. Avizo® Fire allows users to perform tomographic analysis, crystallography, microstructure evolution, core sample analysis, and many other analyses. The primary function of Avizo® Fire is to create three-dimensional models from images, but it can also extract variable features, explore data in three dimensions, and measure and quantify over 70 different measurables (volumes, areas, aspect ratios, etc.). Additionally, it can simulate naturally occurring properties such as permeability, electrical resistivity, and thermal conductivity (Visualization Sciences Group, 2014). In one example of Avizo® being used in a similar manner, the software was used to quantify and map pore pathways in Opalinus clay: a small group of field samples was analyzed with Avizo®, and simulations were used to determine average pore size and permeability pathways within the samples. These pathways were then mapped to better quantify the microstructure and transport properties of these clays (Keller, Holzer, Wepf, & Gasser, 2010).
Through a series of tests involving small- and large-scale experimental apparatuses, it has been consistently observed that the concrete-like seal material is permeable to PMCH. PFC3D and Avizo® are used here to help verify these physical observations. Using the DEM tool PFC3D and the three-dimensional visualization tool Avizo®, PMCH particles can be applied to a block of seal material. The PFC3D model can show the movement of the particles and how the seal material affects them. The Avizo® visualization software is used to create a three-dimensional representation of a sample of seal material from micro-CT (computed tomography) scans and to simulate the movement of PMCH through the sample to determine permeability values.
Figure 6-1. Three-dimensional geometry of PMCH (C7F14) (grey = carbon and green = fluorine)
6.3 PFC3D Simulation Procedure for PMCH Movement within the Mine Seal
To model the seepage or displacement of tracer gas particles through a mine seal, a three-
dimensional mine seal was created using PFC3D. The boundary walls of the seal were first developed and
assigned specified normal and shear stiffness values (from Itasca's block cave demo model) and a coefficient of friction of 1.0. For this simulation, the seal was modeled as a cube, with length measurements recorded in nanometers. The seal was then populated with spherical particles
representing the grains of the concrete seal material. A porosity component was formulated and added to the modeling code to ensure that the total volume of the spherical concrete grains within the boundary box reproduced the input porosity of the material. Using the total enclosed volume within the seal
boundaries and the porosity of the seal material, a radius was then assigned to each sphere to simulate
apparent void space within the concrete material. Parallel bond normal and shear strength values were
then assigned to the concrete grains to simulate the cementation or lack of rotation between grains.
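The radius assignment described above can be sketched as follows: given the enclosed volume and a target porosity, a uniform grain radius follows from dividing the required solid volume among the spheres. This is a simplified sketch assuming equal-sized spheres; the particle count in the usage line is an arbitrary illustrative value, not the count used in the thesis model.

```python
import math

def uniform_grain_radius(box_volume, porosity, n_particles):
    """Radius for n equal spheres whose combined volume leaves the target porosity."""
    solid_volume = (1.0 - porosity) * box_volume   # volume the grains must occupy
    per_sphere = solid_volume / n_particles        # solid volume per grain
    return (3.0 * per_sphere / (4.0 * math.pi)) ** (1.0 / 3.0)

# 200 nm x 150 nm x 150 nm block at the measured 14.75% porosity,
# filled with a hypothetical 1,000 grains (radius returned in nm).
r = uniform_grain_radius(200 * 150 * 150, 0.1475, 1000)
```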
The porosity of the seal material used for this analysis was determined from laboratory testing.
An effective porosity test was completed in laboratory settings by measuring the dimensions of two
cylindrical samples of seal material and then weighing the mass of the samples. The samples were then
submerged in a water bath within a container, and a vacuum was applied to that container. This allowed the air within the material to be pulled out, and the sample itself to become fully saturated. The samples
were left in the container under vacuum for approximately 24 hours. The saturated samples were then
removed from the container and excess water was lightly removed from the surface. The saturated seal
material samples were then weighed to determine the saturated weight. Based on the differences in mass between the saturated and unsaturated samples, and the assumption that the density of the water used was 1.00 gram per cm3, the average porosity of the two samples was determined to be 14.75%, which was the value used for the PFC3D model. The determined density (ρ) of the material was 4.8 g/cm3. Figure 6-2 shows the
vacuum container, water bath, and seal material used for the experiment. The tiny air bubbles seen in the
figure are the air pockets from the pore space in the material being evacuated by the vacuum.
Figure 6-2. Seal material samples during effective porosity test. Photo by author, 2013
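The porosity calculation in this test reduces to dividing the volume of absorbed water by the bulk sample volume. The sketch below uses hypothetical masses and a hypothetical bulk volume, chosen only to illustrate the arithmetic (the thesis does not report the raw weights):

```python
def effective_porosity(dry_mass_g, saturated_mass_g, bulk_volume_cm3,
                       water_density_g_cm3=1.00):
    """Effective porosity: pore volume filled by water / bulk sample volume."""
    pore_volume_cm3 = (saturated_mass_g - dry_mass_g) / water_density_g_cm3
    return pore_volume_cm3 / bulk_volume_cm3

# A hypothetical 100 cm^3 sample at the reported density of 4.8 g/cm^3 that
# absorbs 14.75 g of water reproduces the reported 14.75% porosity.
phi = effective_porosity(480.0, 494.75, 100.0)
```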
Following the construction of the mine seal model, a tracer gas holding tank was designed above the seal face. The walls of the tank were designed to be frictionless to allow for the ease of gas particle
movement into the seal material. PMCH particles with radius equal to 0.307 nanometers were then added
to the tracer holding tank. The radii of PMCH gas particles were determined through WebMo chemical
structure modeling. This software allows the user to draw compounds, input an energy model to minimize
the strain between the atoms of the compound, and measure the geometry (distance and angles) of the
optimized particle. Using WebMo, the maximum diameter of the PMCH structure was calculated to be
6.14 angstroms or 0.614 nanometers (WebMo). Gravity was then applied to the system to allow for the
settling of the seal material and the bottom wall of the tracer holding tank was removed to allow for the
transfer of gas particles from the tank to the seal. The vapor pressure for the PMCH in the model was taken from the F2 chemical data sheet for technical-grade PMCH (14 kPa) (F2 Chemicals Limited, 2011). The model was set up in nanometer base units, so a downward z-directional converted force of -10,327.5 N/nm3 was applied to the centroid of each PMCH gas particle. Due to the application of force, gas
particles within the tracer holding tank flow downward into the seal material. The simulation was run for 600,000 cycles, or time steps, to allow for complete modeling of PMCH movement through the concrete seal. Because z-direction velocity and position histories were written into the initial modeling code, the displacement and velocity of the gas particles could be analyzed over time as the particles migrated through the seal material. These values were recorded from the initialization of gravity on the system. Using these data, it was then possible to plot the histories of the displacement and velocity of three tracer particles as they traveled through the underground seal material to better understand their flow paths.
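The thesis applies the vapor pressure as a converted downward force in nanometer units. A generic way to express a pressure as a per-particle force is to multiply it by the particle's projected cross-sectional area; the sketch below shows only that generic conversion, under that assumption, and does not reproduce the converted value reported above.

```python
import math

def pressure_force_on_sphere(pressure_pa, radius_m):
    """Force of a uniform pressure acting on a sphere's projected area."""
    return pressure_pa * math.pi * radius_m ** 2

# 14 kPa PMCH vapor pressure acting on a 0.307 nm radius particle,
# giving a force on the order of 1e-15 N.
force_n = pressure_force_on_sphere(14e3, 0.307e-9)
```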
6.4 PFC3D Results
The resulting PFC3D model described above can be seen below in Figure 6-3 with its basic
geometries. The large red spheres represent the small 200 nm x 150 nm x 150 nm block of seal material.
Because of the vast difference in particle sizes between the seal material and the PMCH, a small block of seal material had to be chosen, given the computational time required for a larger model and the limit on the number of spherical particles allowed by the PFC3D demo version. The blue specks above the seal are particles assigned PMCH vapor properties.
Figure 6-3. Geometry of the PFC3D model from front (left) and angled (right) views
After gravity was removed from the model (gravity is applied to allow settling of the seal material), the vapor pressure was applied to force the particles through the seal material. This force was designed to verify the movement of the tracer particles in a sample with no additional pressure-differential forces, relying simply on the natural vapor pressure of the compound to propel the particles through the seal material. The model records the histories of three PMCH particles at various starting heights above the seal material. Because the PMCH particles are randomly arranged in the space above the seal, the model chooses the particle closest to each of the three chosen elevation points. These elevations were 10, 30, and 50 nm above the surface of the seal. The model was then run for 250,000 cycles over slightly more than 38 minutes (each second of run time containing approximately 109 cycles). Below in Figure 6-4, the positions of the three particles can be seen. Interestingly, only two of the particles reached the bottom of the seal material, indicating travel through the entire length of material, while one particle (red) reached an equilibrium point, remaining stuck about one-third of the way through the sample.
Figure 6-5. Graph of the velocity of the PMCH particles from all heights above the surface of the seal
material, 10 nm (blue), 30 nm (red), and 50 nm (green)
Looking at the velocities in Figure 6-5, the maximum absolute velocity occurs prior to each particle reaching the surface of the seal material. Specifically, for the sample particle that starts at the 50 nm point (green in Figures 6-4 and 6-5) and travels through the material, the movement of the particle is drastically impeded by the seal material. This can be seen in detail below in Figure 6-6. Figure
6-6 also shows the raw PFC3D histories.
Figure 6-6. Detailed movement of a PMCH particle through the seal material
6.5 Avizoยฎ Simulation Procedure for PMCH Movement within the Mine Seal
In order to supply the Avizo® 3-D visualization software with the model needed to simulate and measure the permeability of PMCH through the seal material, it was necessary to conduct a micro-CT, or X-ray microtomography, scan of some of the seal material. A small (approximately 0.9 cubic inches)
amount of seal material sample was mixed and then placed in a plastic test tube vial to allow curing to
take place. After a week of curing, the plastic around the sample was broken, and the seal material sample
was taken to the micro-CT scanner, a SkyScan 1172 desktop model. Figure 6-7, below, shows the seal
material sample sitting in the scanner.
Figure 6-7. Seal sample in the SkyScan 1172. Photo by author, 2014
The scan of the seal material took a total of 3 hours and 28 minutes and produced a total of 861 images, or slices. Each slice contains a two-dimensional cross-sectional image, approximately 16.7 mm by 16.7 mm. The SkyScan 1172 model uses a 1.3 Mp camera with a resolution of 3 microns. The source runs at 100 kV. A 360-degree rotation was completed around the sample, at a rate of 0.75 degrees per step, for a total of 480 steps per slice. Each slice was reconstructed in the SkyScan software and then exported to Avizo® in the form of 16-bit TIF files. Some of the resulting images from the scanner can be seen in Figure 6-8.
Figure 6-8. TIF images collected from the SkyScan 1172. The diameter of samples shown is 1.44 cm.
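The scan geometry above is internally consistent: a full rotation divided by the angular step size yields the stated number of projection steps, as the quick check below shows.

```python
rotation_deg = 360.0                 # full rotation around the sample
step_deg = 0.75                      # angular increment per projection step
steps_per_slice = rotation_deg / step_deg  # 360 / 0.75 = 480 steps
```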
The resulting image files were then imported into the Avizo® software and used to create a three-dimensional model representing the seal material. The 861 16-bit TIF files were imported into Avizo®, and from there a three-dimensional model was constructed (Figure 6-9). To test for permeability, a few filters had to be applied to the model. First, a median filter was used to delineate the boundary conditions for the model. The Avizo® "Despeckle" command was then used to remove some of the naturally occurring and artificially created randomness from the pixels. Based on the color of the pixels throughout the model, different materials were labeled or assigned to the pixels. The three major materials labeled in the model were air or pore space, solid material, and high-density (lighter) material. The pore network could then be rendered and a skeleton network created. The permeability test in Avizo® was then applied, with the inlet and outlet pressures and the density and viscosity of the fluid as inputs (the tracer gas is assumed to behave as a liquid in the model).
Figure 6-9. Avizo® model constructed from micro-CT image files. http://www.vsg3d.com/avizo/fire. Used under Fair Use, 2014
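A permeability test of this kind rests on Darcy's law, which relates volumetric flow rate to the pressure drop across the sample. The sketch below shows that relationship and the unit conversion to millidarcy; the flow rate in the usage line is a hypothetical value, not Avizo® output, and the dynamic viscosity is derived from the kinematic viscosity and vapor density quoted in the next section.

```python
MD_PER_M2 = 1.0 / 9.869e-16   # 1 millidarcy = 9.869e-16 m^2

def darcy_permeability_md(flow_m3_s, mu_pa_s, length_m, area_m2, dp_pa):
    """Permeability from Darcy's law, Q = k * A * dP / (mu * L), in millidarcy."""
    k_m2 = flow_m3_s * mu_pa_s * length_m / (area_m2 * dp_pa)
    return k_m2 * MD_PER_M2

# Dynamic viscosity mu = rho * nu from the PMCH data-sheet values:
rho = 0.0543 * 16.018         # lb/ft^3 -> kg/m^3 (about 0.87 kg/m^3)
nu = 0.873e-6                 # 0.873 mm^2/s -> m^2/s
mu = rho * nu                 # about 7.6e-7 Pa*s

# Hypothetical flow rate through a 1 cm cube under a 6 kPa pressure drop.
k_md = darcy_permeability_md(1e-6, mu, 0.01, 1e-4, 6000.0)
```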
6.6 Avizo® Results
The simple Avizo® model was used to determine whether permeability between the PMCH tracer and the seal material would be possible. Unlike the PFC3D model, the Avizo® model used solid elements, rather than particles, to simulate the seal. Avizo® created a subsample from the micro-CT scan files and tested it for permeability, assuming that the sample and PMCH are isothermal, that the sample has a single porosity and permeability, and that flow is governed by Darcy's Law. In addition, the movement of PMCH through the Avizo® model is based on kinematic viscosity rather than on vapor pressure as a force and on particle size. Instead, the Avizo® model uses the vapor pressure as a pressure differential between the pressures applied to the inlet and outlet sides of the seal sample. The other four sides of the Avizo® seal model are bounded by walls that provide no pressure but are impermeable to flow. A vapor density (0.0543 lb/ft3 at standard pressure and temperature) and kinematic viscosity (0.873 mm2/s) were found in a chemical data sheet for technical-grade PMCH (F2 Chemicals Limited, 2011). The resulting permeability values from the simulation can be seen below in Table 6-1. Figure 6-10 shows some of the permeability simulation output generated by the Avizo® software.
Table 6-1. Avizo® permeability simulation inputs and results

Simulation Number | Inlet Pressure (Pa) | Outlet Pressure (Pa) | Kinematic Viscosity (mm2/s) | Permeability (millidarcy)
1 | 124,000 | 118,000 | 0.873 | 18.4
2 | 150,000 | 130,000 | 0.873 | 4.10
Figure 6-10. Permeability test in the Avizo® model. http://www.vsg3d.com/avizo/fire. Used under Fair Use, 2014
6.7 Conclusions
Both models described in this paper revealed a few notable mechanics regarding the movement of PMCH compounds through models made to represent a block of MSHA-approved seal material. First, in the PFC3D model, not all of the particles made it through the length of the seal material. Some particles were trapped in the void space that naturally occurs in the seal material, or the PMCH became adsorbed, or bonded, to the seal material particles (based on the Itasca shear and normal bond values for rock in the Block Cave demo). Second, in the PFC3D model, the movement of the PMCH continued through the seal material at a variable, but slower, rate than in the open atmosphere. The Avizo® model demonstrated that applying a pressure differential across two sides of the seal material can produce movement of PMCH through the seal. This movement can be quantified as permeability, and the model produced values similar to those seen in sandstones (5 to 15 mD) (Dutton & Willis, 1998). Both of these models, while rudimentary in some respects, confirm some of the field work done by Virginia Tech and support the theory that intact seals are permeable to tracer gases. The implication is that changes in the flow and concentration of these tracer gases can be used to detect structural concerns within in-situ mine seals.
6.8 Acknowledgements
The models in the paper were completed with the help of Drew Hobert, student and research
associate with Virginia Tech, and Joseph Amnate, student and research associate with the Virginia Center
for Coal and Energy Research. The material used for both the porosity test and the CT scan was provided by Mike Fabio of Strata Material Worldwide. Without their help, this paper would not have been possible.
Chapter 7: Summary and Conclusions
This thesis describes the need for underground mine seals in coal mines and the need to assess the integrity of these structures using non-destructive testing (NDT) methods, and it provides an assessment of two methods that can potentially be used to identify issues within the seal: sonic waves in an impact-echo method, and perfluorinated tracer gases moving through the seal material. The two methods include one proven method (sonic waves) and one novel method (tracer gases) that has not previously been used as an NDT tool for cement-like structures.
For the sonic wave experiments, the small-scale laboratory experiments described in Chapter 3
outlined how a single geophone can be used to identify structural differences in small blocks of seal
material designed to have engineered issues such as void spaces and fracture planes by applying a single
impact-based energy source to the surface of the sample. By looking at correlation differences between
the frequency ranges, it was possible to identify differences in the condition of the samples. The issue
with the single geophone impact-echo NDT method became apparent in the large and full-scale
experiments detailed in Chapter 5. When transitioning the experimental design from the laboratory setting
at Virginia Tech to the large samples in Kentucky, the background noise present in the underground mine
environment became too large to distinguish the energy pulse from the impact source. The movement of
equipment, movement of the rock mass as mining progresses, and structural maintenance of the mine
(roof bolting, scaling, blasting, etc.) are all potential sources of background noise that are almost
unavoidable when working in underground mines. One of the good technical notes taken from the full-
scale experiment is that it appears as if the distance between of the geophone and energy source is fairly
independent of the amplitude of energy propagating through the seal. Overall, the trouble with
background noise in mining environments appears to be the largest factor preventing successful use of the
sonic wave impact-echo NDT method.
For the novel NDT method used for experimentation and confirmation in Chapters 3, 4, and 6, there are several important findings from the small-, large-, and full-scale experiments and modeling. The small-scale experiments confirmed that perfluoromethylcyclohexane (PMCH) would be an appropriate tracer gas for the experiments, compared with a traditional tracer, sulfur hexafluoride. The small-scale experiments of Chapter 3 also confirmed that, at that scale, it is possible for the heavy PMCH molecule to move through solid seal material without interaction with, or escape paths along, any boundaries. This was confirmed through two separate computer modeling examples in Chapter 6; while no quantified values were taken from the models, the simulations did confirm that it is possible for PMCH compounds to move through solid seal material structures. Perhaps the most significant chapter of this thesis, Chapter 4, provided both large- and full-scale experiments supporting the claim that PMCH can successfully be used as a tracer gas to indicate an increase in the discontinuities or fracture network found within the seal material. The full-scale Kentucky experiments also helped confirm the movement of the compound through solid seal material, as well as the potential installation of a PMCH Passive Release Source (PPRS) within the seal material itself to release the tracer at the center of the seal. Overall, the tracer NDT method experiments, although novel, proved to be a valid potential option for monitoring the integrity of these mine seals in terms of fractures and discontinuities forming within the seal over the life of the structure. It remains to be seen how the presence of void space or an improper density mixture of the material may affect movement, as well as how samples should be collected and monitored from in-situ underground seals, but the tracer method does show significant promise and support for further research. Since intact seals are permeable to tracer gas movement, the presence of the tracer alone does not indicate a compromised seal, complicating the use of tracer gases as an NDT method.
Chapter 8: Future Works
While the findings documented in this project do not lend conclusive support to some of the NDT methods, there are additional experiments and projects that might further the application of both methods. Both the sonic wave and tracer gas non-destructive testing (NDT) method experiments have indicated potential success in evaluating the condition and integrity of underground mine seals; additional testing may help prove that the sonic wave method is feasible in mine environments, and the tracer gas method may be ready to install in an in-situ mine seal.
The background noise present in the Kentucky underground mine prohibited the advancement of
the sonic wave impact-echo method, although the method has been documented in civil studies. Some of
the possible improvements or modifications to the experiment include replacing the geophone with a sensitive MEMS (microelectromechanical system) accelerometer, adjusting the energy source to cover a range of frequencies, and exploring additional NDT methods for evaluating the seals.
In terms of continuing the tracer gas NDT method research, the next step seems to be the design of a sampling system, whether that be Tygon® tubing, solid-phase microextraction (SPME) fibers, or vacutainer samples taken from the face of the seal. Introducing a series of sampling tubes into the seal might pose an integrity issue for the seal, and seeing as maintaining the required overpressure strength is one of the main concerns for these seals, it may be necessary to test the integrity and failure criteria of a full-scale seal equipped with sampling tubes and ports. Further study of the permeability of seals to tracers could allow for assessment of integrity based on the rate of tracer movement or concentration, as long as the atmospheric conditions at the seal are well understood.
One of the interesting findings from this project that may become groundwork for additional
research is the movement of high molecular weight perfluoromethylcyclohexane through the seal
material. It is generally assumed that mine seals prevent the area of high methane from migrating to the
working sections of the mine. Methane (CH4) is a much lighter, smaller molecule than PMCH (C7F14) and
could possibly travel through the seal material. It is possible that future investigation should explore
whether or not pockets of methane found at the face of mine seals are products of leaking around the
boundaries (as always assumed). It has long been observed that mine seals โbreatheโ with pressure
changes and assumed that the exchange occurs at the boundary of the seal and strata. While it is likely
that this is the primary mechanism, seal permeability may also be a contributor.
Passive Seismic Tomography and Seismicity Hazard Analysis in Deep
Underground Mines
Xu Ma
ABSTRACT
Seismic tomography is a promising tool to help understand and evaluate the stability of a rock
mass in mining excavations. It is well known that closing of cracks under compressive pressures
tends to increase the effective elastic moduli of rocks. Tomography can map stress transfer and
redistribution and further forecast rock burst potential and other seismic hazards, which are
influenced by mining. Recorded by seismic networks in multiple underground mines, the arrival
times of seismic waves and locations of seismic events are used as sources of tomographic imaging
surveys. An initial velocity model is established according to the properties of a rock mass, then
velocity structure is reconstructed by velocity inversion to reflect the anomalies of the rock mass.
Mining-induced seismicity and double-difference tomographic images of rock mass in mining
areas are coupled to show how stress changes with microseismic activities. In particular, comparisons between velocity structures from different periods (before and after a rock burst) are
performed to analyze effects of a rock burst on stress distribution. Tomographic results show that
high velocity anomalies form in the vicinity of a rock burst before the occurrence, and velocity
subsequently experiences a significant drop after the occurrence of a rock burst. In addition,
regression analysis of travel time and distance indicates that the average velocity of all the
monitored regions appears to increase before rock bursts and decrease after them. A reasonable
explanation is that rock bursts tend to be triggered in highly stressed rock masses. After the energy
release of rock bursts, stress relief is expected to exhibit within rock mass. Average velocity
significantly decreases because of stress relief and as a result of fractures in the rock mass that are
generated by shaking-induced damage from nearby rock burst zones. The mining-induced
microseismic rate is positively correlated with stress level. The fact that highly concentrated
seismicity is more likely to be located in margins between high-velocity and low-velocity regions
demonstrates that high seismic rates tend to coincide with high stress in rock masses. Statistical analyses were performed on the aftershock sequence in order to generate an aftershock decay
model to detect potential hazards and evaluate stability of aftershock activities.
1 Introduction
To detect seismic hazards and improve safety in mines, a great number of seismic networks have been developed and deployed as excavation depths increase. Considerable data sets are
generated because of the good coverage of seismic monitoring systems in mines. It is an important
issue to continue improving the extraction and analysis of the information from mining-induced
seismicity and to comprehensively assess the seismic hazards. Professional evaluation of the seismic risks of mines requires the incorporation of geophysical analysis methods and tools, building on the fact that mining-induced seismic monitoring systems are designed based on experience with seismic networks for natural earthquakes. Besides statistically-based studies
on mining-induced seismicity, passive seismic tomography provides the engineer with an
understanding of how stress changes temporally and spatially. Such information is fundamental
for detecting potential rock seismic hazards and improving safety in mines.
The time-dependent analysis of the passive seismic tomography of the mines reveals changes
in the relative state of stress in a rock mass. Passive seismic tomography is an imaging technique
used to evaluate the velocity distribution of the P wave which crosses through the rock mass. P
wave velocity varies with the different bulk moduli of a rock mass. For the same area, bulk
modulus change is mainly impacted by stress state depending on both the geological structure and
mining activities including excavating, blasting, and rock supports. Thus, velocity change in the
rock mass reflects the stress distribution and redistribution.
Widespread application of microseismic monitoring systems guarantees the availability of
tomography. Microseismic monitoring is a form of remote sensing that can discriminate between geological structures, including shear zones and fault zones, as well as other discontinuities.
Generally, the initial goals of microseismic monitoring are to identify the geological structure and
potentially unstable rock mass by locating the seismic events. However, microseismic monitoring
is less comprehensive, since event-oriented analysis is restricted to the areas related to the discrete events. Passive seismic tomography provides a way to further utilize the information
of microseismic activities to yield velocity distribution around the mine area. According to the
velocity distribution displayed on tomography, the relative state of stress would be evaluated based
on how the tomographic images change. Associated with the in situ stress state measured by
2 Literature Review
2.1 Seismicity and Rock burst Potential
An earthquake is a result of sudden shear planar failures in rock material (Aki and Richards
2002). Earthquake ruptures start at the hypocenter, the origin point propagating seismic waves on
the fault-plane leading to the dislocation of two sides of the fault surface. The rupture of
earthquakes excites seismic waves, including primary, compressional (P-waves) and secondary,
shear waves (S-waves). Earthquakes are subdivided into shallow, intermediate, and deep
earthquakes according to the depth of the hypocenter. Evidence shows that shallow events exhibit strong clustering in time (Kagan 1994).
Focal mechanism analysis is a standard method in seismology. It indicates the sense of first
motion of P-waves, solid for compressional motion and open for dilatational (Kagan 1994). The
mainshock is the large event that dominates the sequence of events. The sequences of small
earthquakes following large earthquakes are called aftershocks. Also, earthquakes are preceded by
foreshocks, which consist of weaker events. Foreshocks are usually much less frequent than
aftershocks. Stress transfer is the main cause of aftershocks in the adjacent regions of a mainshock.
Aftershocks relax the excess stresses, which are caused by a mainshock (Shcherbakov, Turcotte et
al. 2005).
Theoretical understanding of the earthquake process and interrelations between earthquakes
are analyzed by using statistical methods. Standard analytical methods are not applicable because
of the intrinsic randomness of seismicity. Kagan (1994) proposed that statistical models be used to explore seismic properties, including stochastic, multidimensional tensor-valued, and point-process models. Statistical methods are necessary for seismicity research because of the randomness of
seismicity. Case studies of earthquakes can only provide understandings for partial earthquake
sequences. Seismicity can be observed and displayed with long-term and long-range correlations
by using statistical analysis (Kagan 1994).
One of the methods widely used in seismicity studies is detailed analysis of seismograms, which can reveal the complex internal temporal and spatial structure of earthquake sources. The geometry of
faults could be observed by geological investigations. Preseismic and postseismic deformations
are usually involved with statistical analysis combined with seismic events if abundant and
uniform coverage of seismic events are available (Kagan 1994). The most prominent feature of
3
|
Virginia Tech
|
earthquake catalogs is the spatial and temporal grouping of earthquakes. Earthquake groups are
divided into foreshock, mainshock, and aftershock sequences. Utsu (2002) provides examples to
demonstrate distribution of various earthquake sequences. Also, he mentioned that the aftershock
epicenters scatter more widely than mainshock epicenters (Utsu and Seki 1954). Utsu and Seki
(1955) introduced the relation between earthquake magnitude and the aftershock area as:

log S = M_m - 3.9 ( 2-1 )

where S is the aftershock area in km^2 for a mainshock of magnitude M_m.
Aftershocks are located near the margins of the fault areas that have experienced dislocation
and displacement. However, as mentioned before, surface rupture length and magnitude are not
consistently related for a given earthquake (Utsu 2002). Some studies have obtained empirical
relationships between magnitude and fault rupture length and show that the coefficient on
magnitude ranges from 0.78 to 1.22 (Wells and Coppersmith 1994). For smaller earthquakes, a
study by Iio (1986) shows:

log L = 0.43 M_m - 1.7 ( 2-2 )

where L is the length of the aftershock area for a mainshock of magnitude M_m.
Aftershocks occur because the mainshock increases regional stresses, which trigger
the subsequent events (Shcherbakov, Turcotte et al. 2005). According to the dislocation
model, regional stresses arise from strain accumulation and release. It is widely accepted that
aftershocks result from stress transferred to the vicinity of the hypocenter during the occurrence of
the mainshock, and that they are associated with the relaxation of the stresses the mainshock induced.
Regions around the rupture of a mainshock experience stress increases that exceed
the yield stress. The aftershocks following the mainshock then relax the stress through
microcracking until the stress falls back to the yield level. This transient process is interpreted by using
damage mechanics combined with Omori's law (Shcherbakov, 2005).
Aftershocks satisfy the following laws (Shcherbakov, Turcotte et al. 2005):
1. The Gutenberg-Richter frequency-magnitude relation can be applied to the
aftershock sequence.
2. According to Båth's law, the magnitude difference between a mainshock and its largest
aftershock remains roughly constant. This constancy reflects the scale invariance of
aftershocks with respect to mainshocks. That is:

Δm = m_ms - m_as^max ( 2-3 )

where m_ms is the magnitude of the mainshock, m_as^max is the magnitude of the largest aftershock,
and Δm is approximately equal to 1.2.
3. The temporal decay rate of occurrence of the aftershock sequence follows the modified
Omoriโs law.
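The first two laws above support quick back-of-the-envelope estimates. As a hedged sketch (the function below is illustrative, not taken from the cited sources), Båth's law gives the expected magnitude of the largest aftershock:

```python
def largest_aftershock_magnitude(m_mainshock, delta_m=1.2):
    """Bath's law (Eq. 2-3): the largest aftershock is roughly delta_m
    magnitude units below the mainshock, independent of mainshock size."""
    return m_mainshock - delta_m

# For a magnitude 7.0 mainshock, the largest aftershock is expected near 5.8.
```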
As the edge or face of the stope is advanced, the rock in the vicinity of the excavations is
stressed approaching or even beyond its elastic limit, leading to inelastic deformation of the rock
mass (McGarr 1971). It is well known that mine excavations influence the virgin state of stress
and even lead to stress concentrations in the rocks. Seismic pulses are triggered when the stress in
the rock approaches or exceeds its strength. The occurrence of seismicity is
associated with mine excavations and can be forecast by using statistical analyses (Cook,
1976).
Rockbursts are mining-induced seismic events (1.5 < M < 4.5) that involve violent and dynamic
failure of the rock mass. Mining excavation in a highly stressed rock mass redistributes stress
and can result in rockbursts (Young, Maxwell et al. 1992). Occurrences of seismicity are shown
to follow the progress of active mining; however, the seismicity does not directly reflect the
major faults in the mined area (Kusznir, Ashwin et al. 1980). The analysis of tomography and
the behavior of seismic events as excavation progresses could improve our knowledge of the influence of
seismic events behavior with mining excavation could improve our knowledge of the influence of
stress change on the velocity structure of a rock mass. Evidence supports the view that the
seismicity and rock deformation are shown to be closely related in space and in time (McGarr and
Green 1975). Tilt is the first derivative of the subsidence profile and can show the change in
subsidence between two points. It is found that the rate of occurrence of tremors is closely
correlated with the rate of tilting in a deep level gold mine (McGarr and Green 1975). As a result
of compressional stresses with mining progresses, rock masses above and below the stopes
converge (McGarr 1976).
Examination of pre-burst and post-burst seismic data suggests that a significant increase in
microseismic events is followed by a dramatic decrease before a rock burst (Brady and Leighton
1977). To predict impending failure, the seismicity must satisfy certain conditions. A low-modulus
inclusion zone is termed a primary inclusion zone. Seismicity is required to increase
as the primary inclusion develops. Also, the primary inclusion zone should exhibit
anomalously long ruptures along with seismic events (Brady and Leighton 1977). The seismic
tomography study developed by Young and Maxwell implies that low-velocity regions are of low
rock burst potential and high-velocity areas are of high rock burst potential. Tomographic imaging
and induced seismicity were integrated to characterize highly stressed rock mass in rock burst
prone mines. Locations of seismicity (foreshock and aftershock of rock burst) are superimposed
on tomographic images to indicate the correlations between velocity anomalies and seismic events
by analyzing the velocity structure and seismicity. It is indicated that destressing of the sill
pillar as a result of a rock burst is represented by low-velocity anomalies.
In contrast, mining-induced tremors and rock bursts displayed a high-velocity region (Young
and Maxwell 1992). Considerable evidence from seismic data indicates that mine tremors are
physically similar to earthquakes and that some of them obey the same mechanisms. It is noted by
McGarr that many mine tremors and earthquakes are the result of shear failure along a plane
(McGarr 1971). Consequently, mine tremors and earthquakes obey the same Gutenberg-Richter
magnitude-frequency relation (McGarr 1971). Stress drops are observed in natural earthquakes
and the magnitude of stress change ranges from 0.3 to 50 MPa (Ishida, Kanagawa et al. 2010). The
earthquake stress drop is regarded as a fraction of the regional differential stress because evident
stress disturbance, such as pronounced rotation of principal stress axes, is associated with
coseismic deformation (Hardebeck and Hauksson 2001). Underground observations indicate that
an instantaneous convergence is associated with burst fractures. The instantaneous convergence is
followed by a period of rapid convergence, which gradually reduces to the normal rate (Hodgson
1967, McGarr 1971). It is proposed that the rate of stope convergence decreases prior to a large
seismic event, which promotes the dislocation of rock in stopes (Hodgson 1967).
The frequency of event occurrence and the magnitude of events conform to a log-linear relation
based on a power-law distribution. A drop in the β value can indicate impending major events and
an abrupt increase of effective stress prior to their occurrence (Young, Maxwell et al. 1992).
Richardson and Jordan suggested that induced seismicity in mining can be divided into two kinds
of events (Richardson and Jordan 2002). The first kind occurs at high frequencies and usually
swarms within small spatial and temporal ranges; these events are triggered by the development of
fractures that rupture the rock mass when dynamic stresses change due to blasting and stress
perturbations in excavations. The second kind spreads out through the active mining areas. These
events mostly locate in shear zones, including faults, and their analysis can be extrapolated
from tectonic earthquakes. They are caused by friction-dominated ruptures due to
the removal of ore over a long period of time rather than by blasting activity, and their radiated
energy is lower than that of the first kind of events.
Seismic velocity increases in the focal volume of the primary inclusion zone as strain energy
accumulates. Because of the high seismicity rates and dense instrumentation in southern
California, there are tens of thousands of well-recorded earthquakes that can be used to infer
stress orientation, and most seismically active regions can be studied with a spatial resolution of
5-20 km.
2.2 Seismic Energy
The energy release in earthquakes is one of the most fundamental subjects to help understand
earthquakes (Kanamori 1977). Energy released by an earthquake is principally in the form of
strain energy, which includes radiated seismic energy, fracture energy, and frictional heat released
on the moving surfaces during the dislocation process. Fractures are generated when the local
stress exceeds the local strength (Kranz, 1983). Fracture energy accounts for less than 0.1% of
the total energy released; the elastic energy and heat of friction are the principal forms of energy
released (Krinitzsky 1993). The assumption that the rock mass surrounding a stope is elastic is applied
to estimate the energy released in the area of a stope. It is proposed that rock bursts are associated
with high energy-release rates in strong, brittle rocks (Hodgson and Joughin 1966). McGarr
suggests that elastic theory fails to provide an accurate understanding of the mechanism of mine
tremors because occurrences of seismicity are associated with inelastic phenomena (McGarr
1971). Evidence from underground observations shows that seismicity in mining results from
violent shear failures across planes. Fractures in planar zones are correlated with seismic events
radiating seismic energy (McGarr 1971). By comparing densities of aftershocks in an earthquake
sequence, Watanabe proposed that a significantly diminishing number of aftershocks is identified
following a major event (Watanabe 1989). A quantitative assessment of the energy release of a
rock structure under load is useful for understanding the behavior of a rock mass. Acoustic
emission (AE) and ultrasonic wave propagation are fundamental forms of energy change. Acoustic
emissions in a rock mass mainly originate from newly formed cracks and the sudden
development of damaged surfaces in preexisting cracks (Falls and Young 1998). It is also
mentioned that microseismic events are triggered where the differential stress (σ_1 - σ_3), which is
defined relative to the in situ crack initiation stress σ_ci, is highly concentrated (Falls and Young 1998).
Damage mechanics is used to analyze the acoustic energy radiated with time (Shcherbakov, 2003).
Shcherbakov also reported good power-law scalings of radiated energy with time t and with pressure.
Xie proposed that the distribution of microseismic events can be characterized by fractals (Xie
and Pariseau 1993). The fractal dimension decreases with the development of fractures in a rock
mass, so the occurrence of a rock burst in mines is more likely to be indicated by a low fractal
dimension. The fractal dimension D of the group of fractures within the rock mass is exponentially
correlated with the strain energy release E:

D = C_1 e^(-C_2 E) ( 2-4 )

where C_1 and C_2 are constants.
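Equation 2-4 can be sketched as a short function. This is illustrative only; C_1 and C_2 must be fitted to field data and are free parameters here:

```python
import math

def fractal_dimension(energy_release, c1, c2):
    """Xie and Pariseau (Eq. 2-4): D = C1 * exp(-C2 * E).
    D decreases as the strain energy release E grows, so a low D
    flags elevated rock burst potential."""
    return c1 * math.exp(-c2 * energy_release)
```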
The relation between the magnitude of earthquakes and seismic wave energy was established by
Gutenberg and Richter. Seismic energy is computed from the magnitude of a seismic event:

log E = 9.9 + 1.9 M_L - 0.024 M_L^2 ( 2-5 )

where E is the seismic energy and M_L is the magnitude of the seismic event (Richter 1958).
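Equation 2-5 can be evaluated directly. The sketch below assumes E is expressed in ergs, as in Richter's original formulation:

```python
def seismic_energy_richter(m_l):
    """Richter (1958), Eq. 2-5: log10(E) = 9.9 + 1.9*M_L - 0.024*M_L**2,
    with E assumed to be in ergs."""
    return 10.0 ** (9.9 + 1.9 * m_l - 0.024 * m_l ** 2)

# For M_L = 5.0: log10(E) = 9.9 + 9.5 - 0.6 = 18.8.
```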
The spatial rate of energy release (ERR) is the most suitable index to assess the relative
difficulty of mining a particular region as excavation proceeds (Heunis and Msaimm 1980).
An important characteristic of ERR is that rock bursts are strongly associated with it, whereas
rock falls have no influence on it (Heunis and Msaimm 1980). It is clear from considerable
research that most mining-induced seismic events are related to geological discontinuities in the
surrounding rock masses. Planes of weakness caused by the discontinuities can slip even if the
mining-induced stress is lower than the threshold stress for fracture development in the rock mass.
Also, production blasts appear to trigger and raise the risk of rock bursts in the surrounding rock
mass.
Rock bursts in deep underground mines of hard rock include events triggered by high stress
and those associated with large-scale slip on faults. Events induced by high stress are evaluated to
be of lower magnitude and less damaging than those associated with large-scale slip on faults
(Swanson 1992). The Coulomb failure criterion is employed to assess potential fault slip in the
static stress field. The Brune model (Brune 1970) is used to estimate the constant stress drop and
the correlation between rupture size and magnitude of seismic events. Seismic energy E is
estimated by using Gutenberg and Richter's relation
๐ฟ๐๐๐ธ = 11.8+1.5๐ ( 2-6 )
๐ฟ
where M is nearly the same as magnitude M (Gutenberg and Richter 2010). This relation is
L
considered to present a reasonably accurate estimation of seismic wave energy for earthquakes in
most cases. However, it is necessary to calibrate before applying it to great earthquakes, especially
for the great earthquakes with rupture length over 100 km (Kanamori 1977). It is not accurate to
estimate the energy of great earthquakes based on magnitude, due to the weak correlation between
magnitude and rupture length in the developing process of such earthquakes. The main interpretation
is that the magnitude M is determined over a short period, within which the rupture process of an
earthquake is not complete (Kanamori 1977). The change of strain energy is difficult to
estimate because the amount of strain energy before and after an earthquake is unknown. Also,
strain energy change cannot be calculated because the absolute stress level involved in faulting is
unknown. Kanamori (1977) proposed that the minimum strain energy drop can be computed by
estimating the energy of seismic waves. Based on the static source parameters of earthquakes
registered by the seismic monitoring system, accurate estimates of earthquake energy are obtained
(Kanamori 1977). The seismic moment M_o is an essential parameter that indexes the deformation
at the hypocenter. It is observed that the seismic moment M_o and the fault area S follow a linear
relationship on logarithmic scales (Kanamori 1977). The relation is suggested as

M_o = 1.23 x 10^22 S^(3/2) dyn cm ( 2-7 )

where the seismic moment M_o is defined by µDS, µ is the rigidity, and D is the average offset on
the fault.
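Equations 2-6 and 2-7 can be combined into a small sketch. Units follow the sources (E in ergs, M_o in dyn·cm); treating the fault area S as given in km² is an assumption about the cited scaling, and the function names are illustrative:

```python
def seismic_energy_gr(m):
    """Gutenberg-Richter (Eq. 2-6): log10(E) = 11.8 + 1.5*M, E in ergs."""
    return 10.0 ** (11.8 + 1.5 * m)

def seismic_moment_from_area(fault_area_km2):
    """Kanamori (1977), Eq. 2-7: M_o = 1.23e22 * S**1.5 in dyn*cm,
    assuming the fault area S is expressed in km^2."""
    return 1.23e22 * fault_area_km2 ** 1.5
```

A one-unit increase in magnitude raises the energy of Eq. 2-6 by a factor of 10^1.5, roughly 32.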
The correlation between the location of seismic activity and induced stresses is investigated.
Combining microseismic monitoring with the numerical modeling is an efficient way to detect
areas subject to rock burst hazards (Abdul-Wahed, Al Heib et al. 2006). Doublets are applied to
remove locating errors arising from the velocity model. The seismic energy of a seismic event is
recorded by the receivers as flow energy, which is carried by both P and S waves and is computed
from

F = ρα ∫_t1^t2 V^2(t) dt ( 2-8 )

where V(t) is the ground velocity (m/s), ρ is the density of the medium (kg/m^3), α is the velocity of
the P wave (m/s) or S wave (m/s), and t2 - t1 is the signal duration. Based on the spherical wave
hypothesis, the elastic energy release of the source i is

E_i = 4πR^2 F_i ( 2-9 )

where R is the distance (m) between a source and a receiver. The total seismic energy of a
seismic event is

E_t = (1/n) Σ_{i=1}^{n} 4πR_i^2 F_i ( 2-10 )

where n is the number of receivers recording the signal.
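Equations 2-8 through 2-10 can be sketched numerically. The trapezoidal integration and the function names below are illustrative assumptions, not taken from the cited monitoring system:

```python
import math

def flow_energy(velocity_samples, dt, density, wave_speed):
    """Eq. 2-8: F = rho * alpha * integral of V(t)^2 dt over the record,
    approximated here with the trapezoidal rule on evenly spaced samples."""
    integral = sum(0.5 * (v0 * v0 + v1 * v1) * dt
                   for v0, v1 in zip(velocity_samples, velocity_samples[1:]))
    return density * wave_speed * integral

def source_energy(flow, distance):
    """Eq. 2-9: E_i = 4*pi*R**2 * F_i under the spherical-wave hypothesis."""
    return 4.0 * math.pi * distance ** 2 * flow

def total_event_energy(flows, distances):
    """Eq. 2-10: E_t is the average of the per-receiver source energies."""
    n = len(flows)
    return sum(source_energy(f, r) for f, r in zip(flows, distances)) / n
```

Averaging over receivers in Eq. 2-10 reduces the bias any single station's site response would introduce.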
The most direct characteristic of stress change is the abrupt release of stress during earthquakes
(Simpson 1986). The regions of low strength and geological discontinuities such as faults are more
likely to experience the release of stress. Stress keeps accumulating until a threshold level is
achieved to cause damage, which yields a drop in stress during an earthquake. In natural
seismology, the drop of stress is followed by a recovery of stress until the next earthquake occurs.
Some examples show that the repeat time between earthquakes on the same fault ranges from tens to
hundreds of years. However, rock bursts occur more often in mining because the rock mass loses
equilibrium due to mining excavation. Both the seismic moment and the stress drop can be
interpreted in terms of the strain energy release in earthquakes and can be used to estimate it.
The elastic strain energy can be divided as
๐ = ๐ป +๐ธ ( 2-11 )
where ๐ป = ๐ ๐ท๐ is the frictional loss and E is the wave energy. ๐ is the frictional stress during
๐ ๐
faulting. The difference of the elastic strain energy W before and after an earthquake based on
the elastic stress relaxation model is
๐ = ๐DS ( 2-12 )
where ๐ is the average stress during the form of fault, D is the average offset on the fault, and S
is the area of the fault. According to the fact that stress drop ฮ๐ is approximately equal to 2ฯ if
the stress drop is complete, there is
1
๐ = ๐ = โ๐DS = (โ๐/2๐)๐ ( 2-13 )
๐ ๐
2
where ๐ = (3โ6) ร 1011 for crust-upper mantle conditions. W is the minimum strain energy
o
drop in an earthquake.
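Equation 2-13 gives the minimum strain energy drop from observable source parameters. A minimal sketch, assuming CGS units (dyn/cm² for the stress drop and rigidity, dyn·cm for the moment):

```python
def min_strain_energy_drop(stress_drop, seismic_moment, rigidity=3.0e11):
    """Eq. 2-13: W_0 = (delta_sigma / (2*mu)) * M_o.
    The default rigidity mu = 3e11 dyn/cm^2 is the low end of the
    (3-6)e11 range quoted for crust-upper mantle conditions."""
    return stress_drop / (2.0 * rigidity) * seismic_moment
```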
It is demonstrated that different methods yield different estimates of the energy released in
earthquakes (Kanamori 1977). The energy release proposed by Kanamori (1977) is higher than that
calculated by the Gutenberg-Richter method for great earthquakes. However, the trends of energy
change and the number of earthquakes display a good correlation.
The degree of aftershock activity associated with the mainshock can be represented by

Σ_{i=1}^{∞} E_i / E_m ( 2-14 )

where E_m and E_i are the energies of the mainshock and the ith largest aftershock, respectively. E_1,
the energy of the largest aftershock, is proportional to the total energy, and E_1/E_m denotes the
aftershock activity (Utsu 2002). Compared with aftershocks, foreshocks are infrequent and
variable; even considerably large earthquakes are sometimes not preceded by foreshocks. All aftershocks
contribute to decreasing regional stress, depending on their magnitudes. It is believed
that the energy radiated in aftershocks is possibly more than the elastic energy transferred to the
higher-stress region by the mainshock. The main reason is that the stress drop in aftershocks,
triggered by the stress increase, is greater than the amount of stress transferred
(Shcherbakov, 2005). Damage mechanics is used to interpret the decay rate of aftershocks based
on the hypothesis that stress transfer raises the stress σ and strain ε beyond the yield stress
σ_y and yield strain ε_y in the vicinity of a mainshock. The applied strain ε_0 is taken as constant,
and the stress σ decreases to the yield value σ_y as stress relaxes through aftershock damage.
The damage variable α satisfies

σ - σ_y = E_0(1 - α)(ε - ε_y) ( 2-15 )

with σ_y = E_0 ε_y, where E_0 is the Young's modulus of the material.
2.3 Aftershock
Mainshocks in an earthquake sequence are followed by aftershock sequences. The mainshock
in an earthquake sequence has the largest magnitude; an aftershock cannot have a larger magnitude
than the original mainshock, otherwise the original mainshock is redefined as a foreshock.
Aftershocks are triggered by the increase in regional stresses arising from the mainshock and
play the role of relaxing the excess stress the mainshock caused (Shcherbakov, 2005).
In order to perform a reliable aftershock hazard assessment, it is essential to
determine the pattern of aftershocks, analyze variations of seismicity parameters, and detect
aftershock rate changes.
The b-value is involved in seismic hazard analysis to monitor the potential uncertainties of
seismicity. It is known that the ratio of the number of large-magnitude to small-magnitude
earthquakes follows

log N = a + bM ( 2-16 )

where N is the cumulative number of earthquakes of magnitude M or greater, and a is an empirical
constant (Gutenberg and Richter 1956). According to global and regional surveys of earthquakes,
b-values of the Gutenberg-Richter relationship are generally limited to -1±0.2 (Wesnousky 1999).
Another method estimates the b-value from earthquake magnitudes,

b = log e / (M̄ - M_z) ( 2-17 )

where M̄ is the mean magnitude of earthquakes with M ≥ M_z, and M_z is the threshold magnitude.
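Equations 2-16 and 2-17 can be sketched as follows. Note that the maximum-likelihood form of Eq. 2-17 (often attributed to Aki and Utsu) returns a positive b, whereas Eq. 2-16 as written here carries the sign inside b itself; the function names are illustrative:

```python
import math

def gr_cumulative_count(magnitude, a, b):
    """Eq. 2-16: log10(N) = a + b*M, with b typically near -1."""
    return 10.0 ** (a + b * magnitude)

def b_value_estimate(magnitudes, m_threshold):
    """Eq. 2-17: b = log10(e) / (mean(M) - M_z), over events with M >= M_z."""
    above = [m for m in magnitudes if m >= m_threshold]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - m_threshold)
```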
Some studies have validated the Gutenberg-Richter relationship. The Gutenberg-Richter law
is used to ascertain the relationship between the cumulative number of aftershocks with
magnitudes greater than m. Examples refer to Landers earthquake (m = 7.3, June 28, 1992),
Northridge earthquake (m = 6.7, January 17, 1994), Hector Mine earthquake (m = 7.1, October 16,
1999), and San Simeon earthquake (m = 6.5, December 22, 2003). Although the b-value changed
slightly for different earthquakes, the data presented a good linearity between the cumulative
number of aftershocks and magnitudes (Shcherbakov, Turcotte et al. 2005).
Isacks and Oliver (1964) claimed that it is reasonable to use the hypothesis of a constant b-value
to predict future earthquakes by extrapolating frequency-magnitude relations to higher
magnitudes. Extrapolation of the relation supports that an earthquake of magnitude 6 is likely
to occur in 600 years (Isacks and Oliver 1964). It is known that b-value variations can be used
as an indicator of failure both in rock samples and in crustal earthquakes (Lockner 1993). The
b-lines can alter significantly prior to and after an earthquake. Considerable evidence indicates
that earthquakes were preceded by periods of high b-values (Smith 1981). The variation of the
parameter b with earthquakes was investigated in five regions of New Zealand from 1955 to 1979.
One earthquake occurred during the drop in b-value after it peaked at 1.9 in 1974. Similarly,
an earthquake of magnitude 5.7 in 1973 was preceded by high b-values of around 1.6, and an
earthquake of magnitude 5.9 in 1971 also followed high values of b (Smith 1981). Besides, the
b-value before a large event tends to decrease due to the occurrence of foreshocks. A routine
procedure to detect b-value anomalies in aseismic regions with forthcoming shocks could help
predict earthquakes. It is noted that temporal changes in b-value are more important for
evaluation than the absolute peak b-value (Smith 1981).
The seismic potential in the seismic source zones is determined from the Gutenberg-Richter
relationship between the magnitude of earthquakes and the frequency of occurrence because the
curve can be projected to investigate the larger and less frequent earthquakes that have not occurred
yet (Krinitzsky 1993). It has been observed from the frequency-magnitude relations that b-value
changes prior to failure of rock in laboratory studies (Mogi 1963). From laboratory experiments,
it was found that Gutenberg-Richter relationships can be applied to microfracturing events
in rock just as to earthquakes (Mogi 1962). Lab tests with acoustic emission (AE) monitoring
show that the frequency-magnitude distribution of acoustic emission events satisfies power-law
relationships (Guarino, 1998). As events occur, the radiated energy and the
remaining time until failure follow an inverse power-law relationship (Shcherbakov, 2003).
Shcherbakov and Turcotte (2003) used damage mechanics to analyze the power-law distributions
of radiated energy and of the time to failure under increasing pressure, motivated by the good
agreement between the time to failure and the applied pressure. The b-value is inversely
proportional to the regional stress level (Scholz 1968, Huang and Turcotte 1988).
Motivated by efforts to decrease uncertainties in seismic hazard analysis, significant changes
prior to an earthquake are of particular concern. Some studies show that b-values decrease with increasing
stress. In addition, the velocity change of wave propagation in a rock mass is considered to be
indicative of failure of the rock mass. It is well known that a decrease in velocity is generally
associated with attenuation in the rock mass and an increased number of fractures (Lockner, Walsh
et al. 1977). However, a few laboratory measurements show exceptions: attenuation
fails to increase at low stress levels, and even decreases when stresses close the preexisting
fractures in a rock mass. Uniaxial tests indicate that P and S waves attenuate at high loading
levels close to the ultimate strength because of increasing shear stress and vertical cracks
normal to the direction of wave propagation (Lockner, Walsh et al. 1977). Although there are
some studies regarding velocity and attenuation, more studies in the field need to be performed.
Seismic attenuation could be used to predict rock bursts and earthquakes if the same
characteristics observed in lab tests could be seen in the field (Lockner, Walsh et al. 1977).
Uniaxial tests in laboratories show that P wave velocity increases in the early loading regime as
preexisting cracks and pores close under stress; velocity then decreases as new fractures open
under increasing stress (Masuda, Nishizawa et al. 1990). Major
earthquakes appear to exert influences on stress orientation. Two assumptions are established in
the inversion of earthquake focal mechanisms. First, stress is relatively homogeneous over the
spatial and temporal extent of the events. Second, focal mechanisms are significantly diverse and
can be investigated by displaying P and T axis distributions (Hardebeck and Hauksson 2001). It is
proposed that b-lines follow pronounced power-law progressions when considerably smaller
earthquakes are included (Krinitzsky 1993). According to the Gutenberg-Richter relation between
magnitude and recurrence, probabilistic seismic hazard analysis is conducted based on some
assumptions that are as follows (Krinitzsky 1993):
1. The b-values of large regions can indicate special geological structures including faults and
zones.
2. Earthquakes occur uniformly and randomly in certain spatial and temporal scale.
3. There is no influence between sources.
4. Projected b-lines can be employed to predict potential occurrences of earthquakes through
time.
It is concluded that both small earthquakes and large earthquakes contain self-similarities.
Earthquakes of different sizes have similar magnitude-frequency relations and yield similar b-
values. The b-value of aftershocks of the Fukui earthquake sequence is about 0.9, and the b-value
calculated from ten years of microearthquakes is about 1.1. The difference between them is
not significant in the statistical examination (Watanabe 1989). The linearity of occurrences of
earthquakes and magnitude is especially interrupted by large earthquakes (Krinitzsky 1993). In
addition, the distribution of earthquake magnitudes in the Gutenberg-Richter relationship b-line
indicates combinations of individual fault dimensions (Krinitzsky 1993). Seismic hazard analysis
using b-lines fulfills a compelling need in rock burst forecasting. Combined with practical
experience in structural engineering, seismic hazard maps are established for seismic hazard
analysis, to alert to potential dangers, and to mitigate hazards (Wesnousky, Scholz et al. 1984,
Wesnousky 1986). Underground observations indicate that seismicity in mining (mine tremors)
obeys the same magnitude-frequency relationship as earthquakes. Magnitude-frequency data for
events over 1 year at Harmony Gold Mine has been found to follow the Gutenberg-Richter
relationship (McGarr 1971). Moreover, some work has been done concerning the implications of
b-value change with regard to the stress change of the rock mass in mining. The probabilistic
methodology is applied to estimating the occurrence of earthquakes, despite the shortcomings,
which are:
1. The insufficiency of seismic data.
2. The unreliability of forecasting large earthquakes by using b-lines.
The temporal decay of aftershock activity is found to agree with Omori's law based on
considerable evidence. Omori's law quantifies the decay of seismic activity with elapsed time
after a mainshock occurring at the time origin (Ouillon, 2005). Omori's law is generally
described as

r = dN/dt = K / (c + t)^p ( 2-18 )

where r = dN/dt is the rate of occurrence of aftershocks with magnitudes greater than m, and t is
the time elapsed since the mainshock. K, p, and c are fitted parameters. Omori
(1894) originally defined the value of p as 1.
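The modified Omori law can be sketched directly. The cumulative form below assumes the classical p = 1 case, where the rate integrates to a logarithm:

```python
import math

def omori_rate(t, k, c, p=1.0):
    """Eq. 2-18: aftershock rate r(t) = K / (c + t)**p."""
    return k / (c + t) ** p

def omori_count(t, k, c):
    """Cumulative number of aftershocks for p = 1:
    N(t) = integral of r from 0 to t = K * ln((c + t) / c)."""
    return k * math.log((c + t) / c)
```

For p = 1 the cumulative count grows without bound, which is why modified forms with p > 1 are often fitted to real sequences.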
There is an important scaling law concerning aftershocks in terms of magnitude (Shcherbakov,
Turcotte et al. 2005). Båth (1965) presented that the difference in magnitude between the
mainshock and its largest aftershock remains constant, independent of the magnitude of the
mainshock. Båth's law is interpreted as

Δm = m_ms - m_as^max ( 2-19 )

where m_ms is the magnitude of the mainshock, m_as^max is the magnitude of the largest aftershock,
and the value of Δm is approximately 1.2.
and the largest aftershock is impacted by both magnitude scaling and the aftershock productivity
(Marsan, Prono et al. 2013). Mainshock and the largest aftershock can be statistically estimated by
using Bรฅthโs law and extrapolated Gutnerberg-Richter scaling (Shcherbakov, Turcotte et al. 2005).
Bรฅthโs law is further interpreted in the perspective of energy partitioning. It is found that the
average ratio of the total energy radiated in an aftershock sequence to the energy radiated by the
preceded mainshock is constant. The ratio of the drop in stored elastic energy due to the aftershock
to the drop in stored elastic energy due to the mainshock is the same with the previous ratio, which
is the radiated energy to the total drop in stored elastic energy (Shcherbakov, Turcotte et al. 2005).
The ratio of total radiated energy of aftershocks to the radiated energy of the mainshock is
interpreted as
๐ธ๐๐ = ๐ 10โ3 2โ๐โ ( 2-20 )
๐ธ๐๐ 3 โ๐
2
where E is the total radiated energy of aftershocks, E is the total radiated energy of the
as ms
mainshock, and ฮm* is the magnitude difference between the largest aftershock and the
mainshock.
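Equation 2-20 is straightforward to evaluate. As a sketch, with the commonly quoted values b near 1 and Δm* near 1.2, the aftershock sequence radiates only a few percent of the mainshock energy:

```python
def aftershock_energy_ratio(b, delta_m_star):
    """Eq. 2-20: E_as/E_ms = (b / (3/2 - b)) * 10**(-(3/2) * delta_m_star)."""
    return b / (1.5 - b) * 10.0 ** (-1.5 * delta_m_star)

# With b = 1.0 and delta_m_star = 1.2, the ratio is about 0.032.
```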
2.4 Tomography
Tomography has proven to be a highly useful tool for examining stress distribution in a rock
mass by generating images of its interior from mining-induced seismicity (Young and Maxwell
1992; Westman, 2004). It allows stress distribution to be examined remotely and
noninvasively. Using the underlying framework of earthquake analyses, numerous studies
employing seismic arrays have been carried out to improve mining safety (Friedel et al., 1995;
Friedel et al., 1996). Mining-induced seismicity provides sources for imaging the structure of rock
masses in mines. Techniques used in earthquake tomographic studies provide insight into improving safety
in underground mines. Tomographic studies are of great importance for describing the earth's
structure. Earthquake location and tomography are widely used tools to infer active and passive
structures of the earth's interior, whose features are analyzed by imaging seismic velocities and
earthquake locations (Monteiller, 2005). High-resolution earthquake tomography requires both
adequate seismological data and an appropriate inversion scheme. First, a simple smooth model
is established to interpret a weighted average of the observations. This initial velocity model is
then modified iteratively until a reasonable error estimate between observed and predicted values
is achieved (Kissling, Ellsworth et al. 1994). A minimum 1-D velocity model is established as an
initial reference model for the 3-D local earthquake tomography, and its quality needs to be
tested. In the velocity-inversion tests, initial models are assigned average velocities substantially
higher or lower than the minimum 1-D model. In addition, event locations are perturbed randomly
in all three spatial directions (shifted in x, y, and z) before being used as input to the 1-D model.
Velocity change in a 1-D model is mainly determined by lateral variations and local geology in
the shallow subsurface. The tests conducted by Haslinger (1999) show that the velocity model is
not well constrained in the shallow layer (0 km - 0.5 km) and below 30 km. The ratio V_p/V_s is
approximately 1.85 from 0.5 km to 20 km, while the P and S velocities and the V_p/V_s ratio are
essentially unconstrained in the top layer (0 - 0.5 km) and at depths greater than 30 km.
3-D inversions of the travel time data set with the same layer velocities as the minimum 1-D
velocity model show fewer artifacts than those starting from an a priori 1-D model (Kissling,
Ellsworth et al. 1994). As a first step, a 1-D velocity model with the corresponding stations and
travel times is used to relocate events through a trial-and-error procedure. Selected data consisting
of high-quality events are then used in the 3-D tomographic inversion (Haslinger, 1999). In the
tomographic inversion, a 3-D grid is established to provide good coverage of the raypaths. The
velocity along a raypath and the velocity partial derivatives are computed by linear interpolation
on the surrounding grid points (Eberhart-Phillips, 1986). The velocity value at each point is usually
calculated from the velocity values at the surrounding eight nodes by tri-linear interpolation
(Thurber, 1999). To ensure enough information for calculating the velocity at a grid point, the
grid spacing is arranged so that abundant raypaths pass near each grid point; the spacing need not
be uniform (Eberhart-Phillips, 1986). For example, a horizontal and a vertical grid with
10 × 10 km node spacing may cover an area of 100 × 100 km, with depths ranging from the
surface to 40 km (Haslinger, 1999).
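The eight-node tri-linear interpolation mentioned above can be sketched as follows; the unit node spacing and velocity values are invented for illustration.

```python
import numpy as np

# Tri-linear interpolation of velocity from the eight grid nodes that
# surround a query point, as in Thurber-style node parameterizations.
# Unit node spacing and the velocity values are invented for illustration.

def trilinear(v, x, y, z):
    """Interpolate v (regular unit-spaced grid, v[i, j, k]) at (x, y, z)."""
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    value = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                # weight of each corner node is the product of 1-D weights
                w = (fx if di else 1 - fx) * \
                    (fy if dj else 1 - fy) * \
                    (fz if dk else 1 - fz)
                value += w * v[i + di, j + dj, k + dk]
    return value

v = np.full((3, 3, 3), 6000.0)       # uniform 6000 m/s block
v[1:, 1:, 1:] = 6200.0               # one faster corner region
print(trilinear(v, 0.5, 0.5, 0.5))   # weighted average of 8 corner nodes
```

At the cell center all eight weights are 1/8, so the result is the plain average of the surrounding node velocities.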
The damped least-squares method is a commonly used velocity inversion technique.
Parameters, including the damping, have to be set before the inversion. Damping is added to the
diagonal elements of the separated medium matrix to suppress large model changes, which
usually occur for singular values near zero. The optimal damping value can be determined from
the trade-off between model variance and data variance over multiple tests (Eberhart-Phillips,
1986). A traditional approach is to pick the ratio of data variance to model variance as the
damping value; however, this value is likely to be unreasonably small, and with small damping
the velocity oscillates strongly from one grid point to another. Consequently, the damping value
is picked empirically for the optimal iteration. Multiple iterations with different damping values
are performed to decrease both the data variance and the solution variance. As noted by
Haslinger (1999), an appropriate damping value produces a pronounced reduction in data
variance with only a moderate growth in model roughness. Damping and smoothing are two
regularization parameters, and an "L-curve" trade-off analysis of data variance versus model
variance is used to select the optimal parameters when damping is applied. Zhang (2007)
provided an example of how trade-off curves of data variance and model variance are used for
different damping and smoothing weight values.
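The trade-off test can be illustrated on a toy damped least-squares system; G and d below are synthetic, not mine data, and in practice the "L-curve" corner would be picked from the printed table.

```python
import numpy as np

# Damping trade-off sketch for damped least squares: solve
# (G^T G + damping^2 I) m = G^T d for several damping values and tabulate
# data variance against model variance. G and d form a synthetic toy
# system, not mine data.

rng = np.random.default_rng(0)
G = rng.normal(size=(60, 10))                    # toy sensitivity matrix
m_true = rng.normal(size=10)
d = G @ m_true + 0.1 * rng.normal(size=60)       # noisy "travel times"

def damped_solution(damping):
    m = np.linalg.solve(G.T @ G + damping**2 * np.eye(10), G.T @ d)
    return np.mean((d - G @ m) ** 2), np.mean(m**2)

for damping in (0.01, 0.1, 1.0, 10.0, 100.0):
    data_var, model_var = damped_solution(damping)
    print(f"damping={damping:7.2f}  data_var={data_var:.4f}  model_var={model_var:.4f}")
```

As the text describes, increasing the damping raises the data variance while shrinking the model variance; the preferred value sits where both are moderate.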
In Eberhart-Phillips's (1986) study, the modeled area was designed not to cover all the
earthquakes and stations; events and stations outside the modeled area help improve the raypath
distribution inside it. The density of raypath coverage changes significantly over the modeled
area, and high raypath density provides the information necessary for a more detailed velocity
model. Grid points are arranged so that reasonable resolution can be obtained at most of them
from the distribution of stations and hypocenters. The high- and low-velocity regions keep a very
similar distribution, with only slightly changed amplitudes of the velocity anomalies, even when
the arrangement of grid points differs. Based on this feature, the range of velocity anomalies can
be estimated by using different arrangements of grid points. Note that high-resolution regions of
the tomogram require smaller grid spacing in the velocity inversion (Eberhart-Phillips, 1986). A
starting model with initial velocities is established for the initial inversion, and tests with different
initial velocities can be performed to select the optimal values. Several indices are used to assess
how well the velocity is constrained and solved at each grid point. The hit count is a simple and
direct estimate of the effectiveness of the model: it is the total number of raypaths involved at a
grid point in the solution process. The relative number of raypaths traveling around a grid point
is another important index, indicated by the derivative weight sum (DWS). The DWS is
expressed as
DWS(m_l) = Σ_i Σ_j ∂T_ij/∂m_l   ( 2-21 )

where the sum is taken over all N events i and the stations j recording them; T_ij is the travel time
from an earthquake i to a seismic station j, and m_l (l = 1, ..., L) are the L parameters of the
velocity model. The model partial derivatives ∂T_ij/∂m_l are line integrals along the raypath, and
the influence of a model parameter m_l on the travel time T_ij is indicated by ∂T_ij/∂m_l.
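A small numerical sketch of this sum: each node accumulates the travel-time partial derivatives of the rays that sample it. The three-ray, five-node system and the DWS > 0.5 cut below are invented stand-ins for a real geometry and the DWS > 5 criterion.

```python
import numpy as np

# Derivative weight sum, Eq. (2-21): each velocity node accumulates the
# travel-time partial derivatives dT_ij/dm_l of every ray that samples it.
# partials[i, l] ~ dT_i/dm_l for ray i and node l (zero where the ray
# never passes near node l); values are synthetic.
partials = np.array([
    [0.8, 0.2, 0.0, 0.0, 0.0],
    [0.1, 0.7, 0.2, 0.0, 0.0],
    [0.0, 0.3, 0.6, 0.1, 0.0],
])
dws = partials.sum(axis=0)       # one DWS value per node
well_sampled = dws > 0.5         # nodes with enough raypath coverage
print(dws, well_sampled)
```

Nodes that no ray passes near end up with a DWS of zero and would be excluded from the inversion.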
The DWS quantifies the relative raypath density in the volume around a grid point and weights the
significance of each raypath by its distance to the grid point (Haslinger, 1999). Only the inversion
grid nodes whose derivative weight sum (DWS) values are greater than five are used in the
tomographic analysis. The DWS can be used to assess the model resolution of an inversion,
especially for large inverse problems (Zhang, 2007). Another important index for measuring
inversion quality is the resolution matrix, which assesses the dependency of the solution for one
model parameter on all the other model parameters. The raypath density, or the resolution values
over the grid, can be increased by decreasing the damping value used in the iteration; however,
the velocities at grid points are then likely to be unreasonable because of the large velocity
oscillations that arise from a small damping value. The general approach is to choose a damping
value that yields both moderate data variance and moderate solution variance. Moreover, accuracy
is not improved when large resolution values are produced in parts of the grid with low raypath
density (Eberhart-Phillips, 1986). Tomographic studies can only resolve features of regions larger
than the grid spacing. Although velocity varies within the grid spacing, the velocity at a grid point
reflects the average velocity distribution within the surrounding volume. The size and shape of
the imaged anomalies allow a reasonable assessment of velocity structure, but they do not give a
rigorous and exact estimate of the boundaries of velocity features. The reliability of tomographic inversion methods and
the quality of the images can be assessed by using a synthetic velocity model, such as the
checkerboard test (Lévêque, 1993). Numerical tests show that the source-receiver geometry must
be taken into account when building a checkerboard test (Monteiller, 2005). A checkerboard test is usually
applied by perturbing the velocity model and computing theoretical travel times with the real
source-receiver geometry. Checkerboard studies indicate that the spatial extent of valid
tomographic reconstruction is mainly controlled by the ray distribution. An example of a
checkerboard test for double-difference tomography is presented by Monteiller (2005): a 200 m/s
sinusoidal perturbation is applied to a velocity model gridded at 1 km. The tests show that the
model is estimated in a reasonable volume with a correct resolution of the tomographic
inversion (Monteiller, 2005).
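The checkerboard input model itself is simple to construct; a sketch with a ±200 m/s alternating perturbation on a 10 × 10 grid of 1 km nodes, where the 6000 m/s background and the grid size are assumptions, not values from the study:

```python
import numpy as np

# Checkerboard input model: +/-200 m/s sinusoidal perturbation on a 1 km
# grid, superimposed on an assumed 6000 m/s background. Synthetic travel
# times through this model would then be computed with the real
# source-receiver geometry and inverted to see how well it is recovered.

nx, ny = 10, 10
X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
background = 6000.0                                    # m/s (assumed)
perturb = 200.0 * np.sin(np.pi * (X + 0.5)) * np.sin(np.pi * (Y + 0.5))
v = background + perturb                               # alternating cells
print(v[:3, :3].round(1))                              # corner of the board
```

Smearing or loss of the alternating pattern in the recovered model then marks the regions where the ray coverage cannot support the inversion.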
Under the assumption of a uniform medium, waveforms are taken to propagate at a uniform
velocity along straight-line raypaths from sources to receivers. Various algorithms are applied in
ray tracing techniques to estimate the raypath. A fast two-point ray tracing algorithm was
developed to approximate the ray by minimizing the travel time (Um, 1987); it aims to balance
computation time against the accuracy of the travel times. Starting from an initial guess of the
raypath, iterative perturbations are performed to find the raypath with the minimal travel time.
A further enhancement of the fast two-point algorithm uses the simplex search method, distorting
the raypath from a starting path in a systematic way until the travel time is minimized; the
simplex method can reduce the computation required for the raypath search (Prothero, 1988).
The inversion in tomographic studies is usually linearized, even for non-linear seismic
tomography problems. The final iteration provides the model resolution matrix, which
summarizes the quality of the inversion. Synthetic tests can be used to estimate the model
resolution and uncertainty by performing a velocity inversion on a synthetic data set that has the
same distribution as the real data (Zhang, 2007). Inversion of synthetic data indicates the quality
of an inversion scheme. Synthetic models are usually combined with velocity anomalies
generated from the real data, synthetic travel times are computed with the real distribution of
sources and receivers, and the synthetic data are computed with the same parameters used for
the real data.
In seismic tomography research, local earthquake tomography (LET) is believed to provide
higher-resolution imaging of velocity structures than teleseismic tomography (Thurber, 1999). A
shortcoming of local earthquake tomography, however, is the high variability of model sampling
arising from the non-uniform source distribution of seismic events. Double-difference
tomography algorithms are based on some form of linearization of the travel time equation in a
first-order Taylor series that relates the difference between the observed and predicted travel
times to unknown adjustments in the hypocentral coordinates through the partial derivatives of
travel time with respect to the unknowns (Waldhauser 2001). The DD technique is based on the
fact that if the hypocentral separation between two seismic events is small compared to the event-
station distance and the scale length of velocity heterogeneity, then the ray paths between the
source region and a common station are similar along almost the entire ray path. In this case, the
difference in travel times for two events observed at one station can be attributed with high
accuracy to the spatial offset between the events.
Double-difference equations are built by differencing Geiger's equation for earthquake
location. In this way, the residual between the observed and calculated travel-time difference (or
double difference) between two events at a common station is related to adjustments in the
relative positions of the hypocenters and origin times through the partial derivatives of the travel
times for each event with respect to the unknowns. HypoDD calculates travel times in a layered
velocity model (where velocity is assumed to depend only on depth) for the current hypocenters
at the station where the phase was recorded.
The double-difference method was first presented by Waldhauser and Ellsworth (Waldhauser
and Ellsworth 2000). It is an efficient approach for determining hypocenter locations over large
distances because it can incorporate ordinary absolute travel-time measurements and cross-
correlation P and S wave differential travel-time measurements. In double-difference
tomography, the arrival time T_k^i from an earthquake source i to a receiver k is written using
ray theory as a path integral,

T_k^i = τ_i + ∫ u ds   ( 2-22 )

where τ_i is the origin time of event i, u is the slowness field, and ds is an element of path length
along the ray from source i to receiver k. The source is defined as the centroid of the hypocenters.
The residual between the observed and calculated differential travel times of two events i and j
at a common station k is defined as

dr_k^ij = (t_k^i − t_k^j)^obs − (t_k^i − t_k^j)^cal   ( 2-23 )
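Equation (2-23) is simple enough to state directly in code; the pick times below are fabricated for illustration.

```python
# Double-difference residual of Eq. (2-23) for one station k and an event
# pair (i, j): observed differential travel time minus the calculated one.
# The pick times below are fabricated for illustration.

def dd_residual(t_obs_i, t_obs_j, t_cal_i, t_cal_j):
    """(t_i - t_j)^obs - (t_i - t_j)^cal at a common station."""
    return (t_obs_i - t_obs_j) - (t_cal_i - t_cal_j)

# Arrival times (s) of events i and j at the same station:
r = dd_residual(t_obs_i=2.431, t_obs_j=2.418, t_cal_i=2.440, t_cal_j=2.435)
print(f"double-difference residual = {r * 1e3:.1f} ms")
```

The relocation step adjusts the relative hypocenters and origin times to drive residuals of this kind toward zero over all stations and event pairs.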
On the basis of previous approaches using waveform cross-correlation, Haijiang Zhang and
Clifford H. Thurber developed a double-difference seismic tomography method (tomoDD)
employing both absolute and relative arrival times (Zhang and Thurber 2003). The relative arrival
times are used directly to determine relative event locations, and absolute arrival time picks are
employed to minimize differences among relative arrival times.
Because event locations and the velocity model affect each other, solving simultaneously for
event locations and velocity structure is suggested by Haijiang Zhang (2007). The tomoDD
(double-difference tomography) code is designed to jointly solve for the velocity structure and
seismic locations. Although joint solving increases the number of parameters in the inversion and
iteration, optimal parameters can be selected by testing and trade-off analysis. An important goal
of seismic tomography is to improve the estimates of the model parameters, including the velocity
structure and the locations of seismic events. Perturbations to the model parameters are solved
iteratively to minimize the root-mean-square (RMS) misfit (Thurber, 1999). Zhang and Thurber
(2007) compared velocity inversions with and without smoothing constraints on the slowness
perturbations and showed that the iteration system with a smoothing constraint is better
conditioned and more stable. In the same manner, a trade-off analysis of the damping value is
carried out. The event locations and velocity model can be solved using the LSQR or SVD
method; the LSQR method is preferable for large sensitivity matrices since it is usually much
faster than SVD.
The velocity in the shallow layers of the crust is significantly affected by physical factors,
including composition, fluids, saturation, temperature, and ambient pressure. Analysis of seismic
wave behavior can reveal the influence of these physical conditions. Crack density and saturation
ratio can be computed from the inversion of P-wave and S-wave arrival times, and high-resolution
seismic tomography provides this information for specific regions. The ratio between the P-wave
and S-wave velocities identifies the crack density and saturation ratio. The S-wave is useful for
inferring fluid accumulation in the rock mass because S-wave velocity depends on the density
and rigidity of the rock, and S-waves cannot pass through liquids. Crack density and saturation
ratio can thus be discriminated for rocks with the same P-wave velocity but different S-wave
velocities (Serrano, 2013). High porosity (Ψ) areas usually
have the same distribution as high crack density (ε) areas. The porosity Ψ has been defined as the
product of the P-wave and S-wave velocities,

Ψ = V_P × V_S   ( 2-24 )

so that

ln Ψ = ln V_P + ln V_S   ( 2-25 )

dΨ/Ψ = dV_P/V_P + dV_S/V_S   ( 2-26 )

where dΨ/Ψ is the fractional change in porosity, and dV_P/V_P and dV_S/V_S are the fractional
perturbations of V_P and V_S. Also, the relation between the ratio V_P/V_S and Poisson's
ratio σ is

(V_P/V_S)^2 = 2(1 − σ)/(1 − 2σ)   ( 2-27 )
According to these relations, it is inferred that areas with active aftershock activity and low
Poisson's ratios are more likely to be brittle parts of the fault zones. Similarly, the hypocentral
region of the mainshock is found to have a high Poisson's ratio and low P-wave and S-wave
velocities (Zhao, 1998).
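Equation (2-27) can be inverted to obtain Poisson's ratio from an observed V_P/V_S; a small sketch using the ≈1.85 ratio quoted earlier for 0.5-20 km depth.

```python
# Poisson's ratio from Vp/Vs by inverting Eq. (2-27):
# (Vp/Vs)^2 = 2(1 - sigma)/(1 - 2 sigma)
#   =>  sigma = (r^2 - 2) / (2 (r^2 - 1)), with r = Vp/Vs.

def poisson_from_vpvs(r):
    """Poisson's ratio implied by a Vp/Vs ratio r (valid for r > sqrt(2))."""
    return (r ** 2 - 2.0) / (2.0 * (r ** 2 - 1.0))

print(f"Vp/Vs = 1.85  ->  sigma = {poisson_from_vpvs(1.85):.3f}")
```

A V_P/V_S of √3 gives the familiar σ = 0.25 of a Poisson solid, so the 1.85 ratio corresponds to a somewhat higher Poisson's ratio.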
2.5 Summary
By showing the usefulness of earthquake data in assessing seismic risk in the earth's
interior, the studies reviewed above provide insight into evaluating the seismic risk posed by
mining-induced seismicity in deep underground mines. Such information is important for
engineering applications in underground mines. In addition to its applications to natural
earthquakes, some studies indicate that tomography is useful for detecting rock-burst-prone zones
by examining the stress distribution. However, these studies infer the potential seismic risk
merely from the locations of clusters of microseismicity and the stress distribution. To explore
the application of double-difference tomography to predicting potential seismic risk in mines,
historical seismicity sequences, including major events and the microseismicity from rock bursts,
need to be analyzed using double-difference tomography.

The response of the stress distribution to the occurrence of major events gives mining
engineers an understanding of the conditions of potential rock bursts. Owing to the apparent lack
of tomographic studies on changes in the stress distribution associated with mining-induced
major events in underground mines, it is necessary to examine accurately the stress distribution
surrounding major events. Empirical verification by double-difference tomographic studies of
seismic sequences in underground mines is useful for providing professional judgment on the
stress distribution and the potential seismic risks.
Hodgson, K., 1967. The behaviour of the failed zone ahead of a face, as indicated by continuous
seismic and convergence measurements. COM Res. Rep 31: 61.
Hodgson, K. and N. Joughin, 1966. The Relationship between Energy Release Rate Damage and
Seismicity in Deep Mines. The 8th US Symposium on Rock Mechanics, American Rock
Mechanics Association.
Huang, J. and D. Turcotte, 1988. Fractal distributions of stress and strength and variations of b-
value. Earth and planetary science letters 91(1): 223-230.
Isacks, B. and J. Oliver, 1964. Seismic waves with frequencies from 1 to 100 cycles per second
recorded in a deep mine in northern New Jersey. Bulletin of the Seismological Society of
America 54(6A): 1941-1979.
Ishida, T., et al., 2010. Source distribution of acoustic emissions during an in-situ direct shear test:
Implications for an analog model of seismogenic faulting in an inhomogeneous rock mass.
Engineering Geology 110(3): 66-76.
Kagan, Y. Y., 1994. Observational evidence for earthquakes as a nonlinear dynamic process.
Physica D: Nonlinear Phenomena 77(1): 160-192.
Kanamori, H., 1977. The energy release in great earthquakes. Journal of Geophysical Research
82(20): 2981-2987.
Kissling, E., et al., 1994. Initial reference models in local earthquake tomography. Journal of
Geophysical Research: Solid Earth (1978โ2012) 99(B10): 19635-19646.
Krinitzsky, E. L., 1993. Earthquake probability in engineeringโPart 1: The use and misuse of
expert opinion. The Third Richard H. Jahns Distinguished Lecture in engineering geology.
Engineering Geology 33(4): 257-288.
Krinitzsky, E. L., 1993. Earthquake probability in engineeringโPart 2: Earthquake recurrence and
limitations of Gutenberg-Richter b-values for the engineering of critical structures: The
third Richard H. Jahns distinguished lecture in engineering geology. Engineering Geology
36(1): 1-52.
Kusznir, N., et al., 1980. Mining induced seismicity in the North Staffordshire coalfield, England.
International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts,
17, 45-55.
Lockner, D., 1993. The role of acoustic emission in the study of rock fracture. International Journal
of Rock Mechanics and Mining Sciences & Geomechanics Abstracts, 30(7): 883-899.
Lockner, D., et al., 1977. Changes in seismic velocity and attenuation during deformation of
granite. Journal of Geophysical Research 82(33): 5374-5378.
Marsan, D., et al., 2013. Monitoring aseismic forcing in fault zones using earthquake time series.
Bulletin of the Seismological Society of America 103(1): 169-179.
Masuda, K., et al., 1990. Positive feedback fracture process induced by nonuniform highโpressure
water flow in dilatant granite. Journal of Geophysical Research: Solid Earth (1978โ2012)
95(B13): 21583-21592.
McGarr, A., 1971. Violent deformation of rock near deep-level, tabular excavationsโseismic
events. Bulletin of the Seismological Society of America 61(5): 1453-1466.
McGarr, A., 1976. Seismic moments and volume changes. Journal of Geophysical Research 81(8):
1487-1494.
McGarr, A. and R. Green, 1975. Measurement of Tilt in a DeepโLevel Gold Mine and its
Relationship to Mining and Seismicity. Geophysical Journal International 43(2): 327-345.
Mogi, K., 1962. Study of elastic shocks caused by the fracture of heterogeneous materials and its
relations to earthquake phenomena. Bulletin of the Earthquake Research Institute 40, 125-
147.
Mogi, K., 1963. Magnitude-frequency relation for elastic shocks accompanying fractures of
various materials and some related problems in earthquakes. Bulletin of the Earthquake
Research Institute 40, 831-853.
Richardson, E. and T. H. Jordan, 2002. Seismicity in deep gold mines of South Africa: Implications
for tectonic earthquakes. Bulletin of the Seismological Society of America 92(5): 1766-
1782.
Richter, C. F., 1958. Elementary seismology. W. H. Freeman and Co., San Francisco.
3 Seismic velocity change due to large magnitude events in deep
underground mines
3.1 Abstract
Passive seismic monitoring was used to measure wave propagation in deep rock masses. About
30,000 mining-induced seismic events recorded in two hard rock mines, the Creighton Mine and
the Kidd Mine, were investigated. The accurate travel times of source-receiver pairs allowed
temporal changes in velocity to be demonstrated. The average P-wave velocity was observed to
be about 6000 m/s in the mining regions of Creighton Mine and Kidd Mine. Coseismic and
postseismic velocity changes have previously been examined for crustal earthquakes. Velocity
investigations of three months of data around large events in the deep hard rock mines show a
significant decrease in the average seismic wave velocity after the large events. The sudden drop
in average velocity after large events implies the creation of new fractures in the rock mass.
Additionally, the average velocity was observed to return to a normal level after several days; a
reasonable explanation for this postseismic velocity increase is that opened cracks began to heal
as they closed.
3.2 Introduction
Large magnitude events are defined as mining-induced seismic events with a moment
magnitude greater than 0. It has also been suggested that the moment magnitude of rockbursts
usually ranges from 1.5 to 4.5, while low magnitude microseismic events have a moment
magnitude less than 0 (Young, Maxwell et al. 1992). Studies of large magnitude events in
underground mining have been driven by the desire to predict rock bursts and mine failures
(Ouillon and Sornette 2000). By investigating the average P-wave velocity through the rock mass
during and after large magnitude events, it is possible to evaluate the stability of rock masses.
Velocity changes in natural earthquake studies have provided an important framework for
understanding how large seismic events affect velocities. Previous studies attempted to establish
a functional relationship between velocity and stress changes in the shallow crust (Nur 1971).
The effects of stress on seismic velocities in dry rock at various temperatures and pressures
have been investigated extensively, and the mechanism of attenuation has been interpreted in
different ways (Lockner, Walsh et al. 1977). Significant progress has been made in seismological
studies of velocity change caused by large magnitude events (Schaff and Beroza 2004). However,
little work has been performed on measuring velocity changes influenced by large magnitude
events caused by underground mining, and more studies of how such events influence velocities
in the surrounding rock mass are needed. A highly stratified rock mass containing abundant layers
can be regarded as a homogeneous, transversely isotropic continuum, and as an equivalent
medium subject to a homogeneous stress distribution, if the dimension of the mining excavation
is sufficiently large (Salamon 1968). Large magnitude events are likely responsible for velocity
changes through stress redistribution in the rock mass.
The purpose of this paper is to evaluate the effects of large magnitude events on the variation
of P-wave velocities and to discuss the importance of observing velocity changes caused by large
magnitude events in mining. To interpret the effects of large magnitude events in terms of seismic
P-wave velocities, we present two case studies of velocity change before and after large
magnitude events in two hard rock underground mines.
3.3 Velocity and seismicity investigation
In crustal earthquake research, microearthquakes detected and registered by a seismic network
are used to study the effects of large earthquakes (Schaff and Beroza 2004). Coseismic and
postseismic velocity changes were found in the work by Schaff and Beroza (2004): a large,
approximately 2.5% coseismic velocity decrease was recorded after the 1989 Loma Prieta
earthquake, and the velocity subsequently returned to its original value.

It has also been reported that a crustal earthquake reduces seismic wave speeds, which
subsequently recover to their original level (Vidale and Li 2003). The velocity changes of the
P-wave and S-wave before and after the crustal earthquake were investigated, and it was inferred
that the rupture arising from the earthquake impedes wave propagation, while the recovery of
seismic wave speed is consistent with healing caused by the closure of cracks (Vidale and Li 2003).
Mining-induced seismicity arises in the vicinity of excavations in rock masses that are
originally in a state of static equilibrium. Mining activities, including extraction, blasting, and
injection, are the main triggers of this seismicity: a rock mass generates seismic waves as its
original state of stress is disturbed by human activities such as excavation and production blasts
(Cook 1976). Mining-induced seismicity is the source of wave propagation, which is governed by
the characteristics of the media through which the waves travel. An increasing rate of seismicity
is a signature of fractures developing and coalescing, and mining-induced seismicity is triggered
more frequently as more ruptures develop through the rock mass (Iannacchione, Esterhuizen et
al. 2005). In addition, geological structures can be inferred from seismicity, which tends to occur
along shear faults due to high shear wave energy (Christensen and Mooney 1995).
It is known that an increase in the level of seismicity occurs prior to a large magnitude event
(Nur 1971). The noticeable pattern of microseismicity before, during, and following large
magnitude events reflects the typical accumulation of seismic energy. In underground mining,
seismic events are observed directly by the network of seismic stations, and seismic waveforms
are recorded by microseismic monitoring systems. In addition to locating seismic events, the
travel times from the hypocenters of seismic events to the seismic stations are estimated and
analyzed. Velocity fitting is then performed on the travel distances and travel times to obtain the
average velocity over all of the wave raypaths. It is postulated that, after the occurrence of large
magnitude events, the seismic velocity of the region around the events will decrease.
3.4 Velocity fit using mining-induced seismicity
P-wave raypaths vary due to the lateral heterogeneity of the rock mass (Panza, Romanelli et
al. 2001). In crustal structure, raypaths have been shown to be approximately circular paths of
varying radii of curvature (Thurber 1983). Factors including sedimentary geologic structures and
fluid media affect the propagation of raypaths. Due to the absence of fluids within the zone of
underground mining, the raypaths can be assumed to be straight lines at the scale of mine sites.
Figure 3.1 illustrates the arrival times of a single event recorded by multiple stations. Based on
the arrival times of the waveforms from the seismic source and the distances between the source
and the stations, the linear fit in Figure 3.1 represents the arrival time-distance pairs of the event
very well. The records of seismic events appear reliable because the arrival time increases linearly
with the distance between the event and the different stations.
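The fit amounts to an ordinary least-squares regression of arrival time on source-receiver distance, with the reciprocal slope giving the average velocity; the picks below are synthetic values scattered around 6000 m/s, not actual Creighton data.

```python
import numpy as np

# Least-squares velocity fit for a single event recorded at several stations.
# Arrival time ~ t0 + distance / v, so 1/slope estimates the average P-wave
# velocity along the raypaths. Distances, origin offset, and pick noise are
# all synthetic.

dist = np.array([850.0, 1200.0, 1650.0, 2100.0, 2600.0])     # m
t0 = 0.020                                                    # s
noise = np.array([0.4, -0.3, 0.2, -0.5, 0.1]) * 1e-3          # pick errors, s
arrivals = t0 + dist / 6000.0 + noise

slope, intercept = np.polyfit(dist, arrivals, 1)
print(f"fitted velocity = {1.0 / slope:.0f} m/s, t0 = {intercept * 1e3:.2f} ms")
```

Repeating this fit over all events in a time window yields the window's average velocity, which is the quantity tracked through time in the analysis below.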
Figure 3.1. Linear fitting for single event of Creighton Mine; showing arrival time of seismic energy
versus the distance between the event and sensor locations.
The microseismic events were divided into groups of equal time duration (two weeks), and
the average velocity of each group was obtained by a linear velocity fit. Velocity change provides
key information on the stability of rock masses in underground mining. The propagation velocity
of seismic waves is governed by the density, fractures, and geologic structure of the rock mass;
in addition, tremors during large magnitude events can cause changes in seismic velocity
(Wegler, Nakahara et al. 2009). As a consequence of stress variation, seismic velocity change
reflects the stress change in space and time.

Temporal changes in seismic velocity in underground mines showed that sudden drops in
velocity coincided with irregular seismic activity, including large magnitude events and highly
active microseismicity. Fracture development triggered during and after large magnitude events
can weaken subsequent wave propagation; there is thus a coseismic decrease in velocity with a
large magnitude event. The velocity generally stays at a lower level for some time after the large
magnitude event because of the fractures around the rupture zone. In natural earthquake studies,
some measurements suggested that a velocity drop of 5% preceded M = 4 earthquakes (Lukk
and Nersesov 1978). The velocity finally recovers to normal levels, some time after the large
magnitude event, due to crack healing effects (final closure of the cracks normal to the stress).
3.5 Case Study
Data sets of mining-induced seismicity recorded in two hard-rock underground mines,
Creighton Mine and Kidd Mine, were analyzed. The greatest depth of excavation is 2440 m in
Creighton Mine and 1500 m in Kidd Mine. The mining-induced seismicity was well located by
the mines' seismic networks. A total of about 30,000 seismic events, nearly 25,000 in Creighton
Mine and about 5,000 in Kidd Mine, were investigated in the velocity change study.
Creighton Mine
Seismicity data from Creighton Mine, an underground nickel mine in Sudbury, Canada, were
recorded from May 25, 2011 to September 25, 2011. The mining methods used have varied
significantly over the course of its mine life. Since 2008, the large-diameter blasthole method associated with vertical retreat mining has been used at Creighton Mine. The depth of the main production area ranges from 1829 m (6000 ft) to 2438 m (8000 ft) (Malek, Espley et al. 2008).
The microseismic monitoring system consists of 62 stations, including 10 triaxial and 52 uniaxial accelerometers. The stations were placed at depths from 1508 m (4947 ft) to 2392 m (7847 ft) to provide good coverage of the rock masses adjacent to the main production areas.
The microseismic monitoring system provided real-time seismic monitoring and recorded the
location and trigger time of each event. As shown in Table 3.1, Creighton Mine experienced four
large magnitude events, which are seismic events with a moment magnitude larger than 1.0, at a
depth of approximately 2500 m (8200 ft) in July 2011.
Table 3.1. Times, Locations, and Magnitude of Major Events in Creighton Mine
Major Events Date Time North (m) East (m) Depth (m) Magnitude
1 July, 6th 2011 8:41 AM 1927 1399 2332 3.1
2 July, 6th 2011 8:46 AM 1914 1478 2333 1.2
3 July, 6th 2011 8:47 AM 1879 1456 2284 1.3
4 July, 10th 2011 2:44 AM 1853 1385 2392 1.4
According to the seismic records, microseismicity was remarkably active both between the large magnitude events and in the period after them. A common characteristic of these large magnitude events is that each followed shortly after a production blast: a 1134 kg production blast preceded the first three large magnitude events, and the last large magnitude event occurred one minute after a production blast.
Based on the premise of a linear relationship between arrival time and source-receiver distance, the velocity of wave propagation along each raypath can be calculated from the travel distance and travel time of that raypath (Figure 3.1). The distribution of travel time-distance pairs was observed to follow a linear relationship (Figure 3.2). The average velocity of the region was computed by velocity fitting using all the raypaths traveling through the target area.
Figure 3.2. Linear fitting of travel time with travel distance
Using all of the raypaths propagating through the rock mass, the arrangement of travel time-
distance pairs from sources to stations follows a line (Figure 3.2), the slope of which is the average
velocity of all raypaths propagating through the rock mass. Note that the fitting line passes through the origin, since a travel time of zero implies zero travel distance. As can be seen in Figure 3.2, a second linearly distributed swarm of points lies beside the dominant swarm fitted by the regression line, implying that S-wave arrivals (scatter at lower velocity) are mixed with the P-wave picks in the data set. Travel times corresponding to apparent velocities larger than 9000 m/s or smaller than 3000 m/s were therefore removed, and a robust fit method was used for the velocity linear fitting to lessen the effect of the remaining outliers (Dumouchel and O'Brien 1991; Cleveland 1979). Error analyses of the different regression methods indicated that the residuals of the robust fitting were lower than those of the ordinary fitting (Figure 3.3), and the fluctuations of the robust residuals were much smaller. The robust fitting therefore provided a more accurate velocity fit for the seismic waves.
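A minimal sketch of this filtered, robust, through-origin fit: the 3000-9000 m/s thresholds follow the text, while iteratively reweighted least squares with Huber weights is one common robust method, an assumption rather than the exact scheme used.

```python
import numpy as np

def robust_velocity_fit(travel_times, distances,
                        v_min=3000.0, v_max=9000.0, n_iter=20):
    """Robustly fit velocity = distance / travel time through the origin."""
    t = np.asarray(travel_times, float)
    d = np.asarray(distances, float)

    # Discard raypaths with implausible apparent velocities (likely S-wave
    # arrivals or mispicks), as described in the text.
    keep = (d / t >= v_min) & (d / t <= v_max)
    t, d = t[keep], d[keep]

    v = (t @ d) / (t @ t)                    # ordinary least-squares start
    for _ in range(n_iter):
        r = np.abs(d - v * t)                # absolute residuals, metres
        scale = 1.4826 * np.median(r)        # robust scale estimate (MAD)
        c = 1.345 * max(scale, 1e-9)         # Huber tuning constant
        w = np.minimum(1.0, c / np.maximum(r, 1e-12))  # Huber weights
        v = ((w * t) @ d) / ((w * t) @ t)    # weighted slope through origin
    return v

# Check: 6000 m/s raypaths plus slow outliers that the filter removes.
rng = np.random.default_rng(1)
d = rng.uniform(300.0, 2500.0, 300)
t = d / 6000.0
t[:30] = d[:30] / 2500.0                     # apparent 2500 m/s, filtered out
v_fit = robust_velocity_fit(t, d)
```

Down-weighting large residuals, rather than discarding them outright, is what makes the fit robust to the outliers that survive the velocity-band filter.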
Figure 3.3. Residuals of robust linear fitting and ordinary linear fitting for Creighton Mine
The average velocity fit results indicate that the velocity within the near-mining rock mass was
influenced by large magnitude events. The average velocity in the period during the large
magnitude events was the lowest of all the time periods. A steady increase in the average velocity
was found from data sets before the occurrence of large magnitude events (Figure 3.4). As a
consequence of the large magnitude events, the velocity dropped from 6049 m/s during June 13th to June 26th to approximately 6017 m/s during the period from June 27th to July 10th. The large magnitude events caused fractures to develop and collapse, and wave propagation in the surrounding rock mass was attenuated by the developed fractures; this increased damage to the rock mass likely produced the velocity reduction observed after the large magnitude events (Bieniawski 1970). A few days later, the velocity recovered to its initial level of 6050 m/s, a likely result of the fractures in the rock mass closing due to the crack healing effect.
Figure 3.4. Velocity change affected by large magnitude events in Creighton Mine (Large Magnitude Events 1, 2, 3, and 4 marked)
Kidd Mine
Kidd Mine, situated near Sudbury, Ontario, Canada, was investigated to see how large magnitude events affected seismic velocities. Kidd Mine is the deepest copper and zinc mine in the world, and there was a high level of seismicity due to the stresses experienced at such depth (Guha 2000). The three-dimensional seismic network at Kidd Mine contains 23 sensors.
Two large magnitude events occurred in 2009. The earlier one occurred on January 6th and the
other occurred on June 15th. The same method used for Creighton Mine was applied in analyzing
the velocity change of wave propagation affected by large magnitude events in the rock mass.
Since the origin times of the large magnitude events are five months apart, the velocity changes around each large magnitude event were analyzed individually.
Table 3.2. Times, Locations, and Magnitude of Large Magnitude Events in Kidd Mine
Large Magnitude Events Date Time North (m) East (m) Depth (m) Magnitude
1 January, 6th 2009 4:40 AM 65733 65686 1150 3.8
2 June, 15th 2009 7:01 PM 65861 65737 1035 3.1
Properties of the two large magnitude events are listed in Table 3.2. The microseismicity data were divided into two-week groups, and the microseismicity within each group was analyzed and compared with the other time periods.
For the velocity fitting of the microseismicity from Kidd Mine, it was observed that the robust
fitting was superior to the ordinary linear fitting for these data sets. As shown in Figure 3.5, the
residual of the robust fitting is smaller and fluctuates less than that of the ordinary fitting.
Figure 3.5. Residuals of robust linear fitting and ordinary linear fitting for Kidd Mine
Regarding the large magnitude event of January 6th, 2009, the average velocity during January 5th to January 18th, 2009 dropped significantly in comparison to the two weeks that followed (Figure 3.6). As expected, the velocity decreased significantly after the large magnitude event and then returned to the initial level. It is postulated that fractures actively developed around the large magnitude event and that mining-induced microseismicity weakened the wave propagation. A phenomenon also observed at Creighton Mine appeared here: after first recovering, the average velocity decreased again, likely due to the loss of structural stability from fracturing of the rock mass, although it remained higher than the lowest velocity previously recorded.
Figure 3.7. Velocity change affected by the second large magnitude event in Kidd Mine (Large Magnitude Event 2 marked)
Some differences appeared between the velocity changes associated with these two large magnitude events in Kidd Mine. For the first large magnitude event, the velocity decreased to 5923 m/s after the event period and then returned to the background level of 6100 m/s. The magnitude of the velocity change for the first large magnitude event was smaller than that for the second. The velocity change before the first large magnitude event was within 100 m/s, whereas the velocity fluctuations after it were about 130 m/s. A possible explanation is that the structure of the rock mass is less stable after the fractures induced by large magnitude events.
3.6 Summary and discussion
Velocity changes have been examined to show the response to large magnitude events in
underground mines. The robust linear fitting method is effective for computing the average velocity from all raypaths because each observation is weighted by a function that reduces the influence of large residuals. During the periods including large magnitude events, the average velocity reduction for P waves is 0.6% for Creighton Mine and 1.8% for Kidd Mine. Similar results from crustal earthquake studies report path-average velocity changes of 1.5% for P waves. Analyses of S waves from mining-induced seismicity are needed in future studies for comparison with the coseismic velocity decreases of as much as 3.5% reported for S waves in crustal earthquakes.
A significant decrease for S-waves indicates shear failures in the rock masses, providing insight
into potential fault-slip risks.
The primary advantage of continuous velocity measurements is that the mining-induced
seismicity can be used to consistently assess the potential seismic risk in rock mass covered by
seismic networks in underground mines. Once velocity anomalies are observed, it can be inferred that portions of the region are experiencing significant stress change, including stress concentration and stress relaxation. One limitation of this analysis is the use of seismic events that are distant from the regions of interest in underground mines. The raypaths from these events carry information about the stress distribution along their entire paths, but part of that information cannot be used because some portions of the raypaths lie outside the regions of interest. A trade-off must therefore be made in deciding whether to include such events to obtain optimal results.
The velocity tends to return to its original level over a recovery period after the occurrence of large magnitude events. The velocity changes suggest that velocity is reduced because
the fractures, caused by large magnitude events and microseismicity, slow the wave propagation
in the rock mass. The fact that the velocity eventually returns to the previous state implies a
recovering effect due to the closure of the cracks (crack healing effect). An increase in velocity
can be attributed to the fact that bulk modulus increases, possibly arising from the closure of
cracks. It is noted that the velocity fluctuates by a significant amount after large magnitude events
because the induced fractures reduce the stability in a rock mass.
4 Imaging of Temporal Stress Redistribution due to Triggered Seismicity at
a Deep Nickel Mine
4.1 Abstract
Imaging results from passive seismic tomography surveys in a deep nickel mine showed that
major events (moment magnitude > 1.0) are associated with the state of stress in the surrounding
rock mass. Travel time picks from monitored seismicity at the Creighton Mine in Sudbury,
Ontario, Canada between June 22nd and July 24th, 2011 were used to generate tomograms. During
this time period, four major events were observed, together with 13630 microseismic events.
Two different grid spacings were designed for double-difference tomographic inversion. A large-
scale velocity model was used to examine the general trend of velocity distribution before and
after the major events. A small-scale velocity model with finer grid spacings gave rise to a higher
resolution result. By comparing the results obtained from tomographic inversion on consistent time
frames, it was found that the velocity in the region near major events started increasing before the
occurrence of major events. High-velocity anomalies tended to expand in the region adjacent to
the major events. In addition, the cumulative number of seismic events was computed to show
correlations with major events. It was shown that the seismic rate increased dramatically during
the time period when the major events occurred.
4.2 Introduction
Ground fall is the leading cause of injuries in underground mines. Ground control issues
accounted for over 40% of fatal accidents at underground metal mines in the USA between 2008
and 2012 (MSHA report, 2013). Reliable ground control is an essential consideration in mine
safety. In order to detect potential seismic hazard, it is necessary to assess the seismic potential
that might lead to failures in rock masses. Specifically, seismic hazard analyses require
identification of anomalously high stress associated with major events in mines. Previous work concluded that microseismicity is distributed along significant fractures at mine sites and that stress inversions from seismicity are consistent with in-situ stress measurements (Urbancic, Trifu et al. 1993).
Seismic tomography has been widely applied for imaging and mapping the Earth's sub-surface characteristics (Vanorio et al., 2005). Further, tomography has proven to be a highly useful tool
to examine stress distribution in a rock mass by generating images of its interior using mining-
induced seismicity (Young and Maxwell 1992; Westman, 2004). It allows for an examination of
stress distribution remotely and noninvasively. By using the underlying framework of earthquake analysis, numerous studies employing seismic arrays have been carried out to improve mining safety (Friedel et al., 1995; Friedel et al., 1996).
Lab tests were conducted to validate that microseismic events are important precursory
signatures of rock failure (Lockner, 1993). The explanation of this effect is that microseismicity is
triggered by the development of microcracks, which reduces the velocity of wave propagation in
the rock mass. The effects of microcracks on wave propagation were confirmed by rock physics
studies (Mavko et al., 2009). Additionally, pressures orthogonal to the direction of microcracks
tend to close the fractures, causing a rock healing effect to reinforce wave propagation.
Considerable laboratory work has also been devoted to showing how changes in the stress field affect the velocity of wave propagation in rocks. These studies conclude that the velocity at which a seismic wave travels
through a rock mass is related to the applied stresses (Mavko et al., 2009). By extending the
knowledge of lab studies to field scale, velocity tomograms can be studied to infer the stress
redistribution around an underground mine (Luxbacher, 2008). The stress redistribution related to
mining-induced seismicity was discussed to interpret the mechanisms of ground failures by
imaging the velocity structures of rock mass in underground mining (Westman, 2008).
The goal of this work was to investigate change of stress distribution associated with the
occurrence of major events in a hard-rock underground mine using double difference tomography.
Temporal changes in the velocity structure reflected the stress distribution evolving through time
and its response to major events. Specifically, significant tendency of stress change before the
major events could provide insights to forecasting major events and mitigating seismic hazards.
Due to the high seismicity rate and the good coverage of the seismic monitoring system in the mine, thousands of well-recorded microseismic events can be used to infer the stress distribution in the seismically active regions of the mine.
4.3 Data and methods
Microseismic events result from strain energy which has accumulated in the rock during
loading and are the sources used to generate the tomographic image of the velocity distribution
(Westman, 2003). Velocity tomography is conducted by using travel-time measurements of
mining-induced seismicity. Tomographic inversion can be used to relate travel time picks from
events to stations to the velocity distribution within the rock mass. Once velocity is estimated, the
stress distribution can be inferred from the velocity-stress relation: high-velocity bodies represent stress concentration, and low-velocity bodies appear in stress relief regions. The travel times from microseismic events to recording stations were recorded by a
microseismic monitoring system. Major events were recorded by a strong ground motion system
in Creighton Mine. We analyzed 188250 P-wave travel times from 13630 microseismic events, at depths ranging from approximately 1450 to 2700 m, that occurred from June 22nd to July 24th, 2011 at Creighton Mine. These data, along with the locations of the events and stations, were
required for tomographic inversion. Tomograms were used to compare the velocity change before
and after the period that included all major events (Table 4.1).
Table 4.1. Times, Locations, and Magnitude of Major Events
Major Events Date Time North (m) East (m) Depth (m) Magnitude
1 July, 6th 2011 8:41 AM 1927 1399 2332 3.1
2 July, 6th 2011 8:46 AM 1914 1478 2333 1.2
3 July, 6th 2011 8:47 AM 1879 1456 2284 1.3
4 July, 10th 2011 2:44 AM 1853 1385 2392 1.4
Velocity tomography is an effective analysis method to assess the change of stress distribution
over space and time. Tomograms are associated with velocity distribution, and are capable of
imaging the distribution of stress of the whole rock mass. It is well known that mechanical
properties of rocks are significantly influenced by the anisotropy, which is an important physical
characteristic of a rock mass. Rock properties including porosity, permeability, and elastic moduli
are sensitive to pressures that are applied in the rock (Mavko et al., 2009). The enhanced stress can
lead to an increase in bulk modulus, which is positively correlated with P-wave velocity. After
overcoming the dilatancy effect with increasing stress, the closure of preexisting fractures can increase the elastic moduli, which in turn raises the bulk modulus. The closure of fractures also offsets the elastic volume increase, giving rise to a higher bulk modulus (Holcomb 1981). This explains why the bulk modulus increases with higher stress. Assuming that
stress is the main factor leading to variations of velocity structure, a comparison between different
tomograms can provide a reference to the stress change over a spatial range or time. On the premise
that a rock mass can be viewed as grossly linear, isotropic, and elastic in rock physics, the relationship between velocity, elastic moduli, and density is given by

\[ V_P = \sqrt{\frac{K + \frac{4}{3}\mu}{\rho}} \qquad (4\text{-}1) \]
where \(V_P\) is the P-wave velocity, \(\rho\) is the density of the rock, \(K\) is the bulk modulus, and \(\mu\) is the shear modulus (Mavko et al., 2009).
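As a numerical check of equation (4-1), illustrative elastic constants for a hard rock (assumed values, not measured properties of the Creighton Mine rock mass) yield a P-wave velocity near the ~6000 m/s levels fitted in Chapter 3:

```python
import math

def p_wave_velocity(K, mu, rho):
    """Equation (4-1): V_P = sqrt((K + 4/3 * mu) / rho); SI units give m/s."""
    return math.sqrt((K + 4.0 * mu / 3.0) / rho)

# Assumed hard-rock constants: K = 60 GPa, mu = 30 GPa, rho = 2800 kg/m^3.
v_p = p_wave_velocity(K=60e9, mu=30e9, rho=2800.0)   # about 5976 m/s
```

Raising \(K\) or \(\mu\), as stress-driven crack closure does, increases \(V_P\), which is the velocity-stress link the tomography relies on.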
TomoDD, a double-difference seismic tomography code, was used to perform velocity
inversion (Zhang and Thurber, 2003). It requires the travel time difference for two similar raypaths.
The process of subtracting the travel times of nearby events can significantly improve the accuracy
of the location results and velocity distribution (Zhang and Thurber, 2003). Equation (4-2) shows the computation for the arrival time \(T_k^i\), expressed as a path integral,

\[ T_k^i = \tau_i + \int_i^k u \, ds \qquad (4\text{-}2) \]

where \(\tau_i\) is the origin time of event \(i\), \(k\) is the seismic sensor that records event \(i\), \(u\) is the slowness field, and \(ds\) is an element of path length. The event location is defined as the centroid of the hypocenters.
The double-difference residual, \(dr_k^{ij}\), is defined as:

\[ dr_k^{ij} = (t_k^i - t_k^j)^{obs} - (t_k^i - t_k^j)^{cal} \qquad (4\text{-}3) \]

where \((t_k^i - t_k^j)^{obs}\) is the difference in the observed travel times between events \(i\) and \(j\), and \((t_k^i - t_k^j)^{cal}\) is the difference in the calculated travel times between events \(i\) and \(j\). The tomographic inversion calculation develops a velocity model that minimizes the residual for all event-receiver pairs.
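The residual of equation (4-3) is a one-line computation for a sensor k and an event pair (i, j); the travel-time values below are illustrative:

```python
def dd_residual(t_i_obs, t_j_obs, t_i_cal, t_j_cal):
    """Equation (4-3): dr_k^ij = (t_k^i - t_k^j)_obs - (t_k^i - t_k^j)_cal."""
    return (t_i_obs - t_j_obs) - (t_i_cal - t_j_cal)

# Two nearby events recorded at the same sensor: path errors common to both
# raypaths cancel in the differences, leaving the relative term.
dr = dd_residual(t_i_obs=0.231, t_j_obs=0.224,
                 t_i_cal=0.229, t_j_cal=0.225)       # about 0.003 s
```

Because the common-path contribution cancels, the residual is dominated by the relative mislocation and local velocity error between the two nearby events, which is what makes the double-difference formulation more accurate than absolute travel times alone.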
A velocity model, which consists of grid points with assigned initial velocity values, is required
to perform velocity inversion. An initial background velocity was assigned to each node of the
velocity models. Two velocity models were created, one covering the entirety of the studied rock
mass and the other only covering the mining excavation region.
As can be seen in Figure 4.1, the nodes range beyond the limit of the mining area. This larger
velocity model covered all of the sensors that monitor seismicity. This ensured that the model
contained all relevant raypaths to provide sufficient information for velocity inversion.
Considering the ray distribution of the selected data, a grid with 24 × 24 m node spacing covering the active mining area was combined with a horizontal grid with 48 × 48 m node spacing covering the external region.
4.4 Results
The correlation between microseismic events and the major events (moment magnitude > 1.0)
was analyzed. The cumulative number of microseismic events was plotted from July 4th to July
15th (Figure 4.3). Approximately 2500 microseismic events were recorded on July 6th, 2011, the
day of Major Events 1, 2 and 3. The number of microseismic events was also considerable on July 10th, 2011, when Major Event 4 occurred, although fewer than on July 6th, 2011. As can be seen in Figure 4.3, major events were associated with an increased number of microseismic events.
Figure 4.3. Cumulative number of events
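A cumulative curve like Figure 4.3 can be reproduced from a catalog of event timestamps; the day values below are a toy catalog, not the actual Creighton Mine data:

```python
import numpy as np

def cumulative_counts(event_days, day_grid):
    """Number of events with timestamp <= each value in day_grid."""
    event_days = np.sort(np.asarray(event_days, dtype=float))
    return np.searchsorted(event_days, day_grid, side="right")

days = [4.2, 4.9, 6.1, 6.2, 6.3, 6.4, 10.1, 10.2, 12.5]   # toy catalog
grid = np.arange(4, 16)                                   # July 4th to 15th
counts = cumulative_counts(days, grid)
```

Steep steps in the resulting curve mark the bursts of microseismicity that, in the observed data, coincide with the major events on July 6th and July 10th.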
It is well known that microseismic events arise from the release of stored elastic energy in a
rock mass. The release of stored energy overcomes the force of crack resistance (a function of the
plastic behavior of the material at the crack tip and of its fracture characteristics) and creates the
surface of a new fracture (Griffith, 1921). Meanwhile, energy is consumed as the new crack surface develops, and the surplus energy turns into kinetic energy after crack propagation (Ba and Kazemi, 1990). Seismicity, including vibration or trembling, is caused by the rapid release of stored energy as kinetic energy. Consequently, microseismic events are associated with fracture development, and fractures tend to appear near them.
We compared the difference in velocity structures before and after the major events. Then the
observed difference in velocity structure could potentially be used for forecasting the occurrence
of major events. By investigating the velocity change associated with the major events, passive
seismic imaging could contribute to improving safety in underground mining.
A. Coarse Grid Results
Microseismic events were divided into four data sets of equal length (one week) to investigate the possible velocity change associated with the major events. As shown in Table 4.2, the time frames were arranged consistently, with the same duration and no overlap, before and after the major events.
Table 4.2. Time frames with major events occurrence
Time Frames Start End
First week June 22nd, 2011 June 29th, 2011
Second Week June 29th, 2011 July 6th, 2011
Major events period July 6th, 2011 July 10th, 2011
Third week July 10th, 2011 July 17th, 2011
Fourth week July 17th, 2011 July 24th, 2011
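Assigning each event to a Table 4.2 frame can be sketched with the boundary dates from the table; the example timestamp is Major Event 1's origin time, and the function name is illustrative:

```python
from bisect import bisect_right
from datetime import datetime

# Frame boundaries and labels taken directly from Table 4.2.
boundaries = [datetime(2011, 6, 22), datetime(2011, 6, 29),
              datetime(2011, 7, 6), datetime(2011, 7, 10),
              datetime(2011, 7, 17), datetime(2011, 7, 24)]
labels = ["first week", "second week", "major events period",
          "third week", "fourth week"]

def time_frame(event_time):
    """Return the Table 4.2 frame containing event_time, or None."""
    i = bisect_right(boundaries, event_time) - 1
    return labels[i] if 0 <= i < len(labels) else None

frame = time_frame(datetime(2011, 7, 6, 8, 41))   # Major Event 1
```

Events outside June 22nd to July 24th fall in no frame, matching the non-overlapping, gap-free partition of the table.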
The velocity distribution during each of these time frames was determined by performing
double difference tomographic inversion. The major events were located close to horizontal drifts with excavations, and a vertical cross section was taken through the central region of the drifts (Figure 4.4).
Figure 4.4. The side view (top) and plan view (bottom) of the major events in the study area. The main drifts are plotted and noted with depth. Red balls: major events.
The cross-sectional images of the velocity model generated from tomographic inversion of the
travel time data for each of the four time periods are shown in Figure 4.5. The velocity distribution
changed through the different time frames involved with the occurrence of the major events. There
were a total of two weeks of travel time measurements before the major events. These two weeks
remained similar to the general velocity distributions (Figure 4.5). A low-velocity anomaly is
horizontally exhibited in the images for both weeks along the 7530L (2295 m), which had a lower
density due to drift and stope excavations. The other low-velocity areas, originally scattered below
the ends on north and south of the 7530L, connected and developed to a large low-velocity region
in 7530L from the first week to the second week. The existence of fractures might impede the
wave propagation of this region. Also, a high-velocity region was growing below the south end
from 7810L (2380 m) to 7680L (2340 m), approaching the regions of major events. Note that the
shift of the low velocity to high velocity indicates stress increases over that area.
The velocity distribution over the two weeks before the major events showed that stress
increased above and below the excavation region, which, in contrast, was originally dominated by moderate velocities. The images from the first and second weeks showed good repeatability, providing a measure of confidence in the tomography results in the absence of major events.
All of the major events occurred on July 6th and July 10th. As shown in Figure 4.5, the velocity
distribution changed significantly after major events, especially on the regions adjacent to major
events.
Figure 4.5. Tomography studies on the large velocity model. Red triangles: sensors in the seismic network.
First, the low velocity band around the 7530L shown on tomograms of the first and second
week diminished to a small spot at the north end of 7530L in the third week. Second, the high-velocity anomaly originally located at the south end of the 7680L before the major events moved vertically upwards, appearing in the third week, and stretched north along the 7680L drift in the fourth week. The overall velocity around the drifts eventually reached a higher level than in the two weeks before the major events. It is inferred that stress concentrated near the locations of the coming major events prior to their occurrence. As Major Event 1 approached, stress concentration overcame stress relief in the principal region around Major Event 1.
The intensified stress concentration and relief in different areas are the main changes associated
with the occurrence of the major events. Observations imply that the major events are more likely
to occur within the highly-stressed area.
Beyond the pronounced changes in velocity structure associated with the major events, the structure continued to change from the third week to the fourth week. The results of the third and fourth weeks (Figure 4.5)
showed this tendency. The high-velocity areas of the second week diminished significantly and a
high-velocity anomaly was observed between 7680L and 7810L in the third week (Figure 4.5). A
high-velocity region below 7680L, which appeared in the third week, expanded on both the north
and lower zones from the third week to the fourth week. Most of the largest low-velocity region
below 7810L in the third week was replaced by the extending of neighboring moderate-velocity
body and high-velocity body in the fourth week. In addition, a large low-velocity area was present
above the south end of 7530L in the fourth week. By comparing the first week to the fourth week, we observed that the occurrence of major events was associated with an increase in velocity from 7530L to 7680L. It is inferred that highly-stressed areas eventually extended into the zones adjacent to 7680L and 7810L in the fourth week, while stress was relieved in the zones above the south end of 7530L. The high-velocity anomalies around the excavations suggested that stress concentration was enhanced during the third and fourth weeks.
B. Fine Grid Results
To evaluate the effect that major events had on the state of velocity in sufficient detail, we
performed tomographic inversion using a finer velocity grid. A small velocity model concentrating only on the area from 7530L to 7810L was developed using the same time frames as the coarse-grid analyses. The distance between neighboring grid nodes was 9.1 m in every
principal direction for the small-scale model. By using finer grid spacing, the tomographic images
had better resolution and included additional details.
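Generating such a regular node grid can be sketched as follows; only the 9.1 m spacing comes from the text, and the bounding-box coordinates are assumptions for illustration:

```python
import numpy as np

def build_grid(xmin, xmax, ymin, ymax, zmin, zmax, spacing=9.1):
    """Return an (N, 3) array of node coordinates on a regular 3-D grid."""
    xs = np.arange(xmin, xmax + spacing / 2, spacing)
    ys = np.arange(ymin, ymax + spacing / 2, spacing)
    zs = np.arange(zmin, zmax + spacing / 2, spacing)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    return np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])

# Hypothetical box around the 7530L-7810L region (mine coordinates, metres).
nodes = build_grid(1800.0, 1950.0, 1350.0, 1500.0, 2280.0, 2400.0)
```

Halving the node spacing roughly octuples the number of nodes in three dimensions, which is the resolution-versus-raypath-coverage trade-off behind using the fine grid only over the excavation region.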
The set of tomograms corresponding to the small-scale velocity model is shown in Figure 4.6, arranged in the same pattern as the results computed on the large-scale velocity model. As with the initial, coarser model, a similar velocity distribution around the mining area was maintained from the first week to the second week.
The region around the drifts was mainly governed by the growing high-velocity portion after
the occurrence of the major events. Also, the low-velocity zone around the 7810L drift of the
second week disappeared and was replaced by a large high-velocity region, causing an imbalanced
distribution of velocity. Another velocity change was observed at 7530L from the second week to the third week: the velocity in the formerly low-velocity area at the south end of 7530L increased, exceeding the average velocity level in the third week. The changes between these two weeks gave evidence that stress concentrated in the excavation region with the occurrence of the major events.
There was a substantial velocity distribution change from the third week to the fourth week, as
shown in the tomographic images (Figure 4.6). Regions around the mining area experienced a
significant increase in velocity from the third week to the fourth week. Some regions still
maintained low velocity, but most of the area around the drifts was dominated by high velocity.
The significant change from the third week to the fourth week suggested that stress redistribution
might continue over several weeks after major events. The additional noticeable change was that
stress concentrated at the north end of the region from 7530L to 7680L, where there was originally an area of low stress in the first week. Comparing the tomograms of the first and last weeks, we concluded that the overall stress in the last week was higher than in the first week.
The results of the small-scale velocity model showed a trend similar to that of the large-scale velocity model. The average velocity of the mining area increased substantially
from before the major events to after the major events, as shown in Figure 4.6. The principal drifts
in the mining area experienced the most dramatic change. It took multiple weeks of monitoring to
observe the new stability. During the third week after the occurrence of major events, a high-velocity area dominated near the major event locations, and low-velocity areas appeared north of the drifts, away from the major event locations. In the fourth week, the high-velocity region tended to move away from the center of the drifts, and low-velocity areas were distributed more extensively around the drifts.
The stress continued to maintain a high level near excavations after the major events. These findings confirm that highly stressed regions exist surrounding major events in rock bursts, and these regions experience significant stress redistribution before and after the occurrence of major events.
The regions around the mine drifts indicated a great contrast between low and high velocities
associated with the occurrence of major events and the active microseismicity. The change in
velocity structure validated the effect of redistributed stresses. For example, the switch from low
velocity to high velocity suggested stress concentration. An increase in fracture density of a region
decreased the velocity where the seismic waves traveled through. Consistencies on the velocity
distributions before the occurrence of major events proved the stability of stress distribution
without the influence of major events.
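The interpretation above, velocity increase as stress concentration and velocity decrease (e.g., from growing fracture density) as stress relief, can be sketched as a simple cell classification. The ±5% threshold and the velocity values are illustrative assumptions, not values from the study.

```python
import numpy as np

def classify_stress_change(v_before, v_after, threshold=0.05):
    """Label tomogram cells by fractional velocity change dv/v between surveys."""
    frac = (v_after - v_before) / v_before
    labels = np.full(frac.shape, "stable", dtype=object)
    labels[frac > threshold] = "concentration"   # velocity up -> stress concentration
    labels[frac < -threshold] = "relief"         # velocity down -> stress relief
    return labels

# Three example cells: strong increase, small change, strong decrease (m/s)
v_before = np.array([5600.0, 5600.0, 5600.0])
v_after = np.array([6000.0, 5650.0, 5200.0])
print(classify_stress_change(v_before, v_after))
```

Applied over a full tomogram grid, such a classification would map out the paired zones of concentration and relief that the text describes around the drifts.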
Stress concentration and stress relief usually occurred simultaneously and in close
proximity, and both were intensified by the active microseismicity and the major events. We
inferred that the occurrence of major events significantly altered the stress distribution.
While the overall stress distribution remained largely consistent, highly stressed regions
appeared near the major events, and after the events pronounced stress concentration appeared
in their vicinity. Stress relief was observed around the drifts, and low-stress areas were
monitored close to the drifts within two weeks after the major events. From these results we
determined that stress concentration preceded the occurrence of major events. However,
whether the major events were the main factor driving the growth of stress concentration
afterward still needs further investigation. One limitation of the analysis is that the time
frames used to generate tomograms may be somewhat arbitrary. A one-week time scale reliably
provides enough raypath coverage for double-difference tomographic surveys, but the differing
number of microseismic events in each data set could bias the surveys. To offset this bias,
we propose comparing results from data sets containing the same number of seismic events in
future studies. Another proposed solution is to use overlapping time frames, combined with an
equal number of seismic events in each data set, to show a more consistent change in the
stress distribution.
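A minimal sketch of the overlapping, equal-count windowing idea is shown below: data sets are formed so that each contains the same number of events, with consecutive windows overlapping. The event times and window parameters are hypothetical; in practice they would come from the microseismic catalog.

```python
def equal_count_windows(event_times, n_events, step):
    """Yield (start, end) time spans of `n_events` events, advancing by `step` events."""
    times = sorted(event_times)
    windows = []
    for i in range(0, len(times) - n_events + 1, step):
        chunk = times[i:i + n_events]
        windows.append((chunk[0], chunk[-1]))
    return windows

# Hypothetical event times in hours since the start of monitoring
times = [2, 5, 9, 14, 20, 27, 33, 40, 48, 55]
print(equal_count_windows(times, n_events=4, step=2))
```

Because each window holds the same number of events, every tomographic inversion sees comparable raypath coverage, while the overlap between consecutive windows smooths the apparent evolution of the stress field.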
We also noticed that the seismic rate increased significantly with the occurrence of major
events, but it cannot by itself be used to accurately predict them. Whereas a growing
seismic rate was observed preceding Major Events 1, 2, and 3, no significant seismic rate