fibers is limited to the capacity and selectivity of the extracting phase. In order to use SPME as a
sampling method in the field, the tip must be sealed, but storage time may be extended by
keeping the fiber cool [79]. The application of SPME as a sampling mechanism in underground
mines warrants further investigation to prove that a method can be developed using a GC-ECD
to achieve accurate and reproducible quantification of tracer gases considering the precision
inherent to the sampling method and the ease with which samples may be collected and stored.
4.3 Experimental Apparatus
A Shimadzu 2014 GC with an electron capture detector (ECD) was used to quantify
analytes during this study. Helium serves as the carrier gas with nitrogen as the make-up gas.
GC Solutions© serves as the software to display and analyze the chromatographic results. A
porous layer open tubular (PLOT) alumina chloride column is used for separation of tracer gases.
The column is a capillary column allowing for rapid analysis times. The column name is HP-
AL/S as supplied by Agilent; column specifications include length of 30 meters (m), inner
diameter of 0.250 millimeters (mm), and film thickness of 5 micrometers (µm). Glass liners
with 0.75 mm inner diameter are placed in the injection port to enhance sample recovery from
the SPME fiber and to ensure a high linear flow rate [80].
4.4 Fiber Selection
All SPME fibers contain one of three types of solid core: fused silica, Stableflex, and
metal. Fused silica cores are favored due to their ability to endure high heat and general
durability. Stableflex fibers are more flexible than the traditional fused silica fiber due to their
plastic coating. Metal fibers reduce extraneous peaks due to the lack of adhesive required to
attach the extracting phase to the fiber. The polymeric extracting phase is the element which
absorbs the sample. Each type of core is coated with an extracting phase based on the analytes
of interest. The extracting phase is selected based on adsorptive qualities, polarity, and capacity.
Polydimethylsiloxane (PDMS) was selected as the extracting phase for the analysis of
tracer gases employed in mine surveys. PDMS is a high viscosity, rubbery liquid which extracts
sample via absorption rather than adsorption [65]. PDMS was selected because it has a universal
nature with regard to the analytes it can collect. This universal nature is largely due to the non-
polar quality of PDMS, which also supplies this extracting phase with a longer life relative to
more polar phases. As a non-polar extracting phase, PDMS is sensitive to non-polar analytes
[81] such as SF6. As an absorbent extracting phase, PDMS achieves sample extraction by the
partitioning of the analytes into the phase, rather than adsorption onto the surface area of the
fiber [82]. This absorbent quality of the fiber increases the capacity of the fiber to collect a
larger volume of sample, but it also increases the amount of time required for sample extraction
and desorption.
The thickness of the PDMS phase was optimized via experimentation with three commercially available coating thicknesses: 7, 30, and 100 µm. Each thickness was used for the analysis of a 4 parts per million (ppm) by volume standard of sulfur hexafluoride (SF6) in nitrogen. This standard was created by first filling a Tedlar bag with ultrapure SF6. Then a gas-tight glass syringe was employed to inject 1.2 microliters (µL) of pure SF6 taken from the Tedlar bag into a 275 milliliter (mL) glass sampling bulb with a dual stopcock. The resultant concentration is approximately 4 ppm by volume. The effect of the coating thickness was observed at the onset of experimentation to select the most promising fiber with a method similar to the method employed for syringe injections of SF6 standards on the same system. The GC method used to observe the impact of extracting phase thicknesses when using SPME as a sampling technique is
described in Table 4-1. Methods were then adjusted slightly as necessary for each coating
thickness.
Table 4-1: GC Parameters for SF6 Syringe Injections
Split Injector Temperature 230 degrees Celsius (⁰C)
Split Ratio 20:1
Total Flow 26.1 milliliters per minute (mL/min)
Column Flow 1.15 mL/min
Linear Velocity 30 centimeters per second (cm/s)
Septum Purge Flow 2 mL/min
Detector Temperature 200⁰C
Initial Column Temperature 67⁰C hold for 2.75 min
Column Temperature Ramp Rate 40⁰C/min
Final Column Temperature 180⁰C hold for 0.5 min
Total Program Time 6.07 min
The first coating thickness observed was the 7 µm PDMS fiber. All results were
unsatisfactory with the thinnest coating. Separation of oxygen and SF6 was not achieved,
eliminating the ability to quantify peak area.
The second coating thickness observed was the 30 µm PDMS fiber. The results observed
with the 30 µm fiber were very similar to the results observed with the 7 µm fiber. Separation
between oxygen and SF6 was never achieved.
Successful chromatography results were achieved employing the 100 µm PDMS SPME
fiber. Increasing the thickness of the extracting phase increases the capacity of the fiber to
collect sample, making the fiber more sensitive [83]. SF6 was separated from oxygen for all applied methods, to varying degrees of success. An image of the successful separation of SF6 and oxygen is displayed in Figure 4-1. The method used was isothermal at 67⁰C with a linear
velocity of 24 cm/s and a split ratio of 20:1. The 100 µm coating was selected as the final fiber;
further optimization is discussed in the following section.
[Chromatogram: detector response, uV (×1,000), versus time (min); separated O2 and SF6 peaks]
Figure 4-1: Successful Separation of Oxygen and SF6 Using a 100 µm PDMS Fiber
4.5 Method Optimization
The degree of success of this GC method is determined by the degree of separation of SF6
from oxygen and the peak symmetry for the successful integration of peak areas. SPME
sampling methods will typically display peak tailing as a result of the relatively slow desorption
of analyte(s) from thick fibers [84].
The GC method was optimized to achieve a reliable chromatographic result when using
SPME. The parameters which were adjusted include: depth of the fiber within the injector port
during desorption; split versus splitless injections; linear velocity of the carrier gas; and the
column temperature. A 5 ppm by volume standard of SF6 in nitrogen was created in the same manner as previously discussed, altering the injection volume of SF6 to 1.25 µL, to be used
during method optimization.
The first parameter optimized was the depth of exposure of the SPME fiber within the
injector port during sample desorption. Sample release from a SPME fiber is achieved through
thermal desorption. In order to achieve successful desorption, a glass liner with an inner
diameter of 0.75 mm without glass wool was used. The fiber should be exposed in the middle of
the glass liner to optimize sample recovery [80]. The depth of exposure of the fiber within the
glass liner at the injector port was first optimized by utilizing the measurement tool on the side of
the Supelco SPME fiber holder. Chromatograms were obtained at depths of 3, 4, and 4.4 units,
with 4.4 being the maximum depth of exposure. The sensitivity increased with depth of
exposure, and, after observing the resultant chromatograms overlaid in Figure 4-2, the optimum
depth of exposure was 4.4 units as indicated by the Supelco SPME fiber holder.
The SPME fiber was then used to make injections of the SF6 standard at various injector
port temperatures ranging from 260-270⁰C. The impact of varying the injector port temperature
in this range is minimal. Increasing the injector port temperature results in a slight increase in
sensitivity of the GC-ECD to the analytes. As such, an injector port temperature of 270⁰C was
selected based upon the slight increase in sensitivity.
[Chromatogram overlay: detector response, uV (×10,000), versus time (min); SF6 and O2 peaks at three fiber exposure depths]
Figure 4-2: Overlay of Injections at Depths of 3 (black), 4 (pink), and 4.4 (blue) Units
The second set of parameters to be optimized was the benefit of split versus splitless
injections, as well as the optimization of a sampling time in the case of splitless injections.
When using a capillary column, smaller split ratios increase chromatographic sensitivity. With
the small volume of sample injected from a SPME fiber, it is imperative to enhance sensitivity
wherever possible. Initial testing detailed in the Fiber Selection portion of this paper displays
chromatographic results when employing a method with split injection. Previously referenced
chromatograms had a common problem of a high degree of tailing; for these reasons,
development of a splitless method was emphasized. Splitless injection means for a fixed period
of time, the split valve is closed, thus all sample vapors pass directly into the column, optimizing
sensitivity. Two parameters must be optimized for a splitless injection: 1) the sampling time,
which is the time delay prior to opening the split valve and: 2) the split ratio after the valve is
opened. The impact of opening the split valve 1.00, 0.75, 0.50, 0.25, and 0.10 minutes following
the injection was observed. Reducing this sampling time dramatically reduces the presence of
peak tailing. For these tests a split ratio of 30:1 was employed. 30:1 was selected for the fact
that proper chromatographic practices encourage the use of split ratios between 20:1 and 100:1
[7], as well as to encourage the flow to sweep efficiently through the injector port once the
splitter is engaged. Figure 4-3 displays the isolated view of the selected sampling time of 0.1
minutes where separation of oxygen and SF has been achieved, and tailing of SF has been
6 6
eliminated.
[Chromatogram: detector response, uV (×10,000), versus time (min); resolved O2 and SF6 peaks with minimal tailing]
Figure 4-3: Chromatogram when Employing a Sampling Time of 0.1 min
The linear velocity of the carrier gas was the next parameter to be optimized. When
using helium as a carrier gas, the optimum flow rate is 24 cm/s as derived from the Van Deemter
plot [7]. When using traditional syringe injections to introduce sample to a GC, the linear
velocity may be increased to decrease analysis time without largely impacting efficiency.
Increasing linear velocity from the optimum point when using SPME, however, degrades
analysis, making a linear velocity of 24 cm/s the best option. Figure 4-4 displays the overlay of a
chromatogram with a linear velocity of 24 cm/s and a linear velocity of 40 cm/s, with all other
parameters the same. The analytes clearly elute sooner when using a faster linear velocity;
however, separation of oxygen and SF6 is degraded to a degree that results are not reproducible.
[Chromatogram overlay: detector response, uV (×1,000), versus time (min); O2 and SF6 peaks at three isothermal column temperatures]
Figure 4-5: Overlay of Chromatograms from 50⁰C (Black), 67⁰C (Blue), and 100⁰C (Pink)
Temperature programming methods were then tested following the unsatisfactory results
with isothermal methods. Previous work with SPME indicates that temperature programming
with a colder column initially usually improves chromatographic results [85]. An initial column
temperature of 35⁰C was employed. The final column temperatures observed were 67⁰C, 80⁰C,
90⁰C, and 100⁰C. Increasing the final temperature decreases the time of elution of SF6, but sacrifices the degree of separation between the oxygen and SF6 peaks. The optimal final
temperature was 80⁰C.
The final method parameter optimized was the time to initiate the heating of the column
during the temperature program. The column was heated as rapidly as possible, so the ramp rate
is set to 100⁰C/min. This ramp was initiated at 0.05, 0.15, and 0.5 minutes. The temperature
ramp initiation time has a slight, consistent impact on the time of elution of SF . The longer
6
temperature ramp initiation times directly result in a later time of elution of analytes as compared
to shorter time intervals before the column temperature is increased. A time of 0.05 minutes was
selected for this parameter. A resultant chromatogram of a 5 ppm volumetric standard of SF6 in
caution must be exercised due to the risk of overloading the fiber. The equilibrium curve
developed will provide the safe range of exposure times to ensure reproducible results when
sampling with SPME. Equilibrium is achieved when additional sampling time does not permit
the collection of more sample [86], reaching a plateau in the detector response to the observed
concentration. The time of equilibrium can be reduced by kinetic reactions [87], so this paper
details the development of an equilibrium curve under static conditions, resulting in the longest
equilibrium time.
A 100 µm PDMS SPME fiber was used in the development of the equilibrium curve. A
standard of 5 ppm by volume SF6 in nitrogen was created in the same fashion as previously
discussed. The fiber was exposed to the standard at various increments of time, as timed by a
stopwatch. The fiber was moved directly from the standard to the injector port at the start of
each run and remained inside the injector port for the duration of each run. SPME fiber
assemblies designed for field work are gas tight; however, the assemblies used to execute the
detailed experiments were designed for laboratories. As such, the exposure of the SPME fiber assemblies to air was limited. The test was repeated three times for each exposure,
allowing for the calculation of percent relative standard deviation (%RSD). Resultant %RSD
values below 10% indicate high precision. Table 4 displays the numerical results of the
equilibrium curve, while Figure 4-7 depicts the developed equilibrium curve. A third order
polynomial trend line fits the data with a satisfactory R2 value of 0.98 as displayed in Figure 4-7.
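The precision and curve-fitting computations described above can be sketched in a few lines of Python; the triplicate peak areas and time/area pairs below are illustrative stand-ins, since the thesis tables and figure are not reproduced here.

    import numpy as np

    def percent_rsd(peak_areas):
        # Percent relative standard deviation of replicate peak areas
        areas = np.asarray(peak_areas, dtype=float)
        return areas.std(ddof=1) / areas.mean() * 100.0

    print(round(percent_rsd([5.86e6, 5.85e6, 6.08e6]), 2))  # ~2.2, below the 10% threshold

    # Equilibrium curve: average peak area versus exposure time (illustrative data)
    t = np.array([5.0, 10.0, 20.0, 30.0, 45.0])             # exposure times, s
    area = np.array([1.1e6, 2.0e6, 3.2e6, 3.9e6, 4.2e6])    # average peak areas
    fit = np.poly1d(np.polyfit(t, area, deg=3))             # third-order polynomial, as in the text
    r2 = 1 - np.sum((area - fit(t))**2) / np.sum((area - area.mean())**2)
    print(round(r2, 3))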
4.7 Calibration Curve
A calibration curve is essential for the quantification of results when using a GC. The
calibration curve described contains four standard concentrations of SF6 in nitrogen. The similar
transfer efficiencies of SPME injections and syringe injections allow for the development of a
calibration curve with standard syringe injection [88], but a calibration curve derived from
SPME injections was developed to eliminate the need to correct for the slight difference in
transfer efficiencies. The four concentrations studied are 0.36, 0.73, 5.5, and 10.0 ppm by
volume. These concentrations were selected due to the readily measurable volume of SF6
required to create the standard. These standards were developed in the same manner as
previously discussed. A sample calculation is provided to detail how the volumetric
concentration is derived from a known injection volume in Equation 4.1.
C_SF6 (ppm) = [V_SF6 (µL) / V_bulb (µL)] × 10^6 = (0.1 µL / 275,000 µL) × 10^6 ≈ 0.36 ppm Equation 4.1
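As a worked illustration of Equation 4.1, the short Python sketch below computes the volumetric concentration of a standard from the injected volume of pure SF6; the 275 mL bulb volume comes from the text, and the function name is ours.

    def standard_ppm(v_injected_ul, v_bulb_ml=275.0):
        # Volumetric concentration (ppm) from microliters of pure tracer
        # injected into a glass bulb of v_bulb_ml milliliters
        v_bulb_ul = v_bulb_ml * 1000.0           # bulb volume in microliters
        return v_injected_ul / v_bulb_ul * 1e6   # volume fraction expressed in ppm

    for v in (0.1, 0.2, 1.50, 2.75):             # injection volumes used for the standards
        print(f"{v:4.2f} uL -> {standard_ppm(v):5.2f} ppm")
    # ~0.36, 0.73, 5.45, and 10.0 ppm, matching the reported concentrations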
Each standard was analyzed three times. All injections are executed manually. A %RSD
value of less than 10% indicates satisfactory results. Four standards of SF6 in nitrogen were created by injecting 0.1, 0.2, 1.50, and 2.75 µL SF6 into the glass bulb to develop standards at concentrations of 0.36, 0.73, 5.5, and 10.0 ppm, respectively. Following the third injection of each standard, the %RSD was calculated. The resultant %RSD values were 7.49% for 0.36 ppm, 3.83% for 0.73 ppm, 1.557% for 5.5 ppm, and 1.111% for 10.0 ppm. Higher %RSD
values are encountered for lower concentrations; for this reason, standards containing 0.1 µL and 0.2 µL of SF6 were used to increase confidence in the lower portion of the calibration curve. A linear trend line fits the data. An R2 value of 0.9994 indicates a high degree of confidence in the precision of the curve.
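A minimal sketch of the calibration fit and its use for quantification; the mean peak areas below are placeholders, as the thesis reports only the concentrations and the R2 value.

    import numpy as np

    conc = np.array([0.36, 0.73, 5.5, 10.0])        # standard concentrations, ppm
    area = np.array([0.4e6, 0.8e6, 6.0e6, 11.0e6])  # placeholder mean peak areas

    slope, intercept = np.polyfit(conc, area, deg=1)  # linear trend line
    pred = slope * conc + intercept
    r2 = 1 - np.sum((area - pred)**2) / np.sum((area - area.mean())**2)

    unknown_area = 3.0e6                              # measured peak area of an unknown
    print((unknown_area - intercept) / slope, "ppm; R2 =", round(r2, 4))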
4.8 Conclusions
SPME has inherent characteristics making it a desirable option for sampling in the
mining industry. Primarily, SPME is a precise method in the hands of a variety of users for rapid
sampling of analytes present in low concentrations, and is highly portable in a rugged
environment. When applying SPME to sampling during underground mine ventilation surveys,
sensitivity will be increased by using a SPME fiber with a PDMS bonding phase. The thickest
coating of 100 µm of extracting phase is the optimum thickness to allow for sufficient recovery
of the non-polar analytes without being overloaded with oxygen from air. Recovery of sample
during analysis with a GC-ECD system can be optimized with the use of a splitless method with
a short sampling time, an initially cool column with a rapid ramp of temperature, and a moderate
linear velocity of the carrier gas.
When using SPME for quantification of analytes, two items are necessary to properly
analyze samples: an equilibrium curve and a calibration curve. The equilibrium curve indicates the time period where sampling can be optimized via the time of exposure of the fiber to the sample to provide the most reproducible results. A proper equilibrium curve will be fit with a third
order polynomial equation to allow for the initial linear relationship between time of exposure of
the fiber to the sample and peak area, and then to indicate the time where the fiber is saturated.
In practice, the SPME fiber should be exposed to the sample for the same amount of time to
5 Solid Phase Microextraction Sampling Under Turbulent Conditions and for
the Simultaneous Sampling of Multiple Trace Analytes
5.1 Abstract
Solid phase microextraction (SPME) is a solvent free method of sample extraction.
SPME is an appealing method for sample collection because it is designed for the sampling of
trace level analytes with short sampling times in a variety of environments. Additionally, SPME
can be used to directly deliver sample to a gas chromatograph (GC) for analysis by means of
thermal desorption with minimal training. An optimized method for SPME sampling of SF6
under static conditions was developed in previous work. In this paper, that work is expanded to
investigate turbulent conditions under varying flow rates. Additionally, the competence of
SPME sampling for simultaneous analysis of multiple trace analytes is evaluated under static
conditions. This work is discussed in the context of underground mine ventilation surveys, but is
applicable to any industry in which ventilation circuits must be evaluated.
5.2 Introduction
The typical aims of occupational ventilation systems are to provide a comfortable
working atmosphere, and to exhaust airborne contaminants when necessary. Ventilation surveys
are applied to a wide variety of industries to assess exposure and design of ventilation circuits, to
assess pathways for toxins, and to characterize existing ventilation circuits to design
improvements. One common method of conducting ventilation surveys is with the aid of tracer
gases which can be applied to a variety of ventilation circuits, in terms of complexity and scale.
The circuit observed may be as large and unrestricted as open land with irregular terrain to
determine the expected impact of an accidental release of harmful gases [89], or limited to a
circuit as small as a university teaching lab where air movement is minimal [90]. A popular
tracer gas is sulfur hexafluoride (SF6), which is used because it is inert, non-toxic, non-corrosive,
and detectable at low concentrations.
SF6 has been used in many applications for ventilation surveys. For example, it was
employed to determine failures in a ventilation system in a garment manufacturing facility which
allowed for one individual infected with tuberculosis to be detected [91]. Additionally, it was
used to evaluate engineering controls intended to reduce worker exposure to metalworking
fluids. Finally, SF6 was used to determine the flow rate and capture efficiency of the ventilation circuit [92]. These examples give only a cursory review of the many applications of tracer gases
in consideration of occupational health and safety; they are also considerably useful when
evaluating underground mine ventilation systems.
Mine ventilation surveys are conducted in underground mines to gain knowledge about
the existing ventilation system and to provide information in the case of emergencies [11]. The
aim of a survey may be to measure air quantity, velocity, pressure, and/or temperature. The
measurement of air quantity is typically obtained through air velocity surveys and cross-sectional
area measurements. Then, Equation 5.1 can be applied to compute volumetric airflow rates, Q,
typically expressed in units of m3/s (or ft3/min). V is velocity in m/s (or ft/min), and A is the
cross-sectional area in m2 (or ft2). However, in situations where the cross-sectional area or
velocity cannot be directly measured (e.g., inaccessible regions), air quantity must be measured
using tracer gases – and SF6 is indeed the industry standard [2]. However, a major challenge of SF6 surveys, in both mining and other applications, is sampling. A durable, low cost air sampling
method that requires minimal training is needed.
Q = V A Equation 5.1
5.2.1 SPME as Viable Sampling Method
Solid phase microextraction (SPME) is a sampling method designed to facilitate rapid,
on-site sample gas extraction by sorbing analytes onto a polymeric extracting phase [79]. A
SPME fiber assembly consists of a silica fiber core coated in a polymeric extracting phase
encased in a septum piercing needle. A plunger allows for the fiber to be easily expelled from
and withdrawn back into the needle, which serves as a means to access samples and to protect
the fiber. SPME has been extensively applied in physicochemical, environmental, food, flavor,
fragrance, pheromone, pharmaceutical, clinical, and forensic applications [93]. SPME
technologies have also been developed as an alternative technique to evaluate worker exposure
to benzene, toluene, ethylbenzene, and xylene (BTEX). In a recent BTEX study, SPME was able
to fulfill new requirements for both sample detection limits and sampling times better than
traditional methods [94]. As a passive sampling method that requires little to no sample
preparation, SPME has also been applied in on-site exposure assessment situations to achieve
precise analysis where skilled GC operators were not always available [95].
Previous works with SPME have identified it as a robust sampling method. A method has
been developed to optimize sensitivity of the SPME fiber during sampling extraction and GC-
ECD analysis [96]. The direct application of SPME to many types of ventilation systems,
particularly mine ventilation systems, is largely dependent on whether the method can be used to
precisely quantify tracer gas in turbulent flow.
Sample extraction by SPME is achieved by employing the septum piercing needle to
access the sample, placing the needle and fiber directly inside of the sample matrix. SPME can
be applied to the sampling of liquid, gas, and headspace matrices. The plunger is then used to
expel the fiber from the needle to expose the fiber directly to the sample. The fiber is exposed to
the sample for a predetermined amount of time to allow for equilibrium of analyte to be achieved
within the fiber, the sample matrix, and the headspace of the container in which the sample is
held [79]. Sample extraction with SPME is influenced by agitation of the sample matrix,
temperature, pH, and salt. While sampling in typical mine atmospheres, pH and salt are not
significant factors. While temperature can certainly vary in mines, the temperatures for the
discussed laboratory experiments were controlled to reduce factors impacting SPME sample
sorption. Air sampling in turbulent conditions has a parallel impact on sample extraction with
SPME as stirring or agitation in a liquid sample. Stirring encourages shorter equilibration times
because diffusion of analytes toward the fiber is enhanced [97].
In addition to determining the feasibility of applying SPME to sampling in turbulent
conditions, the ability of SPME to extract multiple analytes simultaneously must be observed.
This is largely due to the fact that while tracer gas analysis is a useful method for assessment of
ventilation circuits at many scales, the method can be a relatively slow means of assessment.
The speed of the survey is hindered by analysis times and tracer gas background reduction time
in the case of multiple surveys. The use of multiple tracer gases, however, might allow for a
more flexible ventilation survey. Perfluorocarbon tracers, specifically
perfluoromethylcyclohexane (PMCH), may be analyzed with SF6 with minimal alterations to the
GC-ECD protocol [96]. The ability to apply SPME to the simultaneous measurement of the two
tracers has not been investigated to date, and is expected to be heavily dependent on the
sensitivity of the PDMS polymeric extracting phase to PMCH.
5.3 Testing Objectives
In this study, the practicality of using SPME to sample directly from a ventilation system
was considered based on the ability of SPME to extract sample from a turbulent air stream.
Experiments were conducted in order to elucidate the impact of turbulence on SPME sample
recovery and required sampling time. Additionally, SPME experiments were conducted to
determine the efficacy of using SPME to simultaneously sample SF6 and PMCH under static
conditions.
5.4 Experimental Methods
5.4.1 SPME Sampling Under Turbulent Conditions
The first portion of testing was performed to determine how turbulence affects SPME
sample recovery by comparing samples collected under turbulent conditions to those collected
under static conditions. A turbulence vessel was constructed as depicted in Figure 5-1. A fully
developed turbulent atmosphere will allow for flow to be mixed uniformly in the inlet and the
outlet of the apparatus [98], thus preventing inconclusive sampling as a result of poor mixing or
layering at the outlet. An air pump was connected to push air through the turbulence vessel in a
blowing-type system. A variable area flow meter controlled the amount of air pushed into the
system. A separate fan was placed inside the turbulence vessel to encourage the thorough
mixing of gases within the chamber. The tracer gas used during testing was sulfur hexafluoride
(SF6), which was ultrapure and contained inside a compressed gas cylinder. A mass flow controller was connected in series with the gas cylinder to control the exact amount of SF6 released. While copper tubing was used to connect the cylinder to the mass flow controller, all other connections were made with flexible plastic tubing. SF6 joined the flow of air into the
vessel at the inlet, and a sampling port was located at the outlet of the vessel. The sampling port
was capped with a septum, which allowed for sample extraction from the turbulent flow stream
with either SPME fibers or via evacuated containers.
Figure 5-1: Turbulence Vessel Configuration
The Reynolds number was calculated first to indicate if turbulent conditions existed in
the inlet and the outlet of the vessel. Before this calculation, however, the actual air density was computed [1]. This was achieved by using a sling psychrometer to measure the wet and dry bulb temperatures, t_w and t_d, respectively, in the room, along with the barometric pressure, P. Unless otherwise specified, all calculations employ temperature units of degrees Celsius and pressure units of Pascals (Pa). The wet saturation vapor pressure, e_sw, and the latent heat of evaporation, L_w, were calculated using the wet and dry bulb temperatures and barometric pressure. The equations for wet saturation vapor pressure and the latent heat of evaporation are given in Equations 5.2 and 5.3, respectively.

e_sw = 610.6 exp[17.27 t_w / (237.3 + t_w)] (Pa) Equation 5.2

L_w = (2502.5 − 2.386 t_w) × 1000 (J/kg) Equation 5.3
Determination of the wet saturation vapor pressure and the latent heat of evaporation allows for calculation of the specific humidity, X_s, which can in turn be used to evaluate the moisture content, X. Equation 5.4 is used to calculate specific humidity, and Equation 5.5 is used to calculate moisture content.
X_s = 0.622 e_sw / (P − e_sw) (kg/kg dry air) Equation 5.4

X = [L_w X_s − 1005 (t_d − t_w)] / [L_w + 1884 (t_d − t_w)] (kg/kg dry air) Equation 5.5
Knowing the barometric pressure and the moisture content, the actual vapor pressure, e, was then calculated (Equation 5.6) – and ultimately, the actual air density, ρ, could be calculated (Equation 5.7). The actual air density allows for a more accurate evaluation of the Reynolds number.

e = P X / (0.622 + X) (Pa) Equation 5.6

ρ = (P − 0.378 e) / [287.04 (t_d + 273.15)] (kg air/m3) Equation 5.7
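A sketch of the moist-air density calculation in Equations 5.2 through 5.7, following the standard psychrometric relations of [1]; the wet bulb, dry bulb, and pressure inputs below are hypothetical.

    import math

    def actual_air_density(t_w, t_d, P):
        # Moist-air density (kg/m3) from wet/dry bulb temperatures (C) and pressure (Pa)
        e_sw = 610.6 * math.exp(17.27 * t_w / (237.3 + t_w))   # Equation 5.2, Pa
        L_w = (2502.5 - 2.386 * t_w) * 1000.0                  # Equation 5.3, J/kg
        X_s = 0.622 * e_sw / (P - e_sw)                        # Equation 5.4
        X = (L_w * X_s - 1005.0 * (t_d - t_w)) / (L_w + 1884.0 * (t_d - t_w))  # Equation 5.5
        e = P * X / (0.622 + X)                                # Equation 5.6, Pa
        return (P - 0.378 * e) / (287.04 * (t_d + 273.15))     # Equation 5.7, kg/m3

    print(round(actual_air_density(16.0, 22.0, 101325.0), 2))  # ~1.19 kg/m3 (hypothetical inputs)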
A Reynolds number greater than 4,000 indicates turbulent flow, and, in turn, the thorough
mixing of gases [99]. The calculation to determine Reynolds number is given in Equation 5.8, where ρ is the density of air in units of kg/m3, v is the velocity based on the cross-sectional area of the airway in units of m/s, D_h is the hydraulic diameter (in the case of circular pipes, the actual diameter) in m, and µ is the dynamic viscosity in kg/(m·s). The velocity of air is calculated using the cross-sectional area of the outlet and the volumetric flow rate indicated by the flow meter (i.e., via manipulation of Equation 5.1). With a calculated air density of 1.21 kg/m3, the minimum flow of air necessary to achieve a turbulent atmosphere within the outlet is 44 L/min.

Re = ρ v D_h / µ Equation 5.8
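The Reynolds check can be sketched as follows for a circular outlet; the outlet diameter and the dynamic viscosity of air are assumed values, chosen here so that Re reaches 4,000 near the 44 L/min figure quoted above.

    import math

    RHO = 1.21      # actual air density from the psychrometric calculation, kg/m3
    MU = 1.81e-5    # dynamic viscosity of air, kg/(m*s) (assumed)
    D = 0.0156      # outlet inner diameter, m (assumed)

    def reynolds(q_lpm):
        # Reynolds number in the outlet for a volumetric flow given in L/min
        q = q_lpm / 1000.0 / 60.0        # L/min to m3/s
        v = q / (math.pi * D**2 / 4.0)   # velocity from Q = V*A (Equation 5.1)
        return RHO * v * D / MU          # Equation 5.8

    q_min = 4000.0 * MU * math.pi * D / (4.0 * RHO) * 60000.0  # L/min giving Re = 4,000
    print(round(reynolds(44.0)), round(q_min, 1))              # ~4000 and ~44.0 L/min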
The mass flow controller releases SF6 in units of standard cubic centimeters per minute. In order to determine the expected concentration of SF6 at the outlet of the turbulence vessel, a
series of calculations were performed. The mass flow controller is designed to release gas at a
standardized temperature and pressure, and is calibrated as such; therefore, the actual flow rate of
SF6 being released must be determined using the ideal gas law [100]. The ideal gas law is applied as shown in Equation 5.9 to find the volume of SF6 in the laboratory (V_2) from the known volume of SF6 in the mass flow controller (V_1). As indicated in the user manual of the mass flow controller, the pressure and temperature values of SF6 within the controller are known to be 14.696 psia and 25⁰C, respectively [101]; the temperature and pressure of air were measured in the laboratory. Once the calculation for the volume of SF6 is complete, the actual flow of SF6 from the mass flow controller may be determined. P values in Equation 5.9 represent pressures in units of standard atmosphere (atm), V is volume in liters (L), and T is temperature in Kelvin (K); when observing the difference in volume of gas between two states, the ideal gas law can be reduced to eliminate the number of moles of gas, n, and the gas constant, R, since these quantities remain constant. Once the value V_2, and subsequently the actual volumetric flow rate, is determined, the concentration of SF6 in the outlet can be calculated using Equation 5.10.
P_1 V_1 / T_1 = P_2 V_2 / T_2 Equation 5.9

C_SF6 (ppm) = [q_SF6 / (Q_air + q_SF6)] × 10^6 Equation 5.10
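The correction and dilution steps of Equations 5.9 and 5.10 can be sketched as below, assuming Equation 5.10 takes the simple dilution form of tracer flow over total flow; the controller calibration state comes from the text, while the laboratory conditions and flow setpoints are hypothetical.

    def actual_tracer_flow(q_sccm, p_lab_atm, t_lab_c):
        # Correct a standard-condition flow (sccm) to laboratory conditions using
        # the reduced ideal gas law, Equation 5.9: P1*V1/T1 = P2*V2/T2
        p1, t1 = 1.0, 25.0 + 273.15      # calibration state: 14.696 psia (1 atm), 25 C
        return q_sccm * (p1 / p_lab_atm) * ((t_lab_c + 273.15) / t1)  # cc/min in the lab

    def outlet_ppm(q_tracer_ccm, q_air_lpm):
        # Expected SF6 concentration at the outlet, Equation 5.10 (ppm by volume)
        q_tracer_lpm = q_tracer_ccm / 1000.0
        return q_tracer_lpm / (q_air_lpm + q_tracer_lpm) * 1e6

    q_sf6 = actual_tracer_flow(0.5, p_lab_atm=0.95, t_lab_c=22.0)  # hypothetical setpoint
    print(round(outlet_ppm(q_sf6, q_air_lpm=50.0), 1), "ppm")      # ~10.4 ppm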
The above series of calculations provides a means to determine the concentration of SF6 present within the sampling port of the turbulence vessel. The results of these equations were continually reevaluated with changes in air density. The amount of SF6 released into the vessel was varied to alter the concentration in the outlet. The flow of air pumped into the vessel could also be varied to this end, as well as to control the Reynolds number. Thus, multiple scenarios could be developed in the sampling port, which had the same concentration of SF6 with varying
Reynolds numbers.
5.4.2 Simultaneous Measurement of Multiple Tracer Gases under Static Conditions Using
SPME
The second portion of testing required a much different apparatus set up due to the nature
of the experiments. The sampling of multiple tracer gases was performed under static
conditions. A standard of SF6 and PMCH was mixed in a 275 mL glass bulb with dual stopcocks. The glass bulb was flushed and then filled with ultrapure nitrogen. An injection of pure SF6 gas was then made into the glass bulb. Pure PMCH in the vapor state was subsequently injected into the same glass bulb. The PMCH was taken from a vial which had previously been evacuated of atmosphere and then injected with a mass of liquid PMCH. The liquid PMCH was allowed to vaporize, filling the headspace of the formerly evacuated container with pure PMCH vapor. In order to determine the ability of SPME to sample multiple tracer gases simultaneously, 15 µL each of SF6 and PMCH were injected into the glass bulb. This protocol ensured the presence of the same volumetric concentration of SF6 and PMCH in the
standard, which allowed for an evaluation of the sensitivity of the PDMS extracting phase of the
SPME fiber to both tracer gases, simultaneously.
5.5 Results
5.5.1 Turbulent Conditions
The impact of using a 100 µm PDMS SPME fiber to extract sample in a turbulent
atmosphere was observed first by comparing equilibrium curves derived under various flow
rates. An equilibrium curve was developed to determine the optimum sample time, or the
minimum time for which the fiber has extracted the largest sample volume it is capable of
collecting [102]. The equilibrium curve was created by exposing a SPME fiber to a scenario of
SF6 in air for various, predetermined amounts of time, and then processing the samples through a
Shimadzu 2014 GC-ECD. The exposure times of the SPME fiber to the standard were then
compared to the resultant peak areas to create the equilibrium curve [67]. The impact of
turbulence on sample extraction when using a SPME fiber as the sampling mechanism was
observed by comparing equilibrium curves developed under varying conditions.
Three scenarios were created to derive three equilibrium curves for comparison: Scenario
1 had a Reynolds number of 4,596, Scenario 2 had a Reynolds number of 6,434 and Scenario 3
had a Reynolds number of 6,894. The scenarios were developed in the turbulence vessel with
steady flow rates of SF6 and air. The chromatographic results for Scenarios 1, 2, and 3 are displayed in Tables 5-1, 5-2, and 5-3, respectively. The flow rates of SF6 and air entering the vessel were altered to induce varying Reynolds numbers while maintaining the same volumetric concentration of SF6. Table 5-4 displays the flow rates of SF6 and air entering the system, along with resultant Reynolds numbers, to achieve a volumetric concentration range of 5 through 19 ppm of SF6 in air. Table 5-4 also includes a static scenario developed in earlier works [96].
Each equilibrium curve contains fifteen points because the employed scenario was sampled three
times for each sampling time. The average value was plotted. The scenario was sampled three
times to allow for the calculation of a standard deviation, and, in turn, a percent relative standard
deviation (%RSD) to indicate the precision of the results. The %RSD can be calculated as
shown in Equation 5.11, %RSD = (standard deviation / average peak area) × 100, and %RSD values less than 10% indicated precise results. The resultant chromatograms were interpreted using GC Solutions software (Shimadzu). All samples were taken from the turbulent air stream via the sampling port by the SPME fiber. A function of best fit was generated for each equilibrium curve, and the function which best fit the data was determined to be a third-order polynomial. An overlay of the equilibrium curves at the aforementioned flow rates is displayed in Figure 5-2.
[Equilibrium curves: average peak area versus time of exposure (s), 0-60 s, for Scenarios 1, 2, and 3]
Figure 5-2: Overlay of Equilibrium Curves from Varying Scenarios
5.5.2 Simultaneous Measurement of Multiple Analytes
The ability of the SPME fiber to simultaneously sample multiple analytes was observed
following the evaluation of the feasible application of SPME fibers to sample extraction in
turbulent conditions. A 100 µm PDMS SPME fiber was used to sample the standard directly
from the glass sampling bulb under static conditions. The SPME fiber delivered the extracted
sample directly to the GC-ECD for analysis. The standard was analyzed three times to allow for
the calculation of %RSD, as previously discussed. The parameters of the GC method employed
are discussed in Table 5-5. The GC-ECD method was altered from that employed during
turbulence testing to allow for PMCH to elute from the GC system. PMCH is a heavier tracer
gas with a molecular weight of 350 g/mol [45], as opposed to 146 g/mol for SF6 [21]. As such, PMCH requires more time to elute from the system than does SF6 [7]. Pure PMCH contains
contaminants, including other perfluorocarbon tracers, which must be cleaned from the system,
requiring a higher column temperature.
Table 5-5: GC Operating Parameters for Simultaneous Analysis of SF6 and PMCH with SPME
Split Injector Temperature 270⁰C
Sampling Time 0.1 min
Split Ratio 30:1
Total Flow 29.2 mL/min
Column Flow 0.91 mL/min
Linear Velocity 24 cm/s
Septum Purge Flow 1 mL/min
Detector Temperature 200⁰C
Initial Column Temperature 35⁰C for 0.05 min
Temperature Ramp Rate I 100⁰C/min
Hold Column Temperature 80⁰C for 4 min
Temperature Ramp Rate II 100⁰C/min
Final Column Temperature 180⁰C for 10.5 min
Total Program Time 16 min
Resultant chromatograms indicate that the 100 µm PDMS SPME fiber is indeed able to
simultaneously sample multiple tracers under static conditions. The GC Solutions software
integrated the peak areas of both SF6 and PMCH such that the resultant %RSD values were below 10%, indicative of high precision. The overlay of the resultant chromatograms when sampling multiple tracer gases simultaneously is displayed in Figure 5-3. Tables 5-6 and 5-7 display the numerical results for SF6 and PMCH, respectively.
[Chromatogram overlay: detector response, uV (×1,000,000), versus time (min); large SF6 peak followed by smaller PMCH peak]
Figure 5-3: Overlay of Resultant Chromatograms when Sampling Multiple Tracer Gases
Table 5-6: Numerical Results for SF6 when Sampling Multiple Tracer Gases Simultaneously

Sample Name | Expected Concentration (ppm) | Retention Time (min) | Peak Area | Average Peak Area | Std Dev | %RSD
090912_18 | 55 | 2.935 | 5.86E+06 | 5.93E+06 | 1.28E+05 | 2.17
090912_19 | 55 | 2.931 | 5.85E+06 | | |
090912_20 | 55 | 2.931 | 6.08E+06 | | |

Table 5-7: Numerical Results for PMCH when Sampling Multiple Tracer Gases Simultaneously

Sample Name | Expected Concentration (ppm) | Retention Time (min) | Peak Area | Average Peak Area | Std Dev | %RSD
090912_18 | 55 | 11.651 | 1.60E+06 | 1.60E+06 | 1.76E+04 | 1.097
090912_19 | 55 | 11.643 | 1.58E+06 | | |
090912_20 | 55 | 11.628 | 1.62E+06 | | |
5.6 Discussion
Concerning the ability of the SPME fiber to be applied to sample extraction in turbulent
conditions, the chromatographic results displayed in Table 5-1 indicate high confidence in the
precision of the results obtained based on consistently low %RSD values. Figure 5-2 illustrates
the differences in equilibrium times between the three scenarios; as expected, the most turbulent
condition (i.e., Scenario 3) reaches equilibrium the fastest. This result is even more obvious
from Figure 5-4, which directly shows the relationship between Reynolds numbers and the
resultant equilibrium time. In addition to the three scenarios discussed, Figure 5-4 contains a point representing the equilibrium time under static conditions, where the Reynolds number is zero, as obtained in previous work [96]. A linear trend line fits the data well with an R2 value of 0.9 and clearly reinforces the theory that kinetic interactions of the matrix with the SPME fiber reduce the necessary time for concentration equilibrium to occur between the extracting phase
of the SPME fiber and the sample matrix.
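The linear trend described above can be checked with a least-squares fit; the Reynolds numbers are those of the static case and Scenarios 1 through 3, while the equilibrium times are placeholders, since the exact values appear in tables not reproduced here.

    import numpy as np

    re_num = np.array([0.0, 4596.0, 6434.0, 6894.0])  # static case plus Scenarios 1-3
    eq_time = np.array([45.0, 25.0, 18.0, 12.0])      # placeholder equilibrium times, s

    slope, intercept = np.polyfit(re_num, eq_time, deg=1)
    pred = slope * re_num + intercept
    r2 = 1 - np.sum((eq_time - pred)**2) / np.sum((eq_time - eq_time.mean())**2)
    print(f"slope = {slope:.5f} s per unit Re, R2 = {r2:.2f}")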
[Scatter plot: equilibrium time (s) versus Reynolds number (0-8,000), with linear trend line, R² = 0.9005]
Figure 5-4: Reynolds Number versus Equilibrium Time
The displayed results in Figure 5-3 indicate a high confidence in the ability of a 100 µm
PDMS SPME fiber to simultaneously sample multiple tracer gases under static conditions. This
confidence is derived from the presence of reproducible results as indicated by the low %RSD
values. The test also indicates, however, that PMCH has a weaker response to the selected
SPME fiber than does SF6. This conclusion can be drawn because the SPME fiber was exposed to the same concentration of SF6 as PMCH, but the resultant peaks are significantly larger for SF6 than for PMCH. The peak shape, however, is sharp and ideal for both tracer gases with
minimal tailing present. Additionally, the peaks of the individual tracer gases are well resolved
from one another as a result of the altered GC-ECD method. All factors considered, the selected
SPME fiber is capable of simultaneously sampling multiple tracer gases under static conditions.
Further work will include the simultaneous sampling of multiple tracer gases under turbulent
conditions as well as sampling of tracer gases in an underground mine.
5.7 Conclusions/Future Work
In this study, a range of volumetric concentrations was observed to assess the impact of kinetic interactions on equilibrium time via scenarios with varying Reynolds numbers. The results of
this paper conclusively prove that increasing turbulence within a sampling matrix reduces the
necessary time to achieve equilibrium between the sample matrix and the SPME fiber. These
results are encouraging for applications of SPME fibers as a rapid sampling mechanism in a
variety of industries because sampling procedures with SPME are not only easily employed and
robust for the most unskilled user, but they are also very fast. Future work necessary to further
understand the impact of kinetic interactions on sampling with SPME is to observe changes in the capacity of the fiber for varying Reynolds numbers.
The second portion of this research studied the impact of sampling multiple trace analytes
with SPME. The trace analytes observed were SF6 and PMCH in the vapor state. When prepared in a static standard with equal parts SF6 and PMCH, the SPME fiber was able to collect both analytes. Having a standard of the same volumetric concentration of SF6 and PMCH in nitrogen, the response of the SPME fiber to both trace analytes was expected to be equal. However, the SPME fiber showed a greater affinity for absorbing SF6 as compared to PMCH. While the response of SPME to PMCH was only a portion of the response of SPME to SF6, the peaks produced for both analytes were ideal for chromatography, indicating a sufficient ability of SPME to simultaneously sample multiple tracers. Future work should be done to observe the ability of SPME to simultaneously sample multiple tracers in turbulent conditions, as was done with SF6 in the first portion of this paper.
To gain a more comprehensive understanding of the ability of SPME to be applied in on-
site situations, the impact of kinetic interactions on fiber capacity must be observed.
6 Conclusions
Tracer gas surveys are a powerful means of assessing air quantity in underground mine
ventilation circuits. In many instances, tracer gas surveys are the only means of acquiring
information about a ventilation circuit, but their limitations due to time and training requirements
are daunting. Enhanced tracer gas techniques will significantly improve the flexibility of
ventilation surveys. The most powerful means to improve tracer gas techniques applied to mine
ventilation surveys is to alter existing protocols into a method that can be readily applied where
tracer surveys already take place. The research objectives defined for this thesis were designed
to facilitate improvements to existing tracer gas ventilation survey techniques.
One effective method of enhancing existing tracer gas survey protocols is to simply add a
second tracer gas that can be detected on a gas chromatograph – electron capture detector (GC-
ECD) using the same method as with the existing industry standard tracer, sulfur hexafluoride
(SF6). Radioactive gas and Freon gas tracers have been successfully used in the past, but a novel
tracer that may be used in active workings with a simple analysis method is desirable. Novel
tracer gases that have been successfully implemented in the past called for complex analysis
methods requiring special equipment, or were designed for inactive workings. Compounds from
two groups were considered to develop a methodology for a novel tracer gas to be implemented
in tandem with SF6 for the purposes of this study: Freon gases and perfluorocarbon compounds.
Compounds from these groups were selected due to their known sensitive responses to GC-ECD
systems and their non-toxic nature.
Experimentation with Freon gases led to unsatisfactory chromatographic results due to differences in sensitivity between the tested gases and SF6, and poor separation between peaks. On the contrary, experimentation with perfluoromethylcyclohexane (PMCH) and SF6 allowed
for peak separation and the development of Gaussian shaped peaks. PMCH is a favorable
selection for a novel tracer to work in tandem with SF6 due to its chemical stability, similar physical properties and detection limits to SF6, and its ability to be applied and integrated into an existing system. Additionally, PMCH has been successfully utilized in other large-scale tracer gas studies. A method using a Shimadzu GC-2014 with ECD and HP-AL/S column was developed to simultaneously detect SF6 and PMCH. Future work with PMCH will require the development of a release method for the tracer compound due to its existence as a liquid at normal temperatures and pressures. The most promising means of releasing PMCH as a tracer gas is via a source containing liquid PMCH capped with a fluoroelastomer plug which will steadily release the tracer gas over time. The selection of PMCH as a novel tracer gas to be used in tandem with SF6 satisfies research objective one of this thesis, which was designated to
improve existing methodologies for tracer gas surveys by making them more flexible and to
reduce the time required to execute a comprehensive survey.
Introduction of a novel tracer gas will make great strides in improving the versatility of
underground tracer gas ventilation surveys, but further improvement to the tracer gas technique
can be made in simplifying individual steps. One such step which would benefit from
improvement is in sampling. SPME has inherent characteristics making it a desirable option for
sampling in the mining industry. Primarily, SPME is a precise method in the hands of a variety
of users for rapid sampling of analytes present in low concentrations, and is highly portable in a
rugged environment. The greatest impact that will result from implementing SPME as a
sampling device is the ease of use of the mechanism and the requirement of minimal training to
obtain precise results.
When applying SPME to sampling during underground mine ventilation surveys,
sensitivity will be increased by using a SPME fiber with a PDMS bonding phase. The thickest
coating of 100 µm of extracting phase is the optimum thickness to allow for sufficient recovery
of the non-polar analytes without being overloaded with oxygen from air. Recovery of sample
during analysis with a GC-ECD system can be optimized with the use of a splitless method with
a short sampling time, an initially cool column with a rapid ramp of temperature, and a moderate
linear velocity of the carrier gas. The identification of SPME as a robust sampling mechanism
for tracer gas surveys satisfies research objective two for this thesis, which was designated to investigate sampling methods that simplify tracer gas surveys on-site and that can be easily integrated into existing protocols.
When using SPME for quantification of analytes, two items are necessary to properly
analyze samples: an equilibrium curve and a calibration curve. The equilibrium curve indicates
the time period where sampling can be optimized via the time of exposure of the fiber to the sample to provide the most reproducible results. Once the fiber has been exposed for the determined
equilibrium time, the capacity of the fiber will be filled so that overexposure will not impact
analytical results. The calibration curve is necessary to allow for quantification of analytes when
using a GC system for analysis. A calibration curve may be developed with high confidence
when using SPME due to the precision SPME allows for, as indicated by the high R2 value
achieved during experimentation.
To further understand sampling with SPME, a range of volumetric concentrations was created in three scenarios to observe the impact of kinetic interactions on equilibrium time via scenarios with varying Reynolds numbers. The results of this study conclusively proved that
increasing turbulence within a sample matrix reduces the necessary time to achieve equilibrium
between the sample matrix and the extracting phase of the SPME fiber. These results are
encouraging for applications of SPME fibers as a sampling mechanism in a variety of industries
because sampling procedures with SPME are not only easily employed and robust for the most
unskilled user, but they are also very fast.
The impact of sampling multiple trace analytes with SPME was also observed. The trace
analytes observed are SF6 and PMCH in the vapor state. When prepared in a static standard with equal parts SF6 and PMCH, the SPME fiber was able to collect both analytes. Having a standard of the same volumetric concentration of SF6 and PMCH in nitrogen, the response of the SPME fiber to both trace analytes was expected to be equal. However, the SPME fiber showed a greater affinity for SF6 as compared to PMCH. While the response of SPME to PMCH was only a portion of the response of SPME to SF6, the peaks produced for both analytes were ideal for chromatography, indicating a sufficient ability of SPME to simultaneously sample multiple
tracers.
Future work to further improve the tracer gas technique of conducting mine ventilation
surveys includes developing a reproducible means of releasing PMCH at a sufficient rate to
achieve concentrations similar to SF6 during surveys. Additionally, the GC-ECD method could be altered to accommodate other compounds from the perfluorocarbon group in addition to SF6
and PMCH in one simple method. Additional tracer compounds can only increase the flexibility
of ventilation surveys. Future work with SPME calls for determining the impact on fiber
capacity due to kinetic interactions between the extracting phase of the fiber and the sample
matrix. Furthermore, the impact of sampling multiple compounds under turbulent conditions
with SPME should be observed. Future work should contribute to the simplification of tracer gas
surveys and practicality of executing ventilation surveys in this manner in underground mines.
1. Introduction
1.1 Energy Issues
The population of the world is expected to increase between 50% and 100% in the
next 50 years. Increased food, energy, and materials production is vital just to maintain
the current quality of life and standard of living. Mineral resource development is
necessary to meet all of these needs. Furthermore, energy cannot be supplied without
mineral production, where such minerals can be used as fuel, for plant construction, in
electric wiring, and so forth. Although reduced consumption, re-use, or recycling, and
energy savings may reduce the growth rate in minerals use, these will not be sufficient to
maintain consumption at current levels [Karmis et al., 2000]. Figure 1.1 shows the fuel
distribution for electricity generation in the United States in 1998. With 56.27% of the
electricity generated directly from coal, coal is the majority fuel by over 30%. Electricity
needs will be increasing over the coming years. Figure 1.2 shows the amount of energy
each fuel source will provide projected to 2020. Renewable resources are expected to
generally remain at a constant energy production. Nuclear power is expected to
contribute less as existing plants are retired and because no new plants are being built.
Petroleum prices are expected to rise, which will lead to a decline in its use for electricity.
Natural gas has been considered the fuel of choice for the utility industry because of the
speed of site construction, throttleability of gas turbines, and environmental considerations.
However, the supply of natural gas to meet the demand shown in Figure 1.2 may not exist
or be attainable [Miller, 2001]. The Energy Information Administration forecasts that coal will
be the foremost fuel for future energy production, as shown in Figure 1.3.
[Pie chart: Coal 47.15%, Natural Gas 32.56%, Nuclear Power 11.78%, Renewable Sources 8.12%, Petroleum 0.39%]
Figure 1.3--Projected 2020 U.S. Energy Production by Fuel [EIA, 2001]
1.2 Coal and Energy
One of the main reasons that coal has remained, and is expected to remain, a
dominant fuel for electricity production is because of the low cost per BTU. Figure 1.4
shows that the price of coal has been on a steady decline. It has remained possible to
economically mine coal because of advances in technology that have increased worker
productivity [Holman, 1999]. This has allowed for more production with fewer workers
at a lower selling price. Figure 1.5 is a graph of productivity per mine worker in Virginia
since 1990. This graph shows that there has been an overall steady increase in
productivity, and at the same time a single Virginia worker in 1998 produced 33% more
tons per hour than a worker in 1990. Furthermore, this trend is not limited to Virginia,
but it is indicative of the coal mining industry. Technological advances have allowed
productivity to increase despite the fact that mining conditions have degraded over this
same period [Milici, 2001]. The increases in coal mining technology must be continued
in order to keep coal as a dominant fuel for electricity generation.
[Line chart: average delivered coal price, 1990-1998, in real 1992 dollars per short ton, for electric utilities and other industrial plants]
Figure 1.4--Average Delivered Coal Price [VCCER, 2001]
[Line chart: Virginia coal worker productivity, 1990-1999, in tons per worker hour, for average, surface mine, and underground operations productivity]
Figure 1.5--Coal Worker Productivity in Virginia [VCCER, 2001]
Underground coal is mined using two different types of mining techniques,
longwall mining and room-and-pillar mining. Room-and-pillar mining is the process of
creating voids in the coal while leaving pillars to support the roof. This will be discussed in detail in Section 1.2.1. Longwall mining is the process of developing very large
blocks of coal (e.g., 600 feet by 2000 feet) and using a longwall to extract the entire
block. Longwalls have a very high extraction ratio, leaving only small blocks of coal in
place. In order to develop the longwall block, room-and-pillar mining is used to create
entries around it. Although longwall mining is not the emphasis of this research and will
not be discussed further, longwall development sections are treated as a room-and-pillar
system and will be further analyzed.
1.2.1 Elements of the Room-and-Pillar Mining Systems
Room-and-pillar coal mine systems are the traditional manner of underground
coal mining. A typical layout is shown in Figure 1.6 with call-outs for definitions of
major components. Pillars are sized based on the amount of overburden over the
extraction area and on the material properties. Entries and cross cuts are created by
mining a cut into a room and are normally around 20 feet wide, while the mining
equipment used determines cut depth. Fresh air is blown to the working face from the
outside. The air is directed and controlled by stoppings, box checks, and check curtains.
Stoppings are permanent walls that create an airlock between entries (which are
numbered, as seen in Figure 1.6). Box checks are used to keep the air in the belt entry
separate. Check curtains are temporary airlocks that equipment can tram through from
the working room to the feeder. A working face is referred to as a section.
What is referred to as “traditional” room-and-pillar coal mining is a practice of
mining coal without the benefit of modern machinery. The practice involves cutting the
kerf, drilling blast holes, loading blast holes, blasting the working face and loading the
coal, then roof bolting the cut. Cutting the kerf is done using a kerf cutter, a machine
similar to a horizontal chainsaw. The kerf is a slot cut at the bottom of the working face
that provides a free face toward which the coal can break. Using a jackleg drill, blast
holes are drilled in the face and then loaded with explosives.
The explosives are detonated, which breaks the face into lumps. The lumps are loaded by
hand into a hauler and are taken to a dump point. Many of the existing simulators still
include this process in their simulation capabilities.
The continuous miner is the standard equipment for modern coal mining in room-
and-pillar systems. The continuous miner has a large drum that has cutting bits to tear
through the coal and rock (Figure 1.7). The mined coal and rock are collected by
gathering arms and placed on a panzer conveyor that runs through the center of the
miner’s body. The coal and rock move to the tail boom of the miner, which is articulated
to aid in loading the haulage system. Some models of continuous miners have roof
bolting capabilities, so the miner can bolt the roof while waiting for the haulage system.
The continuous miner has taken over the functions of the kerf cutter and of blasting.
Roof bolting is critical to the safety of an underground mine. A roof bolter is a
piece of equipment with one or more drilling heads and a roof support system. An
example of a typical roof bolter is shown in Figure 1.8. The roof bolter enters a place
that has been mined and places roof bolts into the roof for support. A worker, who is
shielded by the roof support system on the bolter, normally operates the drill, although
there are some roof bolter models that are operated remotely.
Figure 1.7--Example of a Continuous Miner
Figure 1.8--Example of a Roof Bolter
There are several types of haulage systems, as well. Shuttle cars are self-
propelled conveyance systems that can carry and unload a load of coal and rock (Figure
1.9). Generally, shuttle cars can be loaded or unloaded from either end. This allows the
cars to avoid turning around in the close quarters underground. Shuttle cars can be
powered by the section’s main power supply by using trailing cables. For safety reasons,
these cables need to be manipulated as the miner moves to a new cut. The cables are
hung from roof bolts where possible to prevent them from contacting workers and
equipment. Shuttle cars can have a diesel-powered motor that has special scrubbers for
particulate matter and exhaust. There are also battery-powered shuttle cars, which can
generally only load and unload from a single end because of the battery size. These cars
are usually articulated in the center so that they can turn around more easily. Recent
developments in mining equipment include continuous haulage
systems. These are systems of belts that can be moved in the section between the current
cut and the feeder.
Figure 1.9--Example of a Shuttle Car
The shuttle cars unload into a feeder breaker (commonly called simply a feeder).
Feeders can commonly be loaded from several shuttle cars, meaning they have more than
one hopper. The feeder has a breaking head or mechanism that prevents oversize rocks
or coal from being put on the main belt. The main belt removes the coal and rock from
the section. There is also a form of haulage used for workers and equipment, normally a
rail system with carriages that travel through the mine.
1.3 Need for Development of a Room-and-Pillar Continuous
Mining Simulator
Many types of tools have been developed to aid mine managers and engineers in
designing and maintaining mine plans. The vast majority of these tools were developed for
a single mining scenario and are difficult to adapt to other systems. The difficulty in
adapting the tools varies by the type of tool that was used. These tools have been
designed using either a simulation language or a traditional programming language. The
tools developed using a simulation language are typically difficult to adapt to new
equipment travel paths or mine layouts. Tools developed using traditional programming
languages (e.g., Fortran, Basic, or C) are typically difficult to use in different equipment
configurations. In addition, these tools are usually based on traditional operations
research simulation and are limited by their implementation. With the new computer
technologies available, these implementation limitations can be overcome. One such tool
is the continuous mining simulator, which, up to now, has not been implemented using
the latest computer technologies. The purpose of this research effort was to fill that gap
and develop a new simulator with the following features:
• Utilize standardized modern computer technology
o Object-oriented programming
o Client/server application architecture
o Web-based application
• Reflect current and future mining practices
The tool described in this thesis, WebConSim, has been developed to conform to
these objectives. WebConSim was developed with three major subcomponents: the Web-
based front end, the simulation engine, and the database (Figure 1.10). The Web-based
front end is dynamically created Web content that is served by Microsoft Internet
Information Server, version 4 or later. This is a client/server system: the client
computers request information from the Web server, which processes the requests. The
dynamic content is written in Hypertext Markup Language (HTML), Visual Basic for
Applications (VBA), and JavaScript (JS). The simulation
engine has two subcomponents: the report writer and the simulator. Both are written
entirely in Visual Basic. The simulator is an ActiveX Dynamic Link Library (DLL); the
report writer is an executable application. The technologies and connection types are
outlined in Figure 1.11. The simulator was initially implemented using a Microsoft
Access database, but it can easily be adapted to use any relational database. The database
stores information that is used by both the simulation engine and the Web-based front
end.
2. Modeling a Physical Process
Most modern industrial and nonindustrial operations are composed of sequences
of physical processes, such as making a product, scheduling trains, electronic
transactions, and so forth. In an effort to optimize these physical processes, “what-if”
analyses or goal-seeking procedures need to be applied to each physical process. This
approach has considerable advantages because experimenting on a model is cheaper, less
dangerous, and can often accomplish operations that would be impossible or impractical
in the real world [Arsham, 2000]. However, in order to ensure that results from the
virtual models represent the behavior of the physical systems, the virtual process should
replicate all behavioral aspects of the physical process.
For example, training pilots in a flight simulator is significantly cheaper and safer
than flying a real plane, since the simulator will not endanger the pilot and others. Even
when considering the expense of developing such simulators, building the virtual models
is still cheaper than the crashes that would likely take place if training in real commercial
or military jets. A flight simulator is a model that normally has two components: the
physical component and the digital component. The physical component is the mock-up
of a real cockpit, and the digital component is the computer that controls the simulation.
Simulations can be done with a completely physical model, as in an architect’s
scaled model. Electronic component manufacturers use digital models to test and debug
circuits prior to manufacture [Arsham, 2000]. Most models in the mining industry are
digital computer models. Such models can function either as a simulation system or an
expert system.
There are many different simulation types, such as discrete event, continuous,
hybrid, and so forth. A simulation may incorporate Monte Carlo techniques, Markov
chains, and/or apply to queueing systems.
• Discrete event simulations process discrete events that occur at random times
through a central processing unit. In discrete event systems many events can
occur simultaneously [Schriber, 1997]. An example of a discrete event
process is that of cars approaching an intersection: the cars’ arrival times are
random, and each arrival is a discrete event. (A minimal sketch of such a
simulation appears after this list.)
• Continuous simulations comprise variables that act as a function of time. An
example of an application of a continuous simulation is a continuously
changing system like a car’s suspension [Gordon, 1969].
• There are also combinations of discrete and continuous simulations called
combined simulations. Hybrid simulations embed analytical subsystems within
discrete models.
• Monte Carlo simulations use stochastic processes to describe nonprobabilistic
problems. In other words, a Monte Carlo system uses a random event
generator, such as an electronic die, to create the random variables or events
needed for the simulation.
• Queuing systems refer to specific groups of physical processes where serving
and requesting mechanisms are in place. If the serving units cannot serve the
requesting items, then the requests are placed in a queue until the servicing
mechanisms are available.
• A Markov process looks at a sequence of events and calculates the probability
of one event following another. A string of such events is then called a
Markov chain.
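These concepts can be made concrete with a short program. The following is a
minimal sketch of a discrete event simulation, written here in Python rather than in any
of the simulation languages discussed in Chapter 3; it implements the car-arrival example
above, and all names and parameter values are chosen purely for illustration:

    import random

    random.seed(1)  # fixed seed so the run is repeatable

    def simulate_intersection(num_cars=5, mean_gap=4.0):
        """A minimal discrete event simulation: cars arrive at random times."""
        clock = 0.0
        events = []                                       # (time, description) pairs
        for car in range(num_cars):
            clock += random.expovariate(1.0 / mean_gap)   # random interarrival time
            events.append((clock, "car %d arrives" % car))
        for time, description in sorted(events):          # process events in time order
            print("t = %6.2f  %s" % (time, description))

    simulate_intersection()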
In almost every application that requires a model of a physical process, there is a
statistical component that must be considered. For any single process, there is a
statistical uncertainty associated with it. The type of uncertainty that needs to be
described in the model depends on the desired output from the model. For instance, a
robot arm loading boxes onto a conveyor belt will take an average amount of time to
place a new box onto the belt. If the model’s purpose is to simulate the operation and
loading of the belt, then the amount of time for the arm to load a box should be studied to
find the time distribution. However, if the model is of the robot arm itself, then the rates
of each movement should be studied. The results from these studies can be shown to be
consistent with existing probability functions [Tocher, 1963]. A process that has no
uncertainty can be handled deterministically; in other words, there is a 100% probability
that the next value will be the same as the previous. A probability function assigns a
probability to each possible value of a random variable.
[Figure 2.2 is a plot of probability versus value]
Figure 2.2--Normal Continuous Distribution
For simulations to calculate a value for a random variable, the simulator must
create a random number. Computers generate random numbers in a uniform fashion.
This means that the probability of a random number being any number in the given range
is the same. This distribution is discussed below. Because computers generate random
numbers in this manner, the uniform random number must be translated into the
distribution that is being used to model the process. This is done using the cumulative
distribution function, which is the sum of the probabilities up to that value. Figure 2.3
shows the cumulative
distribution function for the probability function in Figure 2.1. Cumulative distribution
functions always range from zero to one. The random number is taken as the cumulative
probability, and the value is read from the range. Figure 2.4 shows the cumulative
distribution for the example shown in Figure 2.2. For the purposes of simulating physical
processes, the uniform, normal, exponential, and Poisson distributions are commonly
used [Hamburg, 1987].
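As a concrete sketch of this translation (in Python, using a made-up discrete
distribution in place of the one in Figure 2.1), a uniform random number is treated as a
cumulative probability and the corresponding value is read back from the cumulative
table:

    import bisect
    import random

    # Hypothetical discrete distribution: each value and its probability.
    values = [1, 2, 3, 4]
    probs = [0.1, 0.4, 0.3, 0.2]

    # Build the cumulative distribution function (it runs from zero to one).
    cdf, total = [], 0.0
    for p in probs:
        total += p
        cdf.append(total)

    def sample():
        """Translate a uniform random number through the cumulative distribution."""
        u = random.random()                       # uniform on [0, 1)
        i = bisect.bisect_left(cdf, u)            # first cumulative value >= u
        return values[min(i, len(values) - 1)]    # guard against rounding at 1.0

    print([sample() for _ in range(10)])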
2.1.1 Uniform Distribution
The uniform distribution can also be referred to as the rectangle distribution. The
distribution is used to model purely random events, in which the probability of any
value is exactly the same as that of any other value. It is the simplest of all the
distributions used in simulations [Hamburg, 1987]. Equation (1) gives the probability
density function of the uniform distribution (see Figure 2.5).
f(x) = 1 / (Upper Bound − Lower Bound)   (1)
[Figure 2.5 is a plot of probability versus value]
Figure 2.5--Probability Density Function of the Uniform Distribution
2.1.2 Normal Distribution
The normal distribution is central to statistical theory and practice. Graphically,
the normal curve appears as a bell and can be defined based on two parameters: the mean
value and the standard deviation, as shown in equation (2), where σ is the standard
deviation, µ is the mean, and x is the value of interest. Figure 2.2 shows the complete normal
distribution function. The normal distribution is applicable to most types of random
variables that describe the majority of physical processes. A characteristic of this
distribution is that about two-thirds (68%) of the area under the normal curve lies within
one standard deviation of the mean, and 99.7% of the area lies within three standard
deviations. Hence, the normal distribution describes well processes that do not
vary widely. Many statistical calculations are based on the normal distribution
[Hamburg, 1987]. In mining applications, the normal distribution is used for travel times.
It is particularly good for the travel times of vehicles whose top speed is limited by a
governor [Sturgul, 2000].
f(x) = (1 / (σ√(2π))) e^(−(1/2)((x−µ)/σ)²)   (2)
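Equation (2) and the area claim above can be checked numerically. The following
Python sketch (with an arbitrary mean and standard deviation) evaluates the density and
estimates, by sampling, the fraction of values falling within one standard deviation of the
mean:

    import math
    import random

    mu, sigma = 10.0, 2.0   # arbitrary parameters for illustration

    def normal_pdf(x):
        """The probability density function from equation (2)."""
        return (1.0 / (sigma * math.sqrt(2.0 * math.pi))) * \
               math.exp(-0.5 * ((x - mu) / sigma) ** 2)

    samples = [random.gauss(mu, sigma) for _ in range(100000)]
    within = sum(1 for x in samples if abs(x - mu) <= sigma)
    print("fraction within one standard deviation: %.3f" % (within / len(samples)))
    print("density at the mean: %.4f" % normal_pdf(mu))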
2.1.3 Exponential Distribution
The exponential distribution is a nonsymmetric distribution that can be defined
based on only one parameter. Equation (3) describes the exponential distribution, where
λ is the rate parameter (the reciprocal of the mean) and x is the value of interest. The
distribution can be used effectively to model interarrival times, where λ is the average
arrival rate. This should not be confused with the Poisson distribution, which describes
the number of arrivals; the exponential distribution describes the time between arrivals.
It is also effective for processes that normally take very little time but can occasionally
take a very long time. Figure 2.6 shows a graph of the exponential distribution with an
average of 0.5. The probability is very high around the 0.5 value, but it is not improbable
for the value to be close to 4. The cumulative distribution is shown in Figure 2.7
[Hamburg, 1987].
f(x) = λe^(−λx)   (3)
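Because the cumulative form of equation (3), F(x) = 1 − e^(−λx), can be inverted
in closed form, exponential interarrival times are straightforward to generate from
uniform random numbers. A Python sketch, with an assumed rate:

    import math
    import random

    lam = 2.0   # assumed rate parameter: two arrivals per unit time on average

    def interarrival():
        """Invert F(x) = 1 - exp(-lam * x), giving x = -ln(1 - u) / lam."""
        u = random.random()
        return -math.log(1.0 - u) / lam

    times = [interarrival() for _ in range(100000)]
    print("observed mean %.3f, expected 1/lam = %.3f"
          % (sum(times) / len(times), 1.0 / lam))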
[Figure 2.9 is a plot of cumulative probability versus value]
Figure 2.9--Cumulative Poisson Distribution Function
2.2 Expert Systems
Expert systems are models that use heuristic rules to find a solution
to a problem. Expert system models are supposed to simulate the logic that a human
expert would use to solve a given problem. Although these systems do not necessarily
provide an optimal solution, they have the ability to communicate with sensors,
databases, and other sources to draw information. Furthermore, as the heuristic rules and
knowledge processor are improved with experience and improved technology, the
solution becomes closer to optimal. Unlike human experts, expert systems will return
results that are consistent under any given circumstances. Expert systems can act as a
human expert replacement by automating routine tasks that would otherwise require an
expert. Assisting experts in difficult tasks or reminding the expert of important
information is another use of expert systems. Expert systems can also be used for
forecasting or inferring information that may be too complex or tedious for a human
expert to calculate.
Heuristics are rules that an expert system can apply to reach a solution based on
human experience and expertise. These rules should always provide a satisfactory
solution, even though it is not necessarily the optimal solution. Expert systems capitalize
on human expertise and are a way of institutionalizing the knowledge of experts. This
expertise in an expert system is available on a wider basis, without interruption, with
complete consistency, and for autonomous applications. These systems can solve
problems that do not have tractable solutions because they are based on rules and not on
algorithms. The disadvantages of expert systems are the costs of implementation and
maintenance. The maintenance of an expert system can create undesired results if new
rules conflict with the existing rules. The rules are organized and processed by an
inferencing engine. The style of inferencing engine depends on the type of expert
system.
There are two tasks involved in creating an expert system: analysis and synthesis.
Analysis is the classification of complex and extensive information and is limited in the
number of possible outcomes. Synthesis is the building of a complex structure that has a
large variety of possible outcomes (e.g., scheduling, design, planning). Suitable
problems for expert systems must have certain characteristics. The problems must have a
relevant body of knowledge, the expertise must be cognitive, and the expertise must exist.
These problems must be of a manageable size with a well-defined focus. The experts that
supply the knowledge must be able to articulate it and teach their skill. Finally, the
knowledge must be fairly stable over the design life of the project [Durkin, 1994].
While in flight, airplanes depend on expert systems to control the engines and
wing configurations to stay in flight safely. These controls are based on the rules of
physics and an internal balancing act called load balancing. Every flight has a load plan
that instructs the ground crew on loading the freight in respect to the plane’s center of
gravity. This plan is critical to the flight’s safety and efficiency. Major airlines involve
more than 100 ground personnel and complex computer applications to load a single
flight. With over 2300 flights per major airline per day, this is a serious challenge.
American Airlines created an expert system called American’s Assistant Load Planner
(AALP), which takes into account historical information about each flight and predicts
values for future flights, including number of passengers and baggage and mail weights.
The takeoff runway, flap settings, and takeoff temperature are all used by AALP to create
a suggested load plan. AALP has helped American Airlines to ensure safe and efficient
flights while improving the productivity of load planning staff. This has reduced costs
by cutting work hours and increasing fuel efficiency [Daily, 1992].
Several different methods of creating expert systems are described below. Each
method uses an approach specific to the problem it was designed to solve. Some
methods, like artificial intelligence, do not have a single methodology. However,
forward and backward-chaining systems are very structured. Systems such as neural
networks, genetic algorithms, and frame-based systems try to recreate the real world
digitally.
2.2.1 Artificial Intelligence
Creating an intelligent machine is a goal that has been researched since the late
1700s. At that time a group toured Europe and America with a “Chess Automaton”
device. This device was advertised to be able to play chess but was, in fact, a ruse. It
was not until May 1992 that a computer finally proved it was better at chess than humans
[Chandrasekaran, 1992]. Artificial intelligence (AI) studies human activities and
attempts to create reasoning machines that can perform a task with an outcome similar to
that of a human. Part of the difficulty of developing AI applications is the elusive
definition of intelligence [Boden, 1995]. Competence, logic, and knowledge are normal
measures of intelligence; all are extremely difficult to recreate. Much of the development
work in AI has been done in the area of game playing. In 1971 Sir James Lighthill of
Cambridge University reviewed the accomplishments of AI since the 1950s. The
Lighthill report stated that the hype of AI far outreached the accomplishments. This
report severely damaged AI research. Many researchers did continue to work on AI,
specifically trying to keep the hardware requirements down [Boden, 1995]. The main
emphasis was on search techniques for finding the best solution among the vast number
of options that can be generated by brute force. DENDRAL, a program developed for
NASA at Stanford University, was one of the pioneering programs in AI. DENDRAL
showed researchers that knowledge, not reasoning, is the true driver of intelligence. This
revelation led to the development of knowledge-based systems, or expert systems
[Durkin, 1994].
The most famous AI application is IBM’s Deep Blue, a 1.4-ton supercomputer.
When Deep Blue defeated world chess champion Garry Kasparov in 1997, it was not the
first time a computer showed superiority over humans in certain endeavors [Halfhill,
1997]. The 1997 match was a rematch from the 1996 match, which Kasparov won.
Deep Blue was developed by the world’s greatest chess players, programmers, and
hardware engineers. Individually, these players would not be able to defeat Kasparov,
but, pooling their talents into Deep Blue, they had a chance. Deep Blue’s major
advantage over Kasparov was its ability to consider around 36 billion chess moves in
three minutes. Kasparov was able to consider only about three moves per second. It
would have taken Kasparov about 380 years to consider the same number of moves Deep
Blue could in three minutes. This raised the question of whether Deep Blue was really an
AI or just a very fast move calculator. Deep Blue’s AI lies in how it selects the move
from the billions that have been calculated [Halfhill, 1997].
Still in the area of game-playing, Jonathan Schaeffer developed a program named
Chinook to play checkers. He worked with checkers champion Norman Treloar for 2 ½
years to develop the game strategy. Chinook searches 25 moves deep in a checkers
game, meaning that before the game begins an end game is calculated. Marion Tinsley is
believed to be the greatest checkers player of all time. Prior to playing Chinook, Tinsley
had lost only four games in his 40-year career. In 1992 Chinook beat Tinsley twice in a
40-game match before the machine froze. During the rematch in 1994, Chinook and
Tinsley tied six games in a row. Tinsley forfeited the match and died a month later.
Since then, Chinook remains undefeated. Goren-in-a-Box is a very successful expert
bridge-playing game. Maven, by Brian Sheppard, is a world-champion-caliber Scrabble
program; the Hasbro Scrabble computer game’s AI is based on it. Maven, in fact,
changed the way that Scrabble is played [Hedberg, 1997].
In a constantly changing mining environment, there are many variables. Spotting
a problem or potential problem in this environment can be very difficult because of the
amount of data that can be collected. An AI-based system can monitor those data and
create alerts that are of interest to mine managers. The Generic Mineral Technology
Center of the U.S. Bureau of Mines has developed a system that collects information in
near real time from mining equipment. The system monitors all the information from
the equipment and matches it against known problem patterns.
When it recognizes a problem, the program alerts the mine management. The system is
also in a constant learning cycle to reduce unnecessary alerts and recognize new types of
alerts. The implementation of the system has shown promising results [Lever et al.,
1991].
2.2.2 Backward Chaining
In many systems there are a given number of conclusions or well-established
goals that can possibly be drawn. A goal-directed expert system operates using backward
chaining to try to prove a single goal by processing many rules. It is called backward
chaining because it starts with a solution and attempts to prove that solution. Rules in
backward chaining, called goal rules, have two parts: the premise and the goal. A goal
rule will only fire (i.e., be evaluated) if its premises are all met. A premise of a rule may
be the goal of another rule in the rule set. This requires the inferencing engine to
organize the rules into a hierarchy. The inferencing engine searches through the system’s
rules recursively. The engine will attempt to find evidence to support as many rule
premises as possible, collecting lists of goals and subgoals. Eventually, the backward-
chaining engine will need to get more information, which causes the process to begin
again. All goals and subgoals are searched, and the engine will return a true or false to
the original goal.
Suppose that a system is developed to determine whether a door lock has broken.
This system would have a goal rule with the premise that if all the components of the
lock are functional then the lock is working (i.e., the goal is attained). The subgoals
include all of the subcomponents to the lock. For instance, the key is not worn down, the
tumblers are not worn, the springs are still working, the door works when not locked,
there is clearance for the bolt, and so forth are all subgoals. The inferencing engine will
work through all the rules asking the user questions appropriate for the application. If the
bolt operates while the door is unlocked, then there would be no need to make sure the
bolt has enough clearance. This is the manner in which the inferencing engine will trim
the rules to find the proper solution to the main goal.
Backward chaining is most effective for diagnosis problems because it tries to
prove a goal. Getting evidence through searching the subgoals proves the goal.
Backward-chaining systems can solve very complex problems when they are designed in
modular form [Durkin, 1994].
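The door-lock diagnosis can be sketched as a very small backward-chaining
engine. In the Python sketch below, the rule set and question wording are invented for
illustration; each goal rule lists the premises that must hold, and the engine recursively
proves subgoals, asking the user only when no rule concludes a goal:

    # Goal rules: each goal maps to the premises that must all be proved.
    rules = {
        "lock works": ["key ok", "tumblers ok", "springs ok", "bolt ok"],
        "bolt ok": ["door works unlocked", "bolt has clearance"],
    }
    facts = {}   # evidence gathered so far

    def prove(goal):
        """Backward chaining: prove a goal from rules, else ask for evidence."""
        if goal in facts:
            return facts[goal]
        if goal in rules:
            result = all(prove(premise) for premise in rules[goal])
        else:
            # No rule concludes this goal, so ask the user a question.
            answer = input("Is it true that %r? (y/n) " % goal)
            result = answer.strip().lower().startswith("y")
        facts[goal] = result
        return result

    print("Diagnosis: lock works =", prove("lock works"))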
2.2.3 Forward Chaining
Forward-chaining systems work proactively with the system that is being
modeled. Rules in forward-chaining systems use a premise, as in the backward-chaining
system, and a conclusion. The conclusion is the inference that can be drawn from the
premise. Forward-chaining systems accept information and then scan the rules for a
premise that uses that information. If a rule is found that uses the piece of information,
then the rule is fired. The conclusion is checked against the rule premises to see whether
any other rules are to be fired. During this process, it is possible that more than one rule
will meet the premise and need to be fired. The system design should minimize this
occurrence so that only one rule is allowed to fire in any single rule search cycle.
Choosing the rule to be fired is based on a process called the recognize-resolve-act cycle.
The process of resolving conflicts between rules will vary by the inferencing engine used.
The least sophisticated manner of conflict resolution is to simply fire the first rule found.
Other systems assign a priority to the rules and fire the rule with the highest priority first.
Some systems use control-based rules that make certain that another rule has fired
previously [Durkin, 1994].
A simple example of a forward-chaining application would be a belt line. The
belt line is designed with three different sections and transfer stations between the
individual belts. If the belts have not been carrying a load for an hour, then they should
shut down. Once a load has been placed on an unloaded belt, then all the belts should
begin operating. In this case an expert system has been installed to control the belts. The
expert system monitors the load on each belt and the belts’ run states. The system will
make the changes to the states of the belts based on changes in the information from the
sensors.
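A minimal sketch of such a forward-chaining controller follows (Python; the rule
set, fact names, and conflict-resolution strategy of firing the first matching rule are all
simplifications for illustration):

    # Each rule has a name, a premise over working memory, and an action.
    rules = [
        ("start all belts",
         lambda m: m["load on any belt"] and not m["belts running"],
         lambda m: m.update({"belts running": True})),
        ("stop all belts",
         lambda m: m["belts running"] and m["idle minutes"] >= 60,
         lambda m: m.update({"belts running": False})),
    ]

    def forward_chain(memory):
        """Fire the first rule whose premise matches the working memory."""
        for name, premise, action in rules:
            if premise(memory):
                action(memory)
                print("fired:", name, "->", memory)
                return True
        return False          # no rule fired; the system is quiescent

    memory = {"load on any belt": True, "belts running": False, "idle minutes": 0}
    while forward_chain(memory):
        pass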
A primarily forward-chaining system known as Pitch Expert has been
successfully implemented in kraft pulp mills. Pitch Expert is used to monitor and
diagnose the pitch deposition and pitch dirt at the mill site. The program uses 1200 rules,
3000 schemata, and about 200 functions to control all aspects of the mill that pertain to
the pitch. Pitch is a gluelike wood resin that is insoluble in water and can cause clogs and
deposits that harm the operation of the mill. Problems related to pitch control cost about
$80 million a year in Canada alone. Mills using Pitch Expert can expect to save $22.4
million a year, and not just on pitch-related problems [Kowalski et al., 1993].
Forward chaining works by inferring new information from the information that is
initially asserted or available. The sequence of rule firings (i.e., evaluations) is
important for the overall effectiveness of the expert system. The rules have a mixture of
program control and heuristics. Forward-chaining expert systems have excellent
applications in monitoring processes online.
2.2.4 Neural Networks
Artificial neural networks (ANNs) are data analysis structures that attempt to
work in the same manner as biological neural networks. Biological brains are complex
systems consisting of billions of neurons interconnected by synapses (see Figure 2.10).
Dendrites marshal the synapses’ connections to a neuron from the axons of other neurons.
Human brains average about 100 operations per second. Figure 2.11 shows an ANN
[Neusciences, 2001]. The circles represent the artificial neurons that are connected by
artificial synapses. Each synapse has a weighted value attached to it to represent the
synaptic gap. Unlike biological neural networks, ANNs use floating-point numbers
instead of pulse trains. ANN neurons are input, output, or hidden. Input and output
neurons control the information going into the network and out of the network. The
hidden neurons can have any number of layers and be interconnected in a variety of
ways. The hidden neurons perform matrix translations on the incoming data and pass the
results to the output neurons.
The internal mechanics of the network do not need to be known. Thus, instead of
needing mathematical functions and a clear understanding of how the system functions,
accurate simulations or interpretations can be made based only on the data. By using
known inputs and outputs from the training data, the internal hidden neurons and
synapses can adjust their weights
and connections internally. These weights and connections are not known outside of the
neural network. The neural network then does all the modeling work for the modeler.
The fact that neural nets are trained, not programmed, is their main benefit.
Properly training a network is critical in order to get the desired results. There are
two styles of training for neural networks, supervised and unsupervised. Supervised
training uses examples of the input and output of the process to be modeled. An input
pattern is given a matching output pattern, and the network will iterate its internal
workings until it can reproduce the output pattern within the given threshold. The
network might have to run several test patterns before it attains the desired accuracy. In
unsupervised training, the operator gives the network a set of input data, and the network
then seeks statistical regularities within those data. Working from these regularities, the
ANN can
develop modes of output to represent a given pattern of input.
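The forward pass through such a network is simple to express in code. The
Python sketch below uses one hidden layer, a sigmoid activation, and arbitrary weights;
in a real application the weights would be found by the training procedures just
described:

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def forward(inputs, hidden_weights, output_weights):
        """Propagate inputs through one hidden layer to a single output neuron."""
        hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
                  for ws in hidden_weights]          # weighted sums, then squashing
        return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

    hidden_weights = [[0.5, -0.4], [0.3, 0.8]]       # two inputs, two hidden neurons
    output_weights = [1.2, -0.7]                     # one output neuron
    print(forward([1.0, 0.0], hidden_weights, output_weights))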
Couillet and Prince used an ANN to predict firedamp emissions (i.e., CH4) in
French coal mines [Couillet and Prince, 1999]. The system uses coal production
rates, gas concentration, airflow, the nature of the surrounding rocks, CH4 isotherms,
degassing procedures, and so on. The system outputs the mean methane concentration at
a point in the near future. The model they developed shows the potential of the
approach: the system made satisfactory predictions, warranting further research and
development.
2.2.5 Genetic Algorithms
Taking the metaphor of an expert system that works like the brain a step further,
some systems have added the idea of evolution [NewWave Intelligent Business Systems,
2001]. ANNs have the ability to learn the mechanics of a time series, or a series of
values that changes with time. A genetic algorithm is a collection of ANNs that have the
ability to change themselves over time as the input data changes. Each ANN is a member
of a population that has its own unique characteristics. These characteristics are treated
like the genome in nature. During each generation, there is a period of evolution during
which the fittest ANNs produce offspring. The offspring are still the same species as the
parents but are different in some way. The new generation incorporates an element of
random mutation [Kantrowitz, 2001]. The mutation is a stochastic process that is
random, but because of the fitness pairings the result is nonrandom (i.e., better than
random). The process of evolution involves five steps (a minimal sketch of the loop
follows the list):
1. Evaluate and rank network population.
2. Select fittest ANNs.
3. Crossover (mating) and mutate.
4. Update network population.
5. Discard old population and repeat step 1 until finished.
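These five steps map directly onto a small program. The Python sketch below
evolves a population of bit strings toward an all-ones target; a real application would
evolve network weights instead, and every parameter here is arbitrary:

    import random

    random.seed(2)
    GENES, POP, GENERATIONS = 16, 20, 30

    def fitness(genome):                      # step 1: evaluate
        return sum(genome)                    # count of one bits

    def crossover(a, b):                      # step 3: mate two parents
        cut = random.randrange(1, GENES)
        return a[:cut] + b[cut:]

    def mutate(genome, rate=0.05):            # step 3: random mutation
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)   # step 1: rank the population
        parents = population[:POP // 2]              # step 2: select the fittest
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP)]           # steps 4 and 5: replace, repeat
    print("best fitness after evolution:", max(fitness(g) for g in population))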
Computer game enthusiasts are familiar with genetic algorithms from a user
standpoint. Genetic algorithms form the basis of many computer games’ artificial minds.
One of the most striking examples is the commercial game Creatures, which allows a
user to have one or more hamster-like digital “pets.” The creatures are
autonomous, intelligent, and capable of learning, because they are controlled by a genetic
algorithm brain that can fully interact with several pseudo-biological substructures (e.g.,
digestive, immune, and reproductive systems). Users can interact with the creatures by
teaching them to talk, disciplining them, rewarding them, giving them toys, and giving
them food. The creatures can interact with other creatures and owners through the World
Wide Web. The “gene pool” for the game is ever increasing and diversifying, leading to
smarter and better-adapted creatures. Creatures is an example of how, on a home
computer, genetic algorithms that imitate a full biological design can yield lifelike results
[Grand, 1997].
A classification system for Portuguese granites that uses a genetic algorithm was
developed at Instituto Superior Técnico in Lisbon, Portugal. The traditional approach to
classification was a subjective visual inspection of the rock. To develop the system,
researchers developed a digital image analysis tool. This tool measures 117 different
granite characteristics that the genetic algorithm uses. The algorithm was trained on a lot
of 50 samples that contained 14 different types of granite. The algorithm showed that it
needed only three features in many cases. Further analysis of this fact may reduce the
processing time from its current 142 minutes. Also, newer genetic algorithms that may
increase the speed of processing images are being tested [Muge et al., 1999].
Genetic algorithms have a wide range of applications, and they are most effective
in multidimensional optimization. A genetic algorithm is a search process based on the
same mechanisms found in nature. Genetic algorithms, like most
models, do not guarantee the perfect solution. However, the solutions, if applied to the
right problem, can meet acceptable accuracy without extensive time spent on model
development [NewWave Intelligent Business Systems, 2001].
2.2.6 Frame-Based Systems
Frames are a common way of representing information. A frame is an abstract
data type. Frames have a schema, or structure, that governs where they exist in the
hierarchy relative to one another. Frames can contain properties and methods that
describe the frame. A
frame that has information filled into it is called an instance of that frame. For example,
there can be a frame named “dog,” and this frame may have two properties: name and
breed. There may be an instance of that frame named “my dog” that has the name “Spot”
and the breed of “beagle.” The development of object-oriented programming practices
has helped foster new functionality in frame-based expert systems. Frames are a natural
way of representing real-life objects, while object-oriented programming was developed
for the same purpose. The two terms, object-oriented programming and frame-based
expert system, may be used interchangeably. Most frame-based systems are developed
using object-oriented programming languages.
Frame-based systems have an important piece of functionality: inheritance.
Inheritance is the process of a subframe taking on the characteristics of the parent frame.
The above example of the dog frame can be taken as a subframe to the canine frame. The
canine frame can describe the characteristics of the canine family, thus having only the
information that would make the canine unique as compared to, say, the feline family.
Both a feline frame and a canine frame can be taken as subframes of the mammal frame.
The mammal frame would describe all the properties that make mammals different from
reptiles. The mammal frame and the reptile frame could be subframes to the animal
frame, and so forth. Deciding on the frame schema depends entirely on the application.
In this case there would only be a reason to take the frame to the basic animal if
interspecies interactions were being described by the system. Inheritance is not limited to
a single frame. There is no limit on the number of parent frames a frame can have; if,
however, it has more than one, then this is called “multiple inheritance.” For instance, the
dog frame described above could be the subframe to the canine frame and to the seeing-
eye dog frame. The dog frame would inherit everything from the canine frame and the
special characteristics that are shared by all seeing-eye dogs. Multiple inheritance is
important for describing objects that may exist in multiple worlds. There could be the need
to describe a dog instance as being a dog in one situation or as a seeing-eye dog in
another, while in the same simulation space.
Inheritance does not always work in the proposed manner. In the child frames,
there is the possibility of a drastic difference from the parent. In the canine frame, there
may be a property of number of legs. This should be set to four legs. However, there is
the possibility of having an instance of dog that has only three legs. Child frames can
explicitly override inherited properties.
The properties of each frame may have additional information about that property.
These additional pieces of information are called facets. A facet can be an initial value, a
constraint, a type, or an action. Default-value facets just set the initial value for a
property based on the parent frame or the frame itself. In the case of the canine, the
number of legs default value is set to four. This same property may have a constraint
facet that says the number of legs must be less than five and greater than zero. Constraint
facets only keep the system from assigning a value that is not practical. A type facet
would keep the value from being written to a value that is not appropriate. It would not
be appropriate to set the value on the dog’s number of legs to a string value; depending
on the application it may not be appropriate to set it to a floating point number either.
Action facets are normally processed when a property is requested or changed. When a
property is requested, some computations may need to be made in order to respond to the
request. Also, when a property changes, other properties may be affected and need to be
changed. The actions are carried out by the frame’s methods.
Methods are either inherited from parent frames or native to the frame. Inherited
methods can be changed in the same manner as the properties. A method is a subroutine
attached to the frame to carry out a task on request. For instance, if the dog frame
has a property of birth date and a property of age, then it must also know the current date.
When the current date is set on the frame, then the frame must execute a method to
update the age property. Alternatively, when the dog age property is requested, the frame
could execute a method to get the current date and calculate the age. The calling object is
not aware of which method the dog frame uses to calculate the age property. This is a
concept known as encapsulation.
Encapsulation is the process of hiding an object’s methodology from other
objects. In the above example, the dog’s age could be calculated in a variety of different
manners. All the object requesting the age cares about is the dog’s age, not the
methodology that produced it. Because of encapsulation, objects need to be concerned
only with the “what” and not the “how.” Encapsulation’s
greatest use is in creating frames that can be shared among many different styles of
applications. This works very effectively when paired with the concept of multiple
inheritance. Taking the frame above that inherited both the canine frame and the seeing-
eye dog frame, this single frame could be used in two different applications that are
concerned with the object’s different worlds.
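The dog example translates naturally into object-oriented code. The Python
sketch below shows inheritance (Dog is a subframe of Canine), a default-value facet and
a constraint facet on the number of legs, and encapsulation (the age is computed on
request, and the caller never sees how):

    import datetime

    class Canine:
        def __init__(self):
            self._legs = 4                    # default-value facet

        @property
        def legs(self):
            return self._legs

        @legs.setter
        def legs(self, value):
            # Type and constraint facets: an integer between one and four.
            if not isinstance(value, int) or not 1 <= value <= 4:
                raise ValueError("number of legs must be an integer from 1 to 4")
            self._legs = value

    class Dog(Canine):                        # Dog inherits from Canine
        def __init__(self, name, breed, birth_date):
            super().__init__()
            self.name, self.breed = name, breed
            self._birth_date = birth_date

        @property
        def age(self):
            # Encapsulation: the caller asks "what", never "how".
            return (datetime.date.today() - self._birth_date).days // 365

    spot = Dog("Spot", "beagle", datetime.date(2020, 5, 1))
    spot.legs = 3                             # a three-legged instance is allowed
    print(spot.name, spot.breed, spot.legs, spot.age)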
Frame-based expert systems can run asynchronously or synchronously. This is
accomplished because the frames will process information only when a property is set or
a method is executed. A synchronous frame-based expert system may monitor sensors
and, on a sensor change, update the system and then check that the results of the changes
are acceptable. If the results are acceptable, then the system can make the changes to the
real-world devices; otherwise it can raise an alert. Asynchronous frame-based systems
need to have a frame that controls the simulation time and space. For both cases, when
the systems are initialized, proper instances of the frames must be created and then their
interactions monitored [Durkin, 1994].
Work has been done since the 1970s to create troubleshooting guides for the roof
support problems that may be encountered in underground construction that uses roof
bolts. The guides were not user-friendly and were quite bulky, requiring an expert to use
them. More recently, a frame-based expert system was developed to diagnose roof
support problems and suggest remedies. The system does not need the user to be an
expert in roof support to get a solution to the problem. This program was developed
with the Kappa-PC development tool and runs in the Microsoft Windows environment.
The system is ideal for aiding the mining engineer in diagnosing roof support problems.
3. Existing Simulation Packages
Since the early 1960s, applications have been developed to simulate the space and
time relationships between mining equipment, mainly in connection with transport
systems [Topuz et al., 1989; Zhao and Suboleski, 1987; Ramachandran, 1983]. However,
for the past 10 years little work has been done on modernizing the simulators by adapting
them to the new computing environments available and by allowing for more
complicated mining plans and extraction procedures. Zhao and Suboleski (1987) give a
detailed account of the existing mine simulators at the time, including CONSIM [Topuz
et al., 1989], FACESIM [Prelaz et al., 1968], and FRAPS [Haycocks et al., 1984]
developed at Virginia Tech, as well as UGMHS developed at Penn State. Additionally,
simulators with graphics or animation capabilities, such as MPASS-2, are also
mentioned. SAM, the simulator developed by Zhao [Zhao and Suboleski, 1987] can be
added to this list as well as FACEPROD, a simulator developed by Hollar [Hollar, 2000].
These dedicated simulation packages were developed in general purpose programming
languages such as Fortran, Pascal, and Basic.
In addition to these simulation packages, programs written in general purpose
simulation languages have recently been applied toward the development of discrete
event simulation software packages for both underground and open-pit mining operations
[Vagenas, 1999; Sturgul, 1999].
3.1 Simulators in Traditional Simulation Languages
Since the late 1950s, engineers have been using simulation languages to
investigate underground coal mining systems. These simulations were developed using
traditional simulation languages such as GPSS, GPSS/H, AUTOMOD, SPS, SIMULA,
and so forth. A wide variety of organizations have developed these types of simulations.
Possibly the first paper published on the simulation of mining systems was by
Koenigsburg in 1958. Koenigsburg provided a mathematical solution to a general queue
problem. This paper included a written solution to the production from different numbers
of crews working on different faces for an underground coal mine. Each crew would
work on the face and when it was finished would move to the next face. At the next face,
if there were another crew working on the face, the crew would wait in a queue until the
face was clear. The results were put into practice in actual mines in Illinois [Sturgul,
2000].
The GPSS (General Purpose Simulation System) language and the event-driven
version, GPSS/H, are some of the best-documented discrete-event simulation languages
for use in mining situations. In his text Mine Design: Examples Using Simulation, Sturgul
(2000) describes GPSS/H applications for mine design situations. Examples and case
studies in this book, as well as publications by the author, demonstrate the applicability
and ease of use of GPSS/H to mining and minerals engineering simulation problems.
Bethlehem Steel designed a belt simulation named BETHBELT-1 written in GASP V.
GASP V is a programming language that was designed for discrete-event simulation.
Another belt simulation, designed to handle 25 belts and 12 loading zones, was written in
PL/1.
There are many other examples of the application of simulation languages to
mining (e.g., Vagenas et al., 1999; Scoble and Vagenas, 1998). Also, the First
International Symposium on Mine Simulation via the Internet was held in 1996.
These applications, however, also have many limitations. The most striking limitation is
that, on many occasions, in order to change some of the parameters of a problem
(especially the parameters specifying the geometry of a given layout), the simulation
scenario has to be recomputed.
3.2 Simulators in Dedicated Programming Languages
Since the 1960s there have been many different coal mining face operation
simulators developed using a dedicated programming language. These simulators were
written in Fortran, Basic, or C. FACESIM, CONSIM, UGMHS, SAM, and FACEPROD
are simulators that fit in this category.
FACESIM (FACE operations SIMulator) by Prelaz [Ramachandran, 1983] was
probably the first of this type of simulator. The Office of Coal Research sponsored the
work in the early 1960s. The punch card program was designed to simulate a working
coal section with up to three shuttle cars, one miner or loader, and one dump point.
FACESIM was written in Fortran and used a collection of functions to calculate the
various cycles and other times.
CONSIM was created as an update of the FACESIM program and was also
written in Fortran. CONSIM augmented the capabilities of FACESIM, including the
ability to account for equipment breakdowns. CONSIM also reduced the amount of input
information by creating a routine to generate some basic cut sequencing information.
CONSIM was used throughout the world, including China and South Africa.
Another Fortran program was written in 1969 named SIMULATOR 1. This
program simulated room-and-pillar face operations [Manula and Suboleski, 1982]. The
Department of Mineral Engineering at Penn State later introduced the Underground
Materials Handling Simulator (UGMHS). UGMHS is broken into many different
subsections that each simulate a single operation. The program was used in productivity
assessment, productivity analysis, and feasibility assessment. In the late 1980s the
Simulations/Animation Model (SAM) was developed. SAM graphically showed the
simulation process as it occurred. The user could watch the simulation in real time or
simply get a report. SAM was also the first simulator to present results in a graphical
form [Zhao and Suboleski, 1987].
Ketron, Inc., developed MPASS-2 in 1983 [Haycocks et al., 1984]. MPASS-2
allows the user to create a room-and-pillar mine that is either a conventional or
continuous system. The program shows the operation of a miner, shuttle car, and roof
bolter. The Department of Mining and Minerals Engineering at Virginia Tech also
developed FRAPS (Friendly Room-and-Pillar Simulation). FRAPS made the input
procedure much easier because the user was interactively prompted for input, and at the
same time FRAPS used special routines to determine logical problems with the input
data. The Chamber of Mines, South Africa, developed COMSIM in 1989. COMSIM can
simulate conventional, continuous, and retreat room-and-pillar mining systems. The
input data is minimal and mostly graphical.
Hollar independently developed the program FACEPROD in 1982 using
Microsoft Quick Basic. FACEPROD is an easy-to-use program that produces accurate
results. One of the unique aspects of this program is the manner in which the data was
stored, allowing the user to save teams of equipment and mine layouts. Then the user can
run a simulation with any combination of the two. The Turris Coal Company used the
program from 1986 to 1992. A Windows version that tied into a CAD program and was
written in Visual Basic was begun but never completed [Hollar, 2000].
3.3 Conclusions and Summary
None of the simulators covered in this chapter are Windows-based. Many of the
GPSS/H simulations have graphical output that is viewed from Microsoft Windows.
None of these simulators have the capability of being used over the Internet. The existing
simulators are also created for the general case of one continuous miner, two to three
shuttle cars, one roof bolter, and a single feeder for a given mine layout. There is no tool
for doing simulations with a variety of equipment and changes in geometry. In order to
share information, it is important to have a tool that is Web-based. The limitations of
existing simulators are mainly a product of the technology that was used to create them.
There is no functionality in GPSS/H or Fortran for developing object-oriented programs,
which are the basis of frame-based expert systems. The expert system approach is the
best way to allow for any sensible configuration of equipment and mine layout.
4. Simulator Structure
4.1 A Frame-Based Expert System
Frame-based expert systems, as described in section 2.2.6, are based on frames (or
objects) that inherit properties from parent frames (or objects). This is done in a manner
similar to object-oriented programming. WebConSim is a frame-based expert system.
This allows for simulations that use any feasible equipment configurations. Within the
simulator, the equipment is not controlled by equations or queues that govern cycle or
service time. Instead, the equipment examines the simulation environment and makes
decisions about its own state based on the environment. This is done by utilizing four
parent frames: application, information, mine, and equipment.
The top-level frame is the application frame; this frame is outlined in Figure 4.1.
The application frame at the beginning of the simulation reads the input information from
the database. Based on the available information, the application frame will build the
child frames that are needed out of the other three parent frame types. Simulation time is
begun at the time specified by the user (simulation need not begin at time zero). This
time is set on the equipment frames that process their current states. The time that the
equipment will need to complete its current activity is set for each piece of equipment.
The simulator finds the minimum of these values (i.e., the next time that something will
occur) and begins over again. This process is shown in a generalized flowchart form in
Figure 4.2. It occurs during the state change processing that the equipment will need
some utility calculations to be made, mainly for statistics. The application frame
provides these calculations. Additionally, the application frame is responsible for
handling the interface of the simulation engine with exterior applications. This frame is
the only frame that is not modified to create the simulation objects.
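This time-advance mechanism is a standard next-event loop. The Python sketch
below is a simplified, language-neutral rendering of it; WebConSim itself implements the
loop in Visual Basic, and the class and method names here are stand-ins rather than the
actual WebConSim objects:

    class Equipment:
        def __init__(self, name, ttne):
            self.name = name
            self.ttne = ttne                  # time to next event

        def process_state_change(self, clock):
            """Placeholder state logic; a real frame derives this from its state."""
            print("t = %5.1f  %s changes state" % (clock, self.name))
            self.ttne = clock + 10.0

    def run(equipment, start_time, end_time):
        clock = start_time                    # simulation need not begin at zero
        while clock <= end_time:
            clock = min(e.ttne for e in equipment)   # next time something occurs
            if clock > end_time:
                break
            for e in equipment:
                if e.ttne == clock:           # every frame due at this time acts
                    e.process_state_change(clock)

    run([Equipment("miner", 5.0), Equipment("shuttle car", 8.0)], 0.0, 30.0)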
The information frame (outlined in Figure 4.3) is the template for objects that carry data
from the database. Some examples of objects that use the information frame are the cuts, paths,
and statistics objects. These frames are also responsible for tracking the simulation
progress. For example, the locations object tracks the distance that has been mined and
the distance that has been bolted, among others. If there is an error in the simulation
process or in the input data, it most likely will result in an improper request for data. The
information frames are the main data error handlers, raising exceptions when erroneous
data are requested. This alerts the user to a possible problem in the entered data. The
majority of the objects in the simulation engine are based on this frame.
Information
• Collect database information
• Dispense relevant information
• Monitor for incorrect requests
• Track relevant simulation variables
Figure 4.3--Information Frame Outline
The mine frame is similar to the application frame in that it is a high level frame,
as shown in Figure 4.4. The mine frame is responsible for containing all the information
about the mine and the contents. The mine object tracks the current cut being mined and
the current cut being bolted. This task is assigned to the mine frame instead of an
information frame because the mine frame is referenced by every equipment frame. This
means that the equipment can gain quick access to the information. However, this is not
enough information to justify the overhead of an entire frame object. The mine frame is
also the top-level frame to which every frame will have access. This is why the mine
frame is the vehicle that can share information between information frames and between
information frames and equipment frames.
Mine
• Track the current cut
• Track the current cut being bolted
• Share information about all equipment
• Share all information frames
Figure 4.4--Mine Frame Outline
The equipment frame provides the logical processing and time advancement
routines for the simulation engine. Figure 4.5 shows the outline of the equipment frame.
The equipment frame has the time set on it by the application frame. When this time is
set, the equipment will check to see whether there is a mining delay, described below. In
the event that the equipment should be delayed, it is. The equipment will check that it is
not time for the equipment to breakdown. If it is time for a failure, then the equipment
will make a snapshot of its current state and conditions that will be reset when the failure
time is over. Otherwise, if the new time is equal to the equipment’s time to next event
(TTNE), or the time that the equipment needs to perform an action, the state change logic
is processed. During the process of state-change logic, many information frames will be
accessed to gather or update information. The equipment frame will calculate its TTNE
and set its current state. When the state is set, the equipment frame will notify the report
collector. The frame will also make certain that if the TTNE is greater than the time until
the next equipment breakdown, then the TTNE is set to the breakdown time.
Equipment
• Monitor current state
• Change states based on current simulation time
• Calculate time to complete current state
• Predict when a breakdown will occur
• Calculate extent of breakdown
• Report relevant information
• Cause a delay in the time to complete a task when needed
• Update relevant information frames
• Collect information from information frames
Figure 4.5--Equipment Frame Outline
The simulator itself is an ActiveX Dynamic Link Library (DLL), contsim.dll,
that is accessed from the Web site through a binary executable that carries out the
simulation. The object models for the simulator are shown in Figure 4.6 and the
executables are shown in Figure 4.7. The frames that are discussed above are practical
models that are not actually encoded into the system. The simulator DLL was developed
in Microsoft Visual Basic, which has no mechanism for inheritance. The objects that
procedure is used for setting the time on the new equipment piece. There are several
properties that are exposed for this object. It is critical to the simulation run that the
dbPath (path to the database locations), Layout (LayoutID in the Layouts table), Team
(TeamID in the Teams table), Path (PathsID in the Paths table), CutSequence (CutsID in
the Cuts table), and Waypoints (WaypointsID in the Waypoints table) properties are set
prior to running a simulation. The simulation is set up by calling the PrepareSim
subroutine. This routine will connect the simulator to the database and will build all the
components into memory. This step is necessary because it allows for outside processes
to connect to each individual object to gather any available information. After setting the
StartTime and EndTime properties, calling the Simulate subroutine will begin the
simulation process.
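A minimal usage sketch follows, in Visual Basic, matching the simulator's implementation language. The ProgID contsim.Application, the database path, and the identifier values are assumptions for illustration; the property and routine names are those described above.

    Dim sim As Object
    Set sim = CreateObject("contsim.Application")   ' hypothetical ProgID for the simulator DLL
    sim.dbPath = "C:\WebConSim\data.mdb"            ' hypothetical database location
    sim.Layout = 1                                  ' LayoutID in the Layouts table
    sim.Team = 1                                    ' TeamID in the Teams table
    sim.Path = 1                                    ' PathsID in the Paths table
    sim.CutSequence = 1                             ' CutsID in the Cuts table
    sim.Waypoints = 1                               ' WaypointsID in the Waypoints table
    sim.PrepareSim                                  ' connect to the database and build objects
    sim.StartTime = 0
    sim.EndTime = 480                               ' one 480-minute shift
    sim.Simulate                                    ' run the simulation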
The report object is an intermediary between the simulator and the application
that is running the simulation. The report object has one major subroutine, AddEvent.
Every object in the simulator calls the AddEvent subroutine when a change has occurred.
The calling object sends an identification number, description, and a copy of itself. The
subroutine keeps a list of all the events that have occurred and searches them for an
infinite loop. The routine determines what type of object is adding the event. Then it
fires an event that corresponds to the object that is changing. For instance, if the shuttle
car has a state change, then the report object will add it to the list and raise the
AddShuttleCarEvent. This event will pass the shuttle car object over to any object that is
connected to the report object. All the reports that are generated use this style of passing
information from one process to another.
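A sketch of this dispatching mechanism follows. AddShuttleCarEvent is named in the text; the AddMinerEvent declaration and the history bookkeeping shown here are assumed details.

    ' Hypothetical sketch of the report object's AddEvent mechanism.
    Public Event AddShuttleCarEvent(ByVal car As Object)
    Public Event AddMinerEvent(ByVal m As Object)   ' assumed; the text names only the shuttle car event

    Private mEvents As New Collection               ' event history, searched for infinite loops

    Public Sub AddEvent(ByVal id As Long, ByVal desc As String, ByVal source As Object)
        mEvents.Add CStr(id) & "|" & desc           ' keep the list of all events that have occurred
        If TypeOf source Is ShuttleCar Then
            RaiseEvent AddShuttleCarEvent(source)   ' pass the shuttle car to any connected object
        ElseIf TypeOf source Is Miner Then
            RaiseEvent AddMinerEvent(source)
        End If
    End Sub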
The mine object is the only object that has a reference in every other object. This
is the manner in which the equipment is able to check on the conditions of other
equipment and objects. For instance, before the miner starts cutting, it needs to check
whether there is another miner that is cutting in the same air split. It queries the mine
object for the collection of miners. Then it goes through the collection of miners,
excluding itself, checking that the current state is not cutting. The mine object also is
responsible for keeping the information about the mine’s physical parameters. The
various heights and densities are stored in the mine object. The mine object also
populates the locations collection. It does this during the PrepareSim routine of the
application object. The locations are the same as the waypoints.
Locations are critical to keeping track of the simulation progress. Each location
knows the location inby, outby, rightby, and leftby. The location also keeps track of each
direction’s total, mined, and bolted distance. The miner and bolter primarily use
locations to keep track of progress. The shuttle cars do not check whether an area is
bolted.
All equipment movements, including those of the shuttle cars, are made along paths. The only
exception to this is when a piece of equipment moves one step inby, outby, rightby, or
leftby. Paths are read in from the database and work on a library system. When a piece
of equipment needs to go from one location to another, it queries the paths object for a
pathinfo object. This object stores the distance, number of turns, and number of check
curtains. Each pathinfo is assigned a unique identification number. When a piece of
equipment receives a path, the identification number is put into a list. This list is used to
ensure that no two pieces of equipment get the same path at the same time. This is one
of the ways that the simulator does collision detection. It is possible to have several paths
from one location to another. Requested paths can be unidirectional or bi-directional.
Unidirectional paths must be located in the database table with the “from” location and
“to” location, as requested. However, bi-directional paths do not need the proper from
location and to location. For example, if a miner needs to move from location 8 to 11
then it requests a unidirectional path. There must be a record in the path table that has the
information for location 8 to 11. A cable shuttle car that must travel along the same path
from the feeder to the miner can request a bi-directional path. A shuttle car that needs to
travel from the miner at location 11 to the feeder at location 18 would query the paths
object. The database can contain a record that is from location 18 and to location 11 or
vice versa. In the case that the paths object cannot locate a path in the database meeting
the requirements, an error is raised by the path object. This error must be handled by the
application running the simulator. In the applications already developed, a line is written
in the report.
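The lookup just described might be sketched as follows. The names are hypothetical; LookupPath stands in for the underlying database query, and the in-use list bookkeeping is omitted.

    ' Hypothetical sketch of the uni-/bi-directional path lookup described above.
    Public Function GetPath(ByVal fromLoc As Long, ByVal toLoc As Long, _
                            ByVal biDirectional As Boolean) As PathInfo
        ' try the record with the exact "from" and "to" locations
        Set GetPath = LookupPath(fromLoc, toLoc)
        If GetPath Is Nothing And biDirectional Then
            Set GetPath = LookupPath(toLoc, fromLoc)   ' a reversed record is acceptable
        End If
        If GetPath Is Nothing Then
            ' no suitable record: raise the error that the hosting application must handle
            Err.Raise vbObjectError + 513, "Paths", _
                      "No path from " & fromLoc & " to " & toLoc
        End If
    End Function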
The cuts object stores the cut sequence that is described in the database. The roof
bolter and the miner use this object. There is a global cut and bolting number that allows
multiple miners and multiple bolters to avoid double-mining or double-bolting. Querying the
cuts object for a cut number returns a cut object. The cut object keeps track of the
location, entry, direction, depth of the cut, location to cut from, and air split. The entry is read from the database, but the simulator generates the entry it uses; the database value serves as a double-check that the generated number is acceptable. In the
database table there are two Boolean values: up and right. These two values determine
the direction in accordance with the information in Table 4.1.
Table 4.1--Mining Direction Flags
Up Right Direction or Action
True False Cut inby the indicated location
False True Cut rightby the indicated location
False False Cut leftby the indicated location
True True Raise error--Improper data in cut table
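Decoding these flags might look like the following sketch; the function name and return strings are illustrative.

    ' Hypothetical sketch of decoding the Up/Right flags per Table 4.1.
    Public Function CutDirection(ByVal up As Boolean, ByVal rightFlag As Boolean) As String
        If up And rightFlag Then
            Err.Raise vbObjectError + 514, "Cuts", "Improper data in cut table"
        ElseIf up Then
            CutDirection = "inby"
        ElseIf rightFlag Then
            CutDirection = "rightby"
        Else
            CutDirection = "leftby"
        End If
    End Function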
The statistics object provides the most important utility calculation offered by the application object. When a statistical calculation needs to be performed, the program uses the statistics object. This object can sample from four different distributions: uniform, normal, exponential, and user-defined. The uniform, normal, and exponential distributions are described in section 2.1. The user-defined calculations are based on bins that are stored in the database and entered by the user. For a user-defined distribution, the statistics object draws a random probability and looks it up in the database table to find the value to return.
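A sketch of such a sampler follows. The signature, the parameter meanings, and LookupBin (standing in for the database bin lookup) are assumptions.

    ' Hypothetical sketch of the four-distribution sampler described above.
    Public Function Sample(ByVal dist As String, ByVal a As Double, ByVal b As Double) As Double
        Dim u As Double
        u = Rnd: If u = 0 Then u = 0.000001         ' guard against Log(0)
        Select Case dist
            Case "uniform"
                Sample = a + (b - a) * Rnd          ' a = minimum, b = maximum
            Case "exponential"
                Sample = -a * Log(u)                ' a = mean
            Case "normal"                           ' Box-Muller transform; a = mean, b = std. dev.
                Sample = a + b * Sqr(-2 * Log(u)) * Cos(6.28318530718 * Rnd)
            Case "user"
                Sample = LookupBin(Rnd)             ' probability looked up in the database bins
        End Select
    End Function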
4.3 Equipment Objects
Equipment objects are significantly more complicated than the support objects.
This is not simply because they have to process the state changes, but because of all the
variables that must be taken into account for every simple operation. Every equipment
piece has three controlling properties, Time (Figure 4.8), TTNE (Figure 4.9), and State
(Figure 4.10). When these properties are set from either inside the equipment object or
outside the equipment object (e.g., from another equipment object or the application
object), then the state change and reporting mechanisms are activated. When the time
property is set, the equipment first checks whether it is time for a breakdown that was
calculated when the TTNE was set. Then the equipment checks the delay object, if it is
in use, to see whether there is a mine delay that affects the equipment. If the time being
set is the same as the object’s TTNE, then the state changes are processed. During the
process of state changes, the equipment will add to a description of the process and
calculation made. This description is used in the verbose report, discussed in section 4.4.
When the state changes, the report object is notified, as described in section 4.2. It is
important to note that the equipment can go into a wait state. In this state the equipment's TTNE is set to 10⁹ time units, a value selected to represent an "infinite wait." This time represents about 31.7 years (in seconds), and it is not practical to run this type of simulation for that many time units. Equipment can also set its TTNE to an "immediate wait" of 10⁻⁹ time units. This value is used when the equipment needs to reprocess its state-change logic on the next simulation cycle. When the time is set after the delay check, if the TTNE is set to the infinite wait, then the state-change logic is processed during the next simulation cycle.
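In code, these two sentinels might simply be module-level constants; the names are illustrative.

    ' "never": with one-second time units, 1E9 units is about 31.7 years
    Public Const INFINITE_WAIT As Double = 1E+9
    ' reprocess the state-change logic on the next simulation cycle
    Public Const IMMEDIATE_WAIT As Double = 1E-9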
4.3.2 Roof Bolter
The roof bolter functions similarly to the miner that will be described below.
Table 4.2 lists all the possible roof bolter states, notes when each is processed, and points to the flowchart that shows each state's process. A roof bolter is not required for a simulation run. This feature was added to allow for the future addition of a
miner bolter object that is a modification of the miner object. Roof bolters also have the
feature of allowing for a different bolting rate based on the entry that is being bolted.
This accommodates different lengths of bolts varying by the entry. There is an object,
based on the information frame, that is not described in section 4.2. This is the bolt by
entry object. It is responsible for providing the proper roof bolting rate information when
it is being calculated. This feature is not required to run a simulation, but the
functionality is the same, even if all the entries use the same rate information. Another feature unique to WebConSim is that an equipment group may contain any number of roof bolters, provided it is practical to do so. The roof bolters will request from the
cuts object the next cut that needs to be bolted and then enter the wait state.
Table 4.2--Roof Bolter States
State Description Flowchart
Broken This state occurs when the bolter is recovering Figure 4.12
from a failure that will stop the bolter from
operating.
Delayed This state occurs when an outside influence has Figure 4.13
prevented the bolter from operating.
Bolting This occurs when the bolter has completed Figure 4.14
bolting an area.
Tramming This occurs when the bolter has arrived at a new Figure 4.15
place to bolt.
Waiting This occurs when the bolter supposes that it will be
able to tram to a place that requires bolting.
4.3.3 Miner
The miner is the only piece of equipment that can control other equipment’s
states. When the miner has sensed that a shuttle car may be ready to be loaded, it is the
miner that changes the shuttle car’s state to “being loaded” and sets its TTNE (Figure
4.21 and Figure 4.22). One of the features that WebConSim has, because it is a frame-
based expert system, is that there may be multiple miners in an equipment configuration.
Multiple miners may load shuttle cars simultaneously if they are working on cuts that are in different air splits. The miners also are not confined to a single air
split. When the miner has mined out a place, then it simply requests the next place to
mine from the cuts object. Currently, the only limitation in this system is that the shuttle
cars have the responsibility to select the miner that they will tram to from the feeder.
This will be discussed in section 4.3.4.
Table 4.3--Miner States
State Description Flowchart
Broken This state occurs when the miner is recovering Figure 4.17
from a failure that will stop the miner from
operating.
Delayed This state occurs when an outside influence has Figure 4.18
prevented the miner from operating.
Cutting This occurs when the miner has completed Figure 4.19
loading a shuttle car.
Tramming This occurs when a miner has arrived at the next Figure 4.20
cut.
Waiting to This occurs when the miner is waiting for a Figure 4.21 and
Load shuttle car to be ready for loading. Figure 4.22
Waiting to This occurs when the miner is ready to tram to a Figure 4.23
Tram new cut.
[Flowchart omitted. The miner's waiting-to-tram state process checks whether the current state is waiting to tram, whether a path to the next cut is available, and whether the next area is bolted and clear; it then calculates the tram distance adjustment and the tram time, and sets the TTNE and the state to tramming. If no path is available, the TTNE is set to immediate wait; if the next area is not ready, the TTNE is set equal to that of the equipment occupying the area.]
Figure 4.23--Miner Waiting to Tram State Process
4.3.4 Shuttle Cars
The shuttle car has, by far, the most states of any piece of equipment; these states are described in Table 4.4. Shuttle cars have so many states because
they have the most opportunity to wait on other equipment. There are two queue points
where a shuttle car can wait, at the miner or the feeder. There are no limitations on the
number of shuttle cars, provided there is at least one. The paths object provides the only
collision detection of shuttle cars by not allowing two shuttle cars to move along the
same path concurrently. One limitation of the shuttle car is that, when there are multiple miners, the shuttle car itself decides which miner should be serviced. The
miner is chosen by its current state and the number of other shuttle cars that are servicing
that miner. If the miner is cutting, or waiting to load and there are more than two shuttle
cars servicing it, the shuttle car will look at the next miner. If no suitable miner is found,
the last miner examined will be chosen. For a better simulation of the activities of a
super section (i.e., a mining equipment configuration with two miners and more than five
entries), this behavior must be modified. The application that is running the simulator is
able to modify the miner chosen by trapping the miner-chosen event. One of the reasons
for this limitation is that no standard behavior of shuttle car dispatching was identified.
This behavior varies between and within organizations. Because the application can override the default, the lack of a built-in dispatching system is not considered a limitation. There is also the special case of the
shuttle car arriving at a place where the miner had just moved from (i.e., the miner
completed the cut while the shuttle car was tramming to it). When this happens, the
shuttle car will follow the same path the miner took to the new cut.
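The default selection rule might be sketched as follows; the state names and the CarsServicing count are assumed details.

    ' Hypothetical sketch of the shuttle car's miner-selection rule described above.
    Public Function ChooseMiner(ByVal miners As Collection) As Object
        Dim m As Object
        For Each m In miners
            Set ChooseMiner = m                    ' remember the last miner examined
            If Not (m.State = "Cutting" Or _
                    (m.State = "WaitingToLoad" And m.CarsServicing > 2)) Then
                Exit Function                      ' suitable miner found
            End If
        Next m
        ' if no suitable miner was found, the last miner examined is returned
    End Function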
Table 4.4--Shuttle Car States
State Description Flowchart
Broken This state occurs when the shuttle car is Figure 4.24
recovering from a failure that will stop the shuttle
car from operating.
Delayed This state occurs when an outside influence has Figure 4.25
prevented the shuttle car from operating.
Being Loaded This occurs when the miner is finished loading Figure 4.26
the shuttle car.
Waiting at the This occurs when the shuttle car arrives at the Figure 4.27
Feeder feeder, whether the feeder is occupied or not.
Tramming to This occurs when the shuttle car arrives at the Figure 4.28
the Feeder feeder’s queue point.
Tramming to This occurs when the shuttle car arrives at the Figure 4.29
the Miner miner’s queue point.
Waiting to This occurs when the shuttle car is ready to begin Figure 4.30
Switch in being loaded by the miner and is at the miner’s
queue point.
Unloading This occurs when the shuttle car has completed Figure 4.31
unloading into the feeder.
one generating the three standard reports (Figure 4.7) because it has special memory
requirements that would be problematic if multiple reports were requested
simultaneously.
The runsim program has three major classes, as shown in Figure 4.7. Each class
is responsible for generating a different report. The different reports are separated in
multiple classes because each report interacts with the simulator in a different manner.
The extended report gives the state of the mine at every simulation cycle. This
reporting object creates an instance of the simulator’s application object and waits for the
EndGetTTNE event to fire. When this event fires, the extended report class gathers the
current time, current cut, and bolt number. Then it cycles through all the equipment and
collects the current state, last state, location, and TTNE. This information is formatted
onto a single line, as shown in Figure 4.32. This report is useful in examining the state of
the equipment and their interactions throughout the simulation. This report can also be
used as input to a visualization of the simulation. Because this report shows a line only
when there is a change in the simulation, a visualization program can interpolate between
the locations and calculate the state of the equipment between the simulation cycles. This
report is also useful to search for equipment that is being underutilized because it is
waiting or queuing often.
If interesting results are shown on the standard report, then the verbose report or extended
report can be used to ascertain the source of the anomalies. Those reports can also be
used to locate ways of tweaking the configuration in the mine design.
Figure 4.34--Example of a Standard Report
The multicycle report, shown in Figure 4.35, is an extension of the standard
report. It functions in the same manner as the standard report except that the tables
presented are not built during the simulation. This report is intended to allow the user to
run the same simulation for many cycles and report the output. For equipment using nondeterministic statistics, this report gives a more accurate representation of expected performance. The report works by placing each cycle's output into tables in the
sims.mdb, described in section 5.1. After the last simulation cycle, the report generator
creates a summary of the individual tables that are output in the report. Thus, the
multicycle report is identical in form to the standard report but shows summary
information over the course of the many simulation cycles.
information in real-time. Many of these applications automate tasks that used to be
performed by skilled laborers. The major advantage of these systems is that operating
them requires only access to the network on which it is deployed. The other major
advantage to the Web-based deployment is that an upgrade on the software needs to be
installed only on the server location and not on the clients.
5.2.2 Server Information
WebConSim was developed to take full advantage of the Internet. Many of the Web servers in use today are Unix-based Apache
servers. This platform is powerful but does not interact well with desktop applications,
mainly because there is no connection to Microsoft’s ActiveX technology. For this
reason, WebConSim was developed on Microsoft Windows NT 4 and Microsoft
Windows 2000 running Microsoft Internet Information Server (IIS) 4 and 5, respectively.
ActiveX technology is the third generation of Microsoft's OLE (Object Linking and Embedding) technology. OLE is a framework that allows different software to interact
regardless of its vendor, development time, programming language(s), location on a
machine, or location on the network. This is transparent to most users but is utilized in
the architecture of nearly every major application. For example, the simple action of
copying a graph from Microsoft Excel to Microsoft Word is handled by OLE technology.
Without ActiveX, producing component-based software (like WebConSim) would be
extremely difficult. ActiveX technology allows for the ease of interaction between the
three components of WebConSim.
The Web server software includes Microsoft’s Data Access Components
(MDAC) version 2.6. MDAC is a collection of software objects that allow for the fast
and reliable connection to data using Open Database Connectivity (ODBC). Because of
ODBC, the database, which is described in the database section, is not limited to any
specific vendor (e.g., the database can be a Microsoft Access Database, an Oracle
database, or a Microsoft Excel Spreadsheet). More specifically, not all of MDAC is
utilized by WebConSim, only the ActiveX Data Objects (ADO).
Server security, bandwidth, speed, storage space, and so forth need to be based on
the amount of usage of the simulator, types of simulations to be run, and a company’s
specific needs and concerns. These considerations would need to be studied in a live
installation, that is, one that is being used on a regular basis. Such testing information would be
unattainable from a development version of WebConSim. The IIS administrator on a
particular installation can address these considerations.
5.2.3 WebConSim Implementation
Dynamic Web applications are the heart of business on the Internet and the heart
of WebConSim. Thus, all access to the simulator’s resources is controlled by a Web-
driven front end. This front end is the server side of the client/server architecture, which
supplies the information to the Web browsers (clients).
The Web site is designed to have maintenance and action areas.
• The maintenance area is how the user inputs the information into the simulator
database. This area has features that automate several procedures, such as
equipment copy and paste functions (reducing repetitive entry). These
sections are actually an HTML implementation of managing tabular datasets.
The report maintenance area contains all the output from simulations that the
user has not deleted. These are kept in HTML tables that allow the reports to
be viewed by any standard Web browser. Through the Web site, every
multiple-step process is completed using a wizard.
• The action section is where the user can enter the cut sequence and travel
paths and run the simulation. These three procedures take the user through
wizards that attempt to minimize the amount of information to be entered.
Depending on the number of entries and amount of equipment, the wizards
can generate very large (greater than 800x600 pixel) displays. Almost any
standard Web browser can handle the large displays. The main goal of the
Web site is to minimize the amount of data entered and maximize the user’s
throughput.
The largest design consideration in creating the front end was making certain that
it can be viewed by as many Web browsers as possible, in other words, its universality.
Examples for such clients include all Microsoft Windows, Palm Pilot, Unix, and Apple
Macintosh Web browsers, or any other Internet Explorer (IE) 3.02 compatible browser.
This site was tested using Windows CE 2.0, Windows 2000 (IE 5, Netscape 4.7), Windows 95 (IE 3.02), and Red Hat Linux 6.2 (Netscape 4.7, Lynx). It should be noted
that Lynx’s display is not very compatible with the layout of several of the pages. This is
because it does not support tables. The front end uses tables on nearly every page to
display data. A very advanced user can use WebConSim from Lynx, but a nontable
version would have to be developed to display properly on Lynx.
Internet Explorer and Netscape (both current and older versions) are more than
capable of navigating and using all the site’s features. This is because all HTML tags
that were used are compatible with IE 3.02. Several client-side functions assist in data
validation and site navigation. These scripts were all written in JavaScript in order to be
compatible with Netscape, because Netscape is incapable of processing VBScript.
5.2.4 Active Server Pages
The Internet Information Server (IIS) supports Common Gateway Interface
(CGI), which is the standard for creating dynamic or data-driven Web pages. However, it
also supplies an alternative approach, Internet Server Application Programming Interface
(ISAPI). Active Server Pages (ASP) is built on ISAPI to allow for ease of development
of dynamic Web pages. ASP allows a single database to be located on a server and all
Web content to be generated when the browser asks for it. ASP is compatible with all popular Internet browsers made since 1996, because the browser receives ordinary Web content; all ASP processing occurs on the server, not on the client.
ASP facilitates a connection between the ASP documents and the ActiveX
resources of the Web server. An ASP document contains a mixture of server-based
programming directives and HTML. When a server gets a request for an ASP document,
it processes the programming directives and creates a “virtual” HTML Web page to send
to the requestor.
The programming directives in WebConSim are written in VBScript. ASP was chosen for creating the front end because of these features. With the exception of the default Web
page, every Web page in the front end is an ASP document. A few documents serve only
a single use. For instance, the home page (once a user has logged in) has the job of
finding the user’s rights, as stored in the database, and displaying a single roadmap to the
data with a bit of summary information. Other such documents are the detailed listing
Web pages. These show all the information to the user about the specific collection that
was requested. The vast majority of the ASP documents (i.e., the forms that allow for the
addition or modification of information in the database) operate in multiple modes.
These modes are, generally, add, edit, and delete. Access to the different modes depends
on the user’s rights. These forms send information to ASP documents that actually carry
out the addition or modification and then redirect the client to a proper location in the
Web site.
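For illustration, a hypothetical "do"-style ASP document handling the three modes might look like the sketch below. The table, field, and page names are invented, but the pattern (carry out the change, then redirect) is the one described above.

    <%
    ' Hypothetical sketch of a multi-mode "do" page written in VBScript.
    Dim conn, mode
    mode = Request.Form("mode")              ' "add", "edit", or "delete"
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open "DSN=WebConSim"                ' assumed ODBC data source name
    Select Case mode
        Case "add"
            conn.Execute "INSERT INTO Teams (Name) VALUES ('" & _
                         Replace(Request.Form("name"), "'", "''") & "')"
        Case "edit"
            conn.Execute "UPDATE Teams SET Name='" & _
                         Replace(Request.Form("name"), "'", "''") & _
                         "' WHERE TeamID=" & CLng(Request.Form("id"))
        Case "delete"
            conn.Execute "DELETE FROM Teams WHERE TeamID=" & CLng(Request.Form("id"))
    End Select
    conn.Close
    Response.Redirect "teams_listing.asp"    ' return the client to a proper location
    %>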
5.2.5 Front-End Structure
The front-end Web pages are structured in a similar manner to the frame-based
expert system. There are four types of Web pages that are generated by the front end:
summary pages, table listings (Figure 5.1), edit/add pages (Figure 5.2), and do pages.
The listing pages show a summary of the information available in the database and give the user access to individual records. The
edit/add pages are used for specific data editing. These pages display the information in
the database, allowing the user to add or edit specific records. These pages are
information marshals between the database and the front end. They display information
to the user and allow the user to affect the information in the database via the do pages.
The do pages are never seen by the user unless there is an error in the information on the
form. These pages are responsible for changing the information in the database, based on
the information supplied by the edit/add pages.
6 Case Studies
This chapter presents three room-and-pillar mining configurations and their
summarized output from WebConSim. The first case is an unrealistic case that shows the
minimum amount of information that must be entered into the system to produce a
simulation. The second case compares the results from WebConSim to ConSim to show
that the output is comparable. The third case presents current mining conditions and
practices with the results of a simulation.
6.1 A Simple Case
The first case study presents a very simple case. It is presented to show the
amount of information that must be gathered and input into WebConSim. This case has
the following characteristics:
• 480 minutes of simulation time
• A 60 ft. by 60 ft. mine layout
o 6 ft. coal height at 0.042 tons/ft³
o Entry and break width of 20 ft.
o 5 entries
o Difficulty factor of 1
• A team of one miner, one shuttle car, one roof bolter, and one feeder
o Miner
- Deterministic statistics
- Length of 32 ft.
- Tram rate of 66.67 ft./min.
- 5 minute end of cut clean-up delay
- 10 tons/min. cut rate
- 481 minutes between breakdowns and 25 minutes to repair
Table 6.3--Summarized Output from Simple Case
Cut  Cycle Time (min)  Tonnage (tons)  Mining Rate (tons/min)
1 61.25 25.2 0.41
2 61.25 25.2 0.41
3 61.25 25.2 0.41
4 61.25 25.2 0.41
5 61.25 25.2 0.41
6 61.25 25.2 0.41
7 61.25 25.2 0.41
8 51.25 19.9 0.39
Table 6.4--Summarized Output from Simple Case with Two Shuttle Cars
Cut  Cycle Time (min)  Tonnage (tons)  Mining Rate (tons/min)
1 52 25.2 0.48
2 52 25.2 0.48
3 52 25.2 0.48
4 52 25.2 0.48
5 52 25.2 0.48
6 52 25.2 0.48
7 52 25.2 0.48
8 52 25.2 0.48
9 52 25.2 0.48
10 12 5.82 0.44
6.2 ConSim Example Revisited
The ConSim user manual has an example case that was used to demonstrate
information that must be entered to use ConSim [Topuz, 1989]. The user manual also
has the output from the program. This case was chosen as a test case to show that
WebConSim is capable of simulating older mining conditions and that the output from
WebConSim is similar to the output of an accepted simulation program. The ConSim
example case that is demonstrated in the user manual has the following characteristics:
• One miner, two shuttle cars, one roof bolter, and one dump.
• The feeder is a one-way feeder.
• The shift lasts 360 minutes.
• The coal density is 0.042 tons/ft³.
• The miner trams at a fixed rate of 0.015 min/ft.
• The roof bolter trams at a fixed rate of 0.01 min/ft.
• Shuttle cars take 0.15 minutes to switch in or out.
• The miner and roof bolter have a 4 minute cleanup time.
• There can be nothing left in the cut.
• The miner will break down on average every 481 minutes and will be broken for
25 minutes on average.
• The roof bolter will break down on average every 1462 minutes and will be
broken for 66 minutes on average.
• The first shuttle car will break down on average every 419 minutes and will be
broken for 10 minutes on average.
• The second shuttle car will break down on average every 550 minutes and will be
broken for 15 minutes on average.
• Every cut will be 20 ft. wide and 20 ft deep.
• The pillars are 80 ft. square.
• The miner is 32 ft. long.
• The shuttle car is 28 ft. long.
• The feeder is 30 ft. outby the crosscut.
• There are 5 entries.
• The feeder is located in entry 3.
• The coal is 6 ft. high.
• The roof bolter takes 48 minutes to bolt a place.
• The distribution of the miner’s loading rate is shown in Table 6.5.
• The distribution of the load carried by the shuttle car is shown in Table 6.6.
• The distribution of a loaded shuttle car tram rate from the miner to the change
point is shown in Table 6.7.
• The distribution of a loaded shuttle car tram rate from the change point to the
feeder is shown in Table 6.8.
• The distribution of an unloaded shuttle car tram rate from the feeder to the change
point is shown in Table 6.9.
• The distribution of an unloaded shuttle car tram rate from the change point to the
miner is shown in Table 6.10.
• The distribution of the shuttle car’s discharge rate is shown in Table 6.11.
The input information presented above, which is needed by ConSim, is not exactly the same as the input information needed by WebConSim; WebConSim requires much more information to perform a simulation. A summary of the information that was
used as input to WebConSim follows:
• A 80 ft. by 80 ft. mine layout
o 6 ft. coal height at 0.042 tons/ft³
o Entry and break width of 20 ft.
o 5 entries
o Difficulty factor of 1
• A team of one miner, two shuttle cars, one roof bolter, and one feeder
o Miner
- Deterministic statistics
- Length of 32 ft.
- Tram rate of 66.67 ft./min.
- 4 minute end of cut clean-up delay
- 11 tons/min. cut rate
- 481 minutes between breakdowns and 25 minutes to repair
o Shuttle cars
- User-defined statistics based on the values in Table 6.7 to Table 6.11
- Length of 28 ft.
- Capacity of 6 tons
- No tramming delays
- Switch in and out delay of 0.15 min.
o Roof Bolter
- Deterministic statistics
- Length of 30 ft.
- 0.38 ft./min. bolting rate in every entry
- 1462 minutes mean time between breakdowns and 66 minutes mean time to repair
- 100 ft./min. tram rate
o Feeder
- Deterministic statistics
- Capacity of 30 tons
- Located in entry 3
- Minimum number of breaks outby the working face is 2 and the maximum is 4
- 1500 minutes mean time between breakdowns and 15 minutes mean time to repair
- Load out rate of 20 tons/min.
• Paths are shown in Table 6.12
• Cuts are shown in Table 6.13
Table 6.12--Paths Used in WebConSim
From Waypoint To Waypoint Distance (feet)
11 18 280
12 18 180
13 18 80
14 18 180
15 18 280
15 11 400
Table 6.13--Cuts Used in WebConSim
Cut Number From Waypoint Depth (feet) Direction
1 15 20 Up
2 14 20 Up
3 13 20 Up
4 12 20 Up
5 11 20 Up
6 15 20 Up
7 14 20 Up
8 13 20 Up
Entering the above input data into both ConSim and WebConSim produces the output summarized in Table 6.14 and Table 6.15. These two tables show that the results from the two simulations are within 10% of each other. It is important to note that the tonnage in both cases is held constant because the cut size is consistent between the two. Also, because the same information is used to develop both simulations, this indicates that WebConSim is as accurate as ConSim. This is further supported by the details
Table 6.17--WebConSim Detail Output
Cut  Loading Shuttle Car (min)  Shuttle Car Switching (min)  Miner Waiting on Shuttle Car (min)  Equipment Tramming (min)
1 19.9 5.2 9.7 7.9
2 17.0 5.1 6.5 7.9
3 17.9 7.9 2.4 7.9
4 19.0 4.9 4.7 7.9
5 18.0 5.3 9.9 12.5
6 19.8 7.3 10.2 8.4
7 16.9 6.9 5.3 8.5
8 19.7 10.0 1.6 7.4
6.3 Longwall Development Section
A longwall requires a large block of coal to be exposed, which is done using room-and-pillar development mining. In many cases the longwall development section must be very productive so that the longwall equipment can be moved into place on schedule, assuring that the expensive longwall equipment is utilized economically. This example is of a typical longwall development section using data that
was collected from the Peabody Group in St. Louis, Missouri. This team works on a 3-entry mine utilizing one miner, three shuttle cars, one feeder, and one roof bolter. This
example also used the delay object in WebConSim. Using the equipment configuration
under ideal conditions yields results that are too productive. The delay object is used to
make the results more realistic. The input data used are as follows:
• 420 minute simulation time
• Mine layout
o 3 entries.
o Difficulty factor of 1.
o Pillars are 100 ft. long and 70 ft. wide.
o Entries and breaks are 20 ft. wide.
o The coal is 6 ft. high with 2 in. of ceiling taken.
o The coal and ceiling have a density of 0.048 tons/ft³.
o Feeder
- Statistics are normal.
- Located in entry 1.
- Hopper can hold 16 tons.
- Can load the belt at 10 tons/min.
- Breakdowns on average will occur every 400 minutes for an average of 10 minutes.
o Delays
- Physical conditions will delay operations on average every 60 minutes for an average of 10 minutes.
- Outby operations will delay operations on average every 50 minutes for an average of 5 minutes.
- Methane will delay operations on average every 1520 minutes for an average of 25 minutes.
- Belts will delay operations on average every 55 minutes for an average of 30 minutes.
• The cut sequence is summarized in Table 6.18
• The paths used are summarized in Table 6.19
• The section is displayed in Figure 6.5
Table 6.18--Cut Sequence Summary
Cut Number From Waypoint Depth (feet) Direction
1 7 35 Up
2 8 35 Up
3 9 35 Up
4 7 35 Up
5 8 35 Up
6 9 35 Up
7 7 35 Up
8 8 35 Up
9 9 35 Up
10 4 35 Right
11 5 35 Right
12 4 35 Right
13 5 35 Right
Figure 6.5--Longwall Development Section
Using this input data, a simulation of 100 cycles showed that, on average, five cuts could be mined in a shift. A summary of the output is presented in Table 6.20. Using the reporting capabilities of WebConSim, it is apparent that the miner
is spending time waiting for the next cut area to be bolted. This is apparent because,
taking the average bolting rate, it will take 25 minutes to bolt a place, whereas the miner
can mine a place in 12 minutes, provided there is no wait on a shuttle car. Using three
shuttle cars, there is little wait on shuttle cars, amounting to about 15% of the miner’s
time. This means that the miner can mine a place in about 18 minutes. This is still much
shorter than the amount of time it takes for the roof bolter to bolt a place and then move
on to the next place. Assuming that the delays cannot be corrected, the bolting rate should be increased or another roof bolter added to this section to improve performance.
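As a quick arithmetic check on the tonnage column of Table 6.20 below, each cut is 35 ft. deep and 20 ft. wide, with 6 ft. of coal plus 2 in. of ceiling at a density of 0.048 tons/ft³, so

    35 ft × 20 ft × (6 + 2/12) ft × 0.048 tons/ft³ ≈ 207.2 tons per cut.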
Table 6.20--Summary of Longwall Section Output
Cut  Cycle Time (min)  Tonnage (tons)  Mining Rate (tons/min)
1 80.3 207.2 2.58
2 80.3 207.2 2.58
3 80.3 207.2 2.58
4 80.3 207.2 2.58
5 80.3 207.2 2.58
6 22.7 52.5 2.20
6.4 Conclusions and Summary
Three examples are shown in this chapter. The first is presented only to show the
amount of information that must be entered into the system and the ease with which data
can be reused. The second compares WebConSim to ConSim. This example shows that
the output from WebConSim is comparable to the output from accepted simulation
programs. The third example, of a longwall development section, shows how
WebConSim can simulate current mining practices, with data from the Peabody Group.
The output from the program can be used to project mining rates for planned mines, which can serve as input for long-term forecasting. These data can also be used to evaluate the performance of existing mining operations. Additionally, the data can be used to perform what-
if analyses. These examples demonstrate, in a cursory manner, the power of
WebConSim.
7 Conclusions and Further Work
In conclusion, this thesis has presented the need for tools that can aid in the
continuing increase in productivity for current and future mining conditions. The
simulator presented, WebConSim, has two main objectives: to reflect current and future
room-and-pillar coal mining practices and to utilize standardized computing technology.
These two objectives are symbiotic, in that, to reflect current and future mining practices,
standardized computer technology must be utilized. This simulator is built on a frame-
based expert system architecture that is implemented as a client/server application. The
frame-based expert system architecture allows for the system to adapt to new mining
practices. The client/server architecture allows the simulator to be interfaced through a
Web-based front end. Additionally, with the new technologies in database programming,
the data used by the simulator can be from virtually any data source that the user chooses.
WebConSim is a new tool that can simulate a wide variety of mining conditions.
WebConSim will require several different areas of future research and continued
development. The lack of a dispatching system for the equipment, especially the shuttle
cars, is a major weakness in the system. This is due to the lack of a standardized system
across the mining industry. Still, a dispatching simulation can be developed that can
reflect the wide variety of dispatching logic that exists. Two equipment types that are widely used in coal mines are missing from the system: the miner-bolter and the bridge conveyor. A description of how to include a miner-bolter is included in the text, but this
equipment type should be included as an equipment object. The bridge conveyor can
also be implemented by adjusting the input values for a shuttle car. However, the new
developments in bridge conveyor technology have shown that this equipment piece must
be added, as well. With the addition of these new equipment types, the simulator will
also require a manner of error checking to be certain that an equipment group has the
proper equipment to mine. With these additions the simulator itself will be made more
universal.
The extended report can be read into an animation system, as explained in the
text. Such an animation would have several applications in training and design. For
Numerical Modeling of Room-and-Pillar Coal Mine Ground Response
Benjamin P. Fahrman
(ABSTRACT)
Underground coal mine ground control persists as a unique challenge in rock mass
engineering. Fall of roof and rib continue to present a hazard to underground personnel.
Stability of underground openings is a prerequisite for successful underground coal mine
workings. An adaptation of a civil engineering design standard for analyzing the stability of
underground excavations for mining geometries is given here. The ground response curve–
developed over seventy years ago for assessing tunnel stability–has significant implications for the design of underground excavations, but has seen little use in complex mining applications.
The interaction between the small scale (pillar stress-strain) and the large scale (ground
response curve) is studied. Further analysis between these two length scales is conducted to
estimate the stress on pillars in a room-and-pillar coal mine. These studies are performed in
FLAC3D by implementing a two-scale, two-step approach. This two-scale approach allows
for the interaction between the small, pillar scale and the large, panel scale to be studied in
a computationally efficient manner.
This work received support from the National Institute for Occupational Safety and Health's "Capacity Building Project."
Acknowledgments
Endeavors such as this are never completed alone. There are too many people to thank to
include them all here, but...
I’d like to start by thanking my family–my parents, especially. Your support has helped
me accomplish things you always swore were possible, but of which I was never sure.
Thank you to the friends I’ve made during my extended time in Blacksburg. You’re
the reason I’ve loved this place. Of late, extreme appreciation is felt for Alma, in particular.
Sanity can be fleeting during trying times and you have helped me keep mine (mostly) intact.
Thanks is owed to the past and present members of Holden 115A (and beyond!). We’re
all in this together, and you’ve helped me succeed in countless ways. I won’t name you all,
but Brent and Will deserve special thanks: Brent for his willingness to argue about what
the ground response curve really means when applied to mining, and Will for his time and
effort paid toward catching me up on all the ins and outs of our case study.
I appreciate the support I’ve received from the members of my committee. I’ve learned
a lot from each of you, and it’s been a pleasure working with you. I’d like to thank Dr. Erik
Westman, in particular, for finding a way to help me achieve things I didn’t think possible,
and Mario for his encouragement for me to succeed.
I’ll also thank Magpie and Bode, the friendly creatures of my unlikely Walden Pond:
Magpie for her unwavering inclination to contribute to my writing, and Bode for his willingness to distract me–typically only when I wished.
Finally, I’ll thank the great students with whom I’ve had the joy of interacting. In the
few short years of my teaching, you have kindled in me the desire to complete a terminal
degree in the hopes of securing a position for future teaching. You, more than anyone else,
have pushed me to finish this.
2.11 Mohr's circle showing progression to failure by increasing σ₁ (left). Mohr's circle showing progression to failure by decreasing σ₃ (center). Mohr's circle showing progression to failure by increasing pore pressure, p (right).
2.12 Vertical cross-section of a circular tunnel of radius, R, with internal pressure, p₀, which is analyzed for the convergence-confinement method (CCM).
2.13 Typical ground reaction curve (GRC) as determined via the convergence-confinement method (CCM).
2.14 Typical ground reaction curve (GRC) and linear displacement profile (LDP) as determined via the convergence-confinement method (CCM).
3.1 Injuries due to fall of roof or rib between 2006 and 2014. This includes fatal injuries, non-fatal injuries with days lost, and injuries with no days lost [1].
3.2 Ground reaction curve (GRC) with two possible outcomes shown. The solid line represents stable convergence, and the dashed line represents unstable convergence.
3.3 Contour map showing depth of cover over room-and-pillar panel 2E. The marked drillhole locations represent the locations of the two holes drilled into the roof for installation of the seismic array from which core samples were collected and tested for rock mechanics properties.
3.4 Depiction of the inclined fractures observed within the Jawbone seam in the studied panel. Also shown is the typical roof and floor material.
3.5 Geologic column summarizing the cores collected from Core Hole 679, which was drilled directly above Panel 2D prior to mine development. More information on Panel 2D can be found in Chapter 4.
3.6 Stress-strain response of the two square pillars–of width to height ratios equal to six and eight–with elastic roof and floor.
3.7 Stress-strain response of a rectangular pillar with roof and floor properties listed in Table 3.2.
3.8 Ground response curve of Panel 2E.
3.9 Ground response curve of Panel 2E plotted along with the altered stress-strain response of a pillar within the panel for both the shale with weak laminations and the shale with strong laminations.
4.1 Injuries due to fall of roof or rib between 2006 and 2014. This includes fatal injuries, non-fatal injuries with days lost, and injuries with no days lost [3].
4.2 Contour map showing depth of cover over room-and-pillar panel 2D, center frame. The drillhole location marked is Core Hole 679, which was drilled from the surface prior to mine development.
4.3 Geologic column summarizing the cores collected from Core Hole 679, which was drilled directly above Panel 2D prior to mine development. The location of the core hole relative to Panel 2D is shown in Figure 4.2.
4.4 Depiction of the inclined fractures observed within the Jawbone seam in the studied panel. Also shown is the typical roof and floor material.
4.5 An example of a coal pillar geometry in FLAC3D. The pillar shown is one quarter of a pillar which has a width to height ratio of 8.
4.6 Stress-strain response of the two square pillars–of width to height ratios equal to six and eight–with elastic roof and floor.
4.7 Stress-strain response of a rectangular pillar with roof and floor properties listed in Table 4.3.
4.8 Panel scale model with representative overlying strata up to approximate surface topography.
4.9 Vertical stress on the pillars in the panel. The stress values are determined by dividing the average vertical stress on the fictitious material in the large-scale model by (1−ER).
4.10 Safety factor of the pillars in Panel 2D. Strength is found from the empirical Mark-Bieniawski strength equations, and stress is determined from the two-scale modeling approach.
4.11 Safety factor of the pillars in Panel 2D assuming weaker laminations in the roof and floor material. Strength is found from the empirical Mark-Bieniawski strength equations, and stress is determined from the two-scale modeling approach.
5.1 Example probability distributions of stress and strength
5.2 Example output distribution of factor of safety
5.3 Tributary area in a room-and-pillar mine
5.4 ARMPS stability factor, deterministic factor of safety, and the probability of failure for the first scenario (40 ft x 40 ft pillars) plotted versus depth of cover.
5.5 Factor of safety histogram for the second mine geometry (40 ft x 40 ft pillars) at a depth of 1100 ft.
5.6 ARMPS stability factor, deterministic factor of safety, and the probability of failure for the first scenario (60 ft x 60 ft pillars) plotted versus depth of cover.
A.1 Contour map showing depth of cover over room-and-pillar panel 2E. The marked drillhole locations represent the locations of the two holes drilled into the roof for installation of the seismic array from which core samples were collected and tested for rock mechanics properties.
A.2 Cores collected from Hole #1. Total length of core collected (all shown) is about 79 inches of the total ten feet drilled.
A.3 Cores collected from Hole #2. The drilled length of Hole #2 is 58 inches, and everything recoverable was collected (all shown).
A.4 Stress-strain curves of the four specimens subjected to UCS testing. (a) Sample #1; (b) Sample #2; (c) Sample #3; and (d) Sample #4.
A.5 Specimen prepared for the Brazilian test sitting inside of the apparatus used to apply a load according to ISRM suggestions [4].
A.6 Raw load vs. displacement curves for Sample #1 from Hole #1 (left) and for Sample #1 from Hole #2 (right).
A.7 Distribution of UCS estimates for Holes #1 and #2 as determined from point-load testing with a K-value of 21.
A.8 Distribution of uniaxial tensile strength (UTS) estimates for Holes #1 and #2 as determined from point-load testing.
B.1 Raw load-displacement curve recorded during UCS testing of Sample #1.
B.2 Raw load-displacement curve recorded during UCS testing of Sample #2.
B.3 Raw load-displacement curve recorded during UCS testing of Sample #3.
B.4 Raw load-displacement curve recorded during UCS testing of Sample #4.
B.7 Averaged s-type travel times, measured p-arrivals and length, and calculated p- and s-wave velocities through the four samples subjected to UCS testing. Average s-wave travel time is determined by averaging the difference between the measured arrivals with and without the sample present. Wave velocities are the ratio of the sample length to the travel time measured/calculated.
B.8 Density of the four UCS samples.
B.9 Density of the ten specimens comprising the Brazil test sample from Hole #1.
B.10 Density of the ten specimens comprising the Brazil test sample from Hole #2.
B.11 Peak load and unconfined compressive strength of the four samples subjected to UCS testing, where UCS, C₀, is determined from C₀ = P/A, where P is the peak load and A is the cross-sectional area of the sample.
B.12 Peak applied load to the ten specimens comprising the sample from Hole #1. A boolean value also accompanies each sample entry which indicates whether the specimen failed along the vertical diameter–indicating a valid test.
B.13 Peak applied load to the ten specimens comprising the sample from Hole #2. A boolean value also accompanies each sample entry which indicates whether the specimen failed along the vertical diameter–indicating a valid test.
B.14 Peak load and indirect estimate of tensile strength of the ten specimens which gave meaningful results by failing along the vertical diameter.
B.15 Sample dimensions and pressure of the hydraulic fluid measured at failure for the fifteen specimens comprising the Hole #1 sample subjected to the point load test. A boolean value also accompanies each sample entry which indicates whether the fracture surface of the specimen passed through both loading points–indicating a valid test.
B.16 Sample dimensions and pressure of the hydraulic fluid measured at failure for the fifteen specimens comprising the Hole #2 sample subjected to the point load test. A boolean value also accompanies each sample entry which indicates whether the fracture surface of the specimen passed through both loading points–indicating a valid test.
B.17 Values calculated to determine a UCS estimate from the point-load test for the valid test specimens in the sample from Hole #1. Peak load is calculated by multiplying the hydraulic pressure listed in Table B.16 by the effective jack piston area (1.469 in²). Point load index is determined as outlined in [1]. UCS is estimated using the multiplier suggested in [2].
B.18 Values calculated to determine a UCS estimate from the point-load test for the valid test specimens in the sample from Hole #2. Peak load is calculated by multiplying the hydraulic pressure listed in Table B.16 by the effective jack piston area (1.469 in²). Point load index is determined as outlined in [1]. UCS is estimated using the multiplier suggested in [2].
D.1 Numerical modeling input parameters for the coal pillar which resulted in the stress-strain curve shown in Figure D.1.
D.2 Numerical modeling input parameters for the coal pillar which resulted in the stress-strain curve shown in Figure D.2.
D.3 Numerical modeling input parameters for the coal pillar which resulted in the stress-strain curve shown in Figure D.3.
D.4 Numerical modeling input parameters for the coal pillar which resulted in the stress-strain curve shown in Figure D.4.
D.5 Numerical modeling input parameters for the coal pillar which resulted in the stress-strain curve shown in Figure D.5.
D.6 Numerical modeling input parameters for the coal pillar which resulted in the stress-strain curve shown in Figure D.6.
D.7 Numerical modeling input parameters for the coal pillar which resulted in the stress-strain curve shown in Figure D.7.
D.8 Numerical modeling input parameters for the coal pillar which resulted in the stress-strain curve shown in Figure D.8.
D.9 Numerical modeling input parameters for the coal pillar which resulted in the stress-strain curve shown in Figure D.9.
D.10 Numerical modeling input parameters for the coal pillar which resulted in the stress-strain curve shown in Figure D.10.
Chapter 1
Introduction
Recent projections made by the US Energy Information Administration (EIA) predict energy demands to increase steadily in the near future. Combustion of coal remains a large source of power in the United States and around the world. 40% of all coal is produced from underground coal mines, and 40% of that comes from room-and-pillar mines [1]. Despite the fact that 2015 saw the greatest percentage decrease in US coal production ever, the EIA expects coal production to remain steady or increase over the next few years [2].
Ground control is a primary design consideration during the planning and development
of underground coal mines. Day-to-day functions in underground coal mines, like proper
ventilation and material haulage, depend on successful ground control design and implementation. In addition, ground control design considerations provide structural stability for mine workings. Successful ground control is an integral condition for successful mining projects
[3].
Geotechnical considerations for coal mine design include a unique set of challenges.
Coal measure rocks vary considerably in strength and are often highly jointed or laminated.
Furthermore, the soft, sometimes unpredictable nature of coal consistently provides greater
complexity for underground coal mine design.
Despite a clear focus on safe and effective mine design from a ground control perspective
for decades, ground control issues continue to result in injuries and fatalities. Over 16% of
all reported incidents in underground coal mines from 2006 to 2014 were due to fall of roof or
rib [4]. The total number of injuries attributed to ground control issues in underground coal
mines in the US is shown in Figure 1.1. There has been a downward trend in the number of
incidents for many years, but ground control issues remain a considerable hazard.

Figure 1.1: Injuries due to fall of roof or rib between 2006 and 2014. This includes
fatal injuries, non-fatal injuries with days lost, and injuries with no days lost [4].
The ground response curve (GRC) is a useful tool to aid in the design of underground
openings. The GRC originated from the convergence-confinement method (CCM)–a design
standard of the tunneling industry. In its original form, CCM was an attempt to quantify
the reaction of a rock mass to the development of underground openings within it. A result
of this analytical method is the GRC which is the relationship between the reduced radial
pressure inside of a circular tunnel due to excavation and the radial convergence of that
excavation [5]. A depiction of a ground response curve is shown in Figure 1.2.
Ground response curves are plotted on pressure-convergence axes. The curve begins on
the pressure-axis at a value equal to the in situ stress state. As the internal pressure of the
rock mass is reduced due to an approaching excavation, convergence is expected. The GRC
represents the internal pressure required to prevent further convergence. The curve has great
utility for estimating both the self-supporting capacity of an underground opening, as well
as the type of support which should be installed [6].
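For illustration, the elastic branch of the GRC for a circular tunnel can be written in
closed form. Under the usual CCM assumptions (a circular opening of radius $R$ in a
hydrostatic in situ stress field $p_0$, with rock mass Young's modulus $E$ and Poisson's
ratio $\nu$), the radial convergence $u_r$ produced by reducing the internal pressure to
$p_i$ is

\[
u_r = \frac{(1+\nu)}{E}\,(p_0 - p_i)\,R
\]

so the elastic GRC is a straight line starting at $p_0$ on the pressure-axis; the curved,
inelastic portion of the GRC develops once the rock mass begins to yield [5].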
The convergence-confinement method, when applied to circular tunnels, often predicts
stable convergence [5]. Stable convergence is represented by the solid line in Figure 1.2,
which shows convergence associated with an underground opening which has sufficient self-
supporting capacity to limit convergence naturally. That is, if no artificial support is in-
stalled, the total convergence of the excavation is expected to be the intersection of the GRC
and the convergence-axis, where the internal pressure is zero.

Figure 1.2: Ground reaction curve (GRC) with two possible outcomes shown. The curve
with the solid line-type represents stable convergence, and the curve with the dashed
line-type represents unstable convergence.
Working sections in retreating room-and-pillar mines are not expected to experience
stable convergence [7]. The dashed line in Figure 1.2 shows an underground opening which
is expected to experience unstable convergence. The minimum point in the curve represents
a loss of self-supporting capacity of the rock mass. Full collapse of the opening is expected
after the rock mass no longer has the structural integrity to support its own weight. A loss of
self-supporting capacity is represented by the increasing slope of the GRC, where the internal
pressure required to prevent further convergence approaches the original lithostatic stress.
The geometry of underground mine openings tends to be far more complex than that
of the tunnels in civil engineering projects. While analytical solutions such as that obtained
from the CCM are prohibitively complex for mine geometries, a ground response curve may
still be obtained. Numerical models can be used in an attempt to solve for the GRC for
mining applications.
Numerical models were used to estimate the ground response curves for a room-and-
pillar coal mine in the central Appalachian coal fields of the eastern US. In addition, loads on
the pillars were estimated during advance by using numerical models. All of the numerical
modeling performed for this study was completed in FLAC3D (Fast Lagrangian Analysis of
Continua in 3-Dimensions) [8].
The interaction between the large, panel scale and the smaller, pillar scale has been
explored previously using numerical modeling in FLAC3D. Esterhuizen et al. [9] used an
explicitly modeled pillar geometry within a panel-scale model. Including an explicitly
modeled pillar array, which requires fine discretization, within a FLAC3D model of large
spatial extent results in computational inefficiency. FLAC3D is documented to have slow
execution times when there are significant differences in zone sizes [10], and such
differences would be required for explicit representation of pillars in a large-scale model.
The ground response curve predicts the response of the rock mass on a large scale,
so large-scale models are required to obtain reasonable results. However, the GRC and
pillar loading depend on small-scale interactions in and around underground openings. A
model with fine enough discretization to represent the small-scale interactions would be
prohibitively inefficient in large-scale models. A computationally efficient large-scale model
would be too coarse to capture the small-scale effects accurately. In order to model these
phenomena in a computationally reasonable manner, a two-scale approach was used.
The two-scale approach begins with modeling on a pillar-scale. The stress-strain re-
sponse of the pillars in small-scale models is then incorporated into much larger models. A
fictitious material which represents the behavior of the excavated coal seam is created by
implementing a user-defined constitutive model in FLAC3D. This user-defined constitutive
model is applied to the zones within the modeled coal seam so they respond as an excavated
coal seam would, but with a discretization far too coarse to model the pillars explicitly.
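As a minimal sketch of this idea (not the actual FLAC3D user-defined constitutive model,
which would be implemented as a compiled plugin; the curve values and names below are
hypothetical), the equivalent seam material can be driven by a table lookup on the
pillar-scale stress-strain response:

import numpy as np

# Hypothetical pillar stress-strain curve extracted from a small-scale model:
# average vertical strain (dimensionless) vs. average vertical stress (MPa).
# The descending branch represents the post-peak, strain-softening response.
PILLAR_STRAIN = np.array([0.000, 0.005, 0.010, 0.020, 0.040, 0.080])
PILLAR_STRESS = np.array([0.0, 10.0, 18.0, 12.0, 6.0, 3.0])

def seam_stress(vertical_strain):
    """Vertical stress carried by the equivalent (excavated) seam material,
    interpolated from the pillar-scale curve."""
    return float(np.interp(vertical_strain, PILLAR_STRAIN, PILLAR_STRESS))

# During panel-scale stepping, each coarse zone in the seam would update its
# vertical stress from its accumulated vertical strain:
for strain in (0.002, 0.010, 0.030):
    print("strain = %.3f -> stress = %5.1f MPa" % (strain, seam_stress(strain)))

In this way the coarse seam zones reproduce the aggregate pillar response without the
fine discretization that explicitly modeled pillars would require.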
Lab testing was performed on samples of roof rock to estimate material properties in
order to improve the accuracy of the numerical models. The samples were obtained from
core holes drilled into the roof from the mine workings. The destructive laboratory tests
which were performed include uniaxial compressive strength testing, Brazilian testing, and
point-load testing. Measurements were also taken to estimate the acoustic properties and
density of the roof rock.
Works Cited
[1] US EIA. Annual coal report 2013. US Energy Information Administration, Washington,
DC, 2013.
[2] US EIA. Short term energy outlook. Energy Information Administration, Department
of Energy, 2016.
[3] Christopher Mark and Thomas M Barczak. Fundamentals of coal mine roof support.
New Technology for Coal Mine Roof Support, Proceedings of the NIOSH Open Industry
Briefing, NIOSH IC, 9453:23–42, 2000.
[4] Mine Safety and Health Administration.
[5] Edwin T Brown, John W Bray, Branko Ladanyi, and Evert Hoek. Ground response
curves for rock tunnels. Journal of Geotechnical Engineering, 1983.
[6] Pierpaolo Oreste. The convergence-confinement method: roles and limits in modern
geomechanical tunnel design. American Journal of Applied Sciences, 6(4):757, 2009.
[7] B Damjanac, M Pierce, and M Board. Methodology for stability analysis of large room-
and-pillar panels. In 48th US Rock Mechanics/Geomechanics Symposium. American
Rock Mechanics Association, 2014.
[8] Itasca Consulting Group, Inc., Minnesota. Fast Lagrangian Analysis of Continua in
3-Dimensions, version 5.0, manual, 2013.
[9] Essie Esterhuizen, Chris Mark, and Michael Murphy. The ground response curve, pillar
loading and pillar failure in coal mines. In 29th International Conference on Ground
Control in Mining, 2010.
[10] Itasca Consulting Group, Inc. FLAC3D training course: basic concepts and recommended
procedures, 2015.
Chapter 2
Literature Review
2.1 Introduction
An effective ground control design plan is a prerequisite for safe, efficient, and successful
mining operations. In effect, ground control is the prediction of the response of a rock mass
to excavation, as well as the prevention of any undesired outcomes regarding rock mass
movement or stability. To lay the foundation for discussing the mechanical response of rock
masses to excavation, an overview of mechanics is first given. The overview is then
narrowed to brittle materials like rock, and finally to large-scale rock masses.
2.2 Mechanics Overview
Stress forms inside of a deformable body when an external force is applied. The stress
produced is directly proportional to the force applied and inversely proportional to the area
over which the force acts. Stresses which are perpendicular to the area on which they act
are called normal stresses, and stresses which act parallel to the area on which they act are
called shear stresses. In three dimensions, the total state of stress of an element inside of a
deformable body includes the nine stresses shown as arrows in Figure 2.1.
The state of stress may also be completely defined by the matrix shown in Eq. (2.1),
sometimes called the Cauchy stress tensor. The first subscript for each stress component
indicates the normal (perpendicular) direction of the plane on which the stress acts. For
instance, each of the stress components in the first row of the stress tensor, which have the
first subscript of x, act on the face coming out of the page of Figure 2.1.

Figure 2.1: A three-dimensional element with all stress components depicted.

In addition, the
second subscript of each stress component indicates the direction in which the stress acts.
Each of the stress components listed in the first column of the stress tensor are directed
parallel to the x-axis, as can be seen in Figure 2.1.
\[
\begin{bmatrix}
\sigma_{xx} & \tau_{xy} & \tau_{xz} \\
\tau_{yx} & \sigma_{yy} & \tau_{yz} \\
\tau_{zx} & \tau_{zy} & \sigma_{zz}
\end{bmatrix}
\tag{2.1}
\]
By assuming that this element is in static equilibrium, it is clear that some components
in the stress tensor are redundant. Specifically, $\tau_{xy} = \tau_{yx}$,
$\tau_{yz} = \tau_{zy}$, and $\tau_{zx} = \tau_{xz}$.
Accounting for static equilibrium reduces the number of unique stress components from
nine to six. Therefore, the complete state of stress can be expressed by giving these six
components: $\sigma_{xx}$, $\sigma_{yy}$, $\sigma_{zz}$, $\tau_{xy}$, $\tau_{yz}$, and $\tau_{zx}$.
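A short numerical check of this reduction, using illustrative (hypothetical) stress
values in MPa:

import numpy as np

# Six independent components of the stress state (illustrative values, MPa).
sxx, syy, szz = 12.0, 8.0, 20.0
txy, tyz, tzx = 3.0, -2.0, 1.5

# Static equilibrium makes the tensor symmetric, so the full 3x3 Cauchy
# stress tensor of Eq. (2.1) is recovered from the six components alone.
stress = np.array([[sxx, txy, tzx],
                   [txy, syy, tyz],
                   [tzx, tyz, szz]])

assert np.allclose(stress, stress.T)  # symmetry holds by construction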
The magnitudes of these six stress components vary with the choice of axes orientation.
For every state of stress, there exists an orientation of mutually orthogonal axes, called
principal axes, in which the shear stresses are zero. The normal stresses oriented on the
principal axes are called principal normal stresses, and are often labeled $\sigma_1$,
$\sigma_2$, and $\sigma_3$. The first and third principal normal stresses, $\sigma_1$ and
$\sigma_3$, are the maximum and minimum normal stresses acting in any direction,
respectively. The second principal normal stress, $\sigma_2$, has some intermediate value.
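Numerically, the principal normal stresses are the eigenvalues of the symmetric stress
tensor; a minimal sketch, reusing the illustrative tensor from above:

import numpy as np

# Illustrative symmetric stress tensor (MPa), as assembled above.
stress = np.array([[12.0, 3.0, 1.5],
                   [3.0, 8.0, -2.0],
                   [1.5, -2.0, 20.0]])

# numpy's eigh returns the eigenvalues of a symmetric matrix in ascending
# order, so unpacking in reverse gives sigma_1 >= sigma_2 >= sigma_3.
sigma_3, sigma_2, sigma_1 = np.linalg.eigh(stress)[0]
print("sigma_1 = %.2f MPa (maximum normal stress)" % sigma_1)
print("sigma_3 = %.2f MPa (minimum normal stress)" % sigma_3)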
While the magnitudes of the stress components listed in Eq. (2.1) are directionally