Chapter 4: Simulation and Analysis of Flow Through an Aggregate
Stockpile using PFC3D.
Brian Parker
Virginia Tech Mining and Minerals Engineering
100 Holden Hall
Blacksburg, VA 24060
Abstract
In many aggregate mines, understanding the timing and flow of a stockpile is important for timed sampling, process adjustments, and overall stockpile safety. A model that can simulate particle flow through a pile makes it possible to address these issues and reduce downtime. This paper presents a simulation of aggregate-sized particles moving through a stockpile using circular particles in PFC3D. Results from the simulation show that PFC3D is capable of showing how stockpile particles move in all three dimensions while monitoring specific particles within the pile. These monitored particles provided a visual representation of how particles located directly above feeder exit points tend to travel faster through a discharging stockpile. Velocity graphs also showed that particles nearing the discharge point accelerated toward the discharge point before exiting the pile. In addition, the velocity graphs contained evidence suggesting that bridging, or “arching,” can be represented in a PFC3D simulation. While all results provided a better understanding of how particles travel through a discharging stockpile, future testing is recommended. Future simulations should include more accurately shaped particles along with a discharge point that operates like an actual stockpile discharge feeder. Combining more detailed simulations with more accurate results obtained from further RTD analyses should provide a more representative picture of actual stockpile flow.
1. Introduction
In the aggregate industry, stockpiling is the primary method of storing rock. Both product and plant rock are stored in stockpiles. Stockpiles are preferred over other storage methods because they accommodate the various required loading techniques and because the rock can simply be stored in this fashion. Silos and other storage devices are an unnecessary added cost, needed only if the material to be stored is toxic, flows freely, or requires a special discharge technique. While stockpile storage of aggregate material is preferred, little is known about particle flow through a stockpile when material is drawn from beneath the pile. This knowledge is essential for obtaining proper samples and for understanding how particles move through a stockpile.
While there is limited research on stockpiles and their internal flow, there is much
research involving quality control. Quality control is important in controlling product and
throughput for a plant. For example, stockpile performance numbers captured by a quality
management program were used to help make proper adjustments as shown in P. Keleher’s paper
(Keleher, Cameron et al. 1998). J.E. Everett (Everett and Kamperman 2000) discussed how simple adjustments to discharge rates and pickup timing can improve mine life and product quality. Another example is seen in M.G. Nelson’s paper, where push piles are used to blend different grades of phosphate by correctly using a real-time analyzer (Nelson and Riddle 2004).
These are just a few examples of the research associated with blending and quality control.
Other research typically addresses displacement of piles and its consequences. B. A. Quinn (Quinn and Partridge 1995) published an article focused on displacement of a stockpile due to fines. He performed lab tests to determine whether excessive fines and a lack of medium-size particles can cause a decrease in stockpile slope stability. Jenike & Johanson (Carson, Royal et al.) has done research focusing on segregation in stockpiles leading to rat-holing and sifting. Segregation is an issue when loading aggregate stockpiles because of the excessive amounts of fines that settle near the center of the pile. This research also improves overall safety and the understanding of how particles enter and leave a pile.
All of the research available today shows the importance of stockpile management but lacks an understanding of how particles flow within the pile. In addition, there is very little experimental research associated with stockpile flow. Research that could quantify the amount of mixing occurring within the pile would allow a better understanding of what to expect during discharge. In the case of aggregates, sampling the discharge of stockpiles is important for quality control and when specific tests are needed to rate performance. Stockpile adjustments can be very difficult to time and can skew data or cause excessive wait times when a sample of a specific rock is required. There are also times when various changes in rock
hardness and size can affect a plant’s efficiency or cause damage to various parts of a plant. The
addition of experimental stockpile flow research would provide a better understanding of the
mixing and flow rate occurring within an aggregate stockpile.
Given what we know today, a better understanding of how particles react during discharge of a stockpile would allow for better models and more efficient research. The purpose of this paper is to discuss the results obtained from a series of particle flow code simulations. These simulations are used to show how particles flow through a stockpile and how velocities vary within an aggregate stockpile during discharge. The results are also compared with results obtained from past experimental research in order to validate the model. The results open the door to future stockpile research and will hopefully lead to simulations capable of predicting flow through a stockpile.
2. Simulation of an Aggregate Stockpile
The simulation program that was used to model the flow of rock through an aggregate
stockpile was Particle Flow Code in Three Dimensions (PFC3D). The following section describes how the program operates and reviews past research that used it. This past research guided both the choice of particle flow program and the design of the models within it.
2.1. Particle Flow Code (PFC)
The numerical “Particle Flow Code,” or PFC, models sphere particle movement and
interaction using the discrete element method. PFC can operate in either 2D or 3D. Calculations in PFC are performed over a series of time steps. This keeps memory demands low, since the dynamic equations are solved anew at every time step rather than saving and reusing matrices as in an implicit scheme. However, many thousands of time steps are run, and calculations are performed on each particle at each step. Because each step usually represents only a fraction of a second, program run times are lengthy. This can be an issue, considering that many applications require a large number of particles to properly simulate a system (L. Lorig, 1995; Itasca, 1995).
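To make the explicit scheme concrete, the following is a minimal Python sketch (not PFC code; all names are illustrative) of one DEM time step: contact forces are recomputed from scratch, then velocities and positions are integrated explicitly, which is why run time grows with both particle count and step count.

```python
import numpy as np

def dem_step(pos, vel, radii, mass, k_n, dt, g=9.81):
    """One explicit DEM time step: contact forces -> Newton's law -> integration.
    Linear normal springs only; PFC also carries shear springs and friction."""
    force = np.zeros_like(pos)
    force[:, 2] -= mass * g                      # gravity on each sphere
    n = len(pos)
    for i in range(n):                           # O(n^2) pair check; PFC uses a cell search
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0:                    # spheres in contact
                f = k_n * overlap * d / dist     # repulsive normal force on j
                force[i] -= f
                force[j] += f
    vel += force / mass[:, None] * dt            # explicit velocity update
    pos += vel * dt                              # explicit position update
    return pos, vel
```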
Many papers have used PFC in a manner similar to this exercise. Hamid Nazeri (2000) used PFC3D to simulate rock movement through an ore pass, comparing clustered particles to circular particles and comparing both sets of results to measurements from the mine. It was determined in that paper that the stresses produced with circular particles were inconsistent with those of the actual ore. A similar paper followed the same premise of simulating rock in an ore pass using PFC2D (S.R.
2.2.3. Wall Development
While the stockpiles simulated in this exercise mimic those of an aggregate mine, the results are intended to show that movement of rock through any type of stockpile can be simulated using PFC. For this exercise, conical stockpile shells with heights of 10, 12, and 14 meters were first developed in PFC, each with a 40 degree angle of repose. A 40 degree angle of repose was considered an acceptable stockpile shape at full capacity (J. May, 1991). A floor was then made to act as the ground, using 4 individually created walls for each conical shell design. The 4 walls were placed so that either a 2 meter x 2 meter or a 2.5 meter x 2.5 meter opening was centered directly beneath the cone. Both opening sizes were simulated for each conical stockpile shell, resulting in 6 total analyses. Multiple stockpile sizes and discharge openings were evaluated in order to verify results. Each stockpile discharge was covered to prevent particles from exiting the simulated pile early. All edges that met with other edges were expanded half a meter to prevent leakage. A containment bin was then developed below the cone to capture particles exiting the pile. Figure 4.1 shows the layout of one of the conical particle containment units with its capturing bin.
Figure 4.1 - Stockpile shell and capturing bin before particle loading.
2.2.4. Particle Design and Insertion
The particle sizes used in this exercise were based on actual aggregate rock sizes. A sample taken from the primary stockpile at Luckstone’s Bealeton quarry showed rocks ranging from ultra-fine particles to boulders up to a foot in diameter. Because of the size of the cone and the time required to run the program, particle diameters ranged from 0.1 to 0.2 meters. While this eliminated fines completely, it represented the majority of the particles contained in an actual aggregate primary stockpile. These particle sizes were also chosen because the program tended to “freeze up” when smaller particles were used.
Locating the particles properly within the cone required a code to fit as many particles as possible. The code did this by filling gaps with correctly sized particles until a set porosity was met. In this case, an initial porosity of 40% was used because of computer limitations that arose during lower-porosity fills. While actual porosity should be around 20% (Masson, 1999), the initial 8,000 cycles were run with the discharge area plugged in order to decrease the pore space between the placed particles. Once particles had been placed at the set porosity, any particles not located within the cone were removed. This code resulted in a total of over 100,000 particles for each of the conical stockpile sizes.
Figure 4.2 presents an example of one of the modeled stockpiles as it appeared before
simulation.
Figure 4.2 - Stockpile loaded with particles before simulation.
2.2.5. Simulation Parameters
Parameters similar to actual conditions were used in the simulation of a stockpile. Gravity was set to 9.81 m/s², consistent with earth’s gravity at ground level. The walls’ normal stiffness was set high while their shear stiffness was set to zero, since no friction between the walls and the particles is necessary. The particles were assigned a normal stiffness of 1.0E6 N/m, a shear stiffness of 0.5E6 N/m, and a friction coefficient of 0.4. While these values were not determined for the particular aggregate rock being modeled, S. Masson used similar values in his studies of rock flow through a silo, and the rock in this case should be consistent with the rock used in Masson’s paper.
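Collected in one place, the stated inputs look like the following (a hypothetical Python record with illustrative names, not PFC3D input syntax; the wall normal stiffness value is assumed, since the text only states it was “set high”):

```python
# Hypothetical parameter record mirroring section 2.2.5.
SIMULATION_PARAMS = {
    "gravity_m_s2": 9.81,                 # earth gravity at ground level
    "wall_normal_stiffness_N_m": 1.0e8,   # "set high" -- exact value assumed here
    "wall_shear_stiffness_N_m": 0.0,      # frictionless walls
    "ball_normal_stiffness_N_m": 1.0e6,
    "ball_shear_stiffness_N_m": 0.5e6,
    "ball_friction_coeff": 0.4,           # after Masson's silo studies
}
```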
2.2.6. Particle Histories
In order to know how particles moved through the stockpile during the simulation, histories were kept for specified particles. These particles were selected based on location and their importance in showing the movement of the pile during discharge. While a history can be kept for every particle, only a select number were monitored. For the 10, 12, and 14 meter stockpiles, a total of 40, 45, and 49 particles, respectively, were monitored and had histories maintained during each simulation. An equal number of particles was monitored along each of the x and y axes, while a single particle was monitored at the top of the pile. Figure 4.3 shows the general location of the monitored particles along the x-axis; the particle distribution monitored along the y-axis was identical. The points in the figure represent the particles monitored for the 12 meter high conical stockpile design.
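A minimal sketch of such history-keeping, assuming hypothetical pos and vel arrays indexed by particle id (illustrative Python, not PFC’s FISH history mechanism):

```python
class ParticleHistory:
    """Record positions and speeds for a chosen subset of particles
    (e.g. the 40, 45, or 49 monitored per stockpile)."""
    def __init__(self, tracked_ids):
        self.records = {pid: [] for pid in tracked_ids}

    def sample(self, step, pos, vel):
        """Append (step, x, y, z, speed) for each monitored particle."""
        for pid, rows in self.records.items():
            x, y, z = pos[pid]
            speed = float((vel[pid] ** 2).sum() ** 0.5)
            rows.append((step, x, y, z, speed))
```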
Figure 4.3 - Monitored particle locations for the 12 meter stockpile analysis
Figure 4.5 - Image near end of simulation.
Figures 4.4 and 4.5 both show the behavior of a pile as particles are discharged. Figure 4.4 shows the immediate effect at the top section of the pile; the particles closest to the center of the pile appear to be affected immediately. Figure 4.5 shows that PFC is capable of developing its own random stockpile, as seen with the discharged particles. In this case, it appears that the original stockpile may have had a larger slope stability angle than should have been used. Comparing the two figures, Figure 4.4 shows heavy discharge of particles while Figure 4.5 shows a slower discharge rate. In operation, a pile is usually recharged while being discharged to keep a constant discharge flow. Considering that by the final figure the majority of the particles had already passed through the discharge point, a lower flow volume was expected.
2.3.1. Particle Displacement During Simulation
After over 1,500,000 time steps had been completed, the history files were inserted into a spreadsheet to locate and monitor how the particles moved through the pile. X-Z and Y-Z displacement graphs were developed to show the total particle flow through the pile. The following figures show the locations of each of the particles for which histories were retained for each of the stockpile simulations. These figures show where the particles started in the pile along with their movement through the pile during the simulations. A total of four figures are displayed for each stockpile size and discharge type: two figures representing the path each particle traveled on each axis (x and y) until the first particle passed through the discharge point, and two representing the path each particle traveled on each axis during the total simulation.
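A short sketch of how such X-Z path plots could be rebuilt from recorded histories, assuming the (step, x, y, z, speed) record layout from the history sketch above (matplotlib is an assumption; the thesis used a spreadsheet):

```python
import matplotlib.pyplot as plt

def plot_xz_paths(history, until_step=None):
    """Plot each monitored particle's path projected onto the X-Z plane,
    optionally truncated at the step when the first particle discharges."""
    for pid, rows in history.records.items():
        rows = [r for r in rows if until_step is None or r[0] <= until_step]
        xs = [r[1] for r in rows]   # x coordinate
        zs = [r[3] for r in rows]   # z coordinate (height)
        plt.plot(xs, zs, lw=0.8)
    plt.xlabel("x (m)")
    plt.ylabel("z (m)")
    plt.show()
```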
Figures 4.6 through 4.29 provide excellent visual examples of particle movement through a discharging stockpile. All simulations show a steep inner discharge cone forming at the top of the pile, which over time expands outward to form a wider, laterally shaped discharge cone. It can also be seen that particles on the outer edges remain stationary and are less affected by the flow of other particles or by the initial particle movement caused by the opening of the discharge point.
In addition to the general particle reactions during the simulation, the graphs representing “displacement at discharge” show how the particles move during the initial periods of the simulations. Most noticeable in these figures is that particles close to the center of the stockpile, directly above the discharge point, travel a greater distance over a short period of time than surrounding particles farther from the central discharge point. This is consistent with visual observations of actual discharging stockpiles.
Perhaps the greatest finding from the simulations, as represented in each of the previous graphs, is the consistency between stockpile and discharge sizes. While simulations of the larger stockpiles with the smaller, 2.0 meter, discharge opening took longer than those of the smaller stockpiles with larger openings, each stockpile simulation produced similar particle movement results with consistent visual representations.
2.3.2. Particle Velocities During Simulation
In addition to the X-Z and Y-Z displacement graphs, velocity graphs were developed in order to compare the changes in velocity of selected particles over the complete simulation. As can be seen in the following graphs, three particles were selected for monitoring for each representative stockpile size and opening (the 12 meter stockpile with the 2.0 meter opening has no velocity data). In each graph, the larger the circle, the higher the velocity of that particle at that location. In addition, the particle at the peak of the stockpile at the start of each simulation was graphed over the course of the simulation, showing the change in velocity based only on Z-axis location, or vertical placement, within the stockpile.
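A sketch of the bubble-style velocity plot described here, reusing the assumed (step, x, y, z, speed) record layout from the history sketch above; the marker scale factor is arbitrary, chosen only for visibility:

```python
import matplotlib.pyplot as plt

def plot_velocity_bubbles(rows):
    """Bubble plot in the style of figures 4.30-4.44: marker area grows
    with particle speed at each recorded (x, z) location."""
    xs = [r[1] for r in rows]
    zs = [r[3] for r in rows]
    sizes = [40.0 * r[4] for r in rows]   # arbitrary area scale for visibility
    plt.scatter(xs, zs, s=sizes, alpha=0.4, edgecolors="k")
    plt.xlabel("x (m)")
    plt.ylabel("z (m)")
    plt.show()
```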
Figure 4.44 - Instantaneous velocity during simulation at particle Z-axis location (14 m stockpile / 2.5 m discharge)
Figures 4.30 through 4.44 provide excellent visual representations of the changes in velocity over the course of the simulations. As seen in each “change in particle velocity for three particles during simulation” graph, particles tend to travel through the stockpile at a consistent velocity until they reach 1.5 to 3.0 meters from the discharge point. At this point, particle velocity increases as the particle begins exiting the pile with fewer surrounding obstructions. Each graph provides similar results regarding general particle velocity over the course of the simulation.
Also observable in each “change in particle velocity for three particles during simulation” graph are instances where particles suddenly increase in velocity within the center of the pile during simulation. While there is no way to determine the reason for this occurrence with certainty, it is believed that the particles increase velocity suddenly due to the formation of pockets within the stockpile; the increase in velocity results from the sudden release of each particle from its stationary location. This pocketing occurrence, known as “arching” or “bridging,” occurs in stockpiles either because particles lock together when the stockpile consists of large rocks relative to the discharge size, or because the particles pack together at a point above the outlet, forming an obstruction (Maynard 2004). During each
simulation, either case could have existed, as sudden velocity increases can be seen at various locations along each pile for both 2.0 and 2.5 meter discharge sizes.
Each “instantaneous velocity during simulation at particle Z-axis location” graph provides a more representative view of velocity as the peak particle travels closer to the discharge point. These graphs show results similar to the “change in particle velocity for three particles during simulation” graphs, but they also assign a value to the velocity of the peak particle. As seen in each graph, the tracked particles tend to increase in velocity between 2.0 and 3.0 meters above the pile discharge point. In many cases, the velocity reaches 2.0 meters per second near the discharge location. This observation leads to the belief that the particles actually begin to free fall before discharging from the stockpile. While this may be an accurate representation of round particles flowing through a large discharge point, it is believed to be an inaccurate depiction of the velocity of rocks exiting a stockpile via a discharge point. The trend seen in each graph, however, may represent the actual trend in which rocks travel while exiting a stockpile, only at a much lower velocity during the course of discharge.
3. Conclusions
After analyzing the graphs created from the simulation data, much can be determined. The particle displacement graphs provided what is believed to be an accurate representation of how particles move through a discharging stockpile. In particular, particles traveled from their points of origin toward the discharge point in a fashion representative of what is observed in actual discharging stockpiles. Since no known detailed study has yet been performed that traces detailed particle movement through a pile, visual comparison is the best available analysis. In addition, the particle displacement results showed that particles closer to the center of the stockpile, directly above the discharge point, traveled faster through the pile than particles which began farther down the outslopes of the stockpile. These results even held true for particles which began at the peak of the pile, farther from the discharge point than some of the particles lower on the outslopes. Overall, these findings are consistent with visual observations and with findings from the Residence Time Distribution (RTD) analysis discussed in Chapter 3.
Velocity graphs created from the simulation data also provided information essential to better understanding how particles travel within a discharging stockpile. As seen in all velocity graphs, particles tended to travel at a consistent velocity until 2.0 to 3.0 meters from the discharge point. While the recorded velocities are believed to exceed the actual velocities of rocks traveling through a discharging stockpile, the velocity trend is believed to be accurate. Another key finding is what is believed to be “arching” within the stockpile flow simulations. While the data collected from these simulations provides little
information as to why particles suddenly increased in velocity at various locations in the pile, it provides promising information that may be essential to future studies.
Overall, the information collected from these simulations is an important step toward developing simulations capable of mapping particle flow consistent with actual discharging rock stockpiles. The simulations provided results consistent with how stockpiles are expected to react during discharge. In addition, the graphs demonstrated the consistency of the simulations and the reliability of the PFC3D program. While this data cannot be used directly to predict residence time within a stockpile or the likelihood of arching, it does show that PFC3D is capable of providing accurate flow representations of discharging stockpiled material. Future analyses should include more accurately shaped particles along with a discharge point that operates like an actual stockpile discharge feeder. Combining these refinements with empirical data collected from a stockpile of similar size and design would allow a direct comparison to be made, possibly resulting in equations capable of predicting both flow and safety issues encountered while discharging from stockpiles of any shape.
Chapter 5: Final Summary and Conclusions
1. Summary
Stockpiles are used throughout mining in preference to other storage methods because of the various loading and unloading techniques they accommodate and because the rock can simply be stored in this fashion. Silos and other storage devices are an unnecessary added cost, needed only if the material to be stored is toxic, flows freely, or requires a special discharge technique. While there are many obvious reasons to choose stockpiling over other aggregate rock storage techniques, little is known about how particles actually travel through a bottom-discharging stockpile.
Current stockpile research focuses on overall improvements in blending and final product. Information on how a stockpile reacts during a discharging procedure is lacking. Both horizontal and vertical movement have been expected to exist in a pile, but exactly how the particles move during discharge has remained unknown. This paper provides results regarding the displacement and velocity to be expected in an actual discharging stockpile.
While the results found in this paper are raw, they do demonstrate the capability to both map and predict displacement based on particle location in a stockpile. Building on these findings can help many obtain a better understanding of how stockpiles perform during bottom discharge procedures. This information may lead to better sampling, safer stockpiles, and an increased understanding of when to make process adjustments based on incoming rock parameters.
2. Conclusions and Future Recommendations
Based on the results compiled in Chapter 3, several conclusions can be drawn. Overall, it was determined that the aggregate stockpile discharges similarly to a plug flow system. It was determined that the introduction of rocks to the pile influences mixing of the pile more than the actual discharging of rocks from the pile does. During discharge, it was evident that the “active” area of the pile resembled a plug flow system. The generally high Peclet numbers indicate that the system as a whole tends to operate as plug flow; this is especially evident when outliers, i.e., the particles which separated from the general center of the pile during conveyor loading, were removed. While mean residence times were useful in understanding the general residence time within each pile, they were not very useful in generating an overall understanding of retention within the stockpile.
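For background (this general definition is supplied here and is not quoted from Chapter 3), the Peclet number in RTD analysis compares convective transport to dispersive mixing:

$$Pe = \frac{uL}{D}$$

where $u$ is the mean flow velocity, $L$ a characteristic flow length, and $D$ the axial dispersion coefficient; large $Pe$ indicates plug-like flow, while small $Pe$ indicates a well-mixed system.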
Results obtained in Chapter 4 show that aggregate stockpile simulations prepared using PFC3D are capable of mapping particle flow consistent with actual discharging rock stockpiles. The simulations provided results consistent with how stockpiles are expected to react during discharge. In addition, the graphs demonstrated the consistency of the simulations and the reliability of the PFC3D program. While this data cannot be used directly to predict residence time within a stockpile or the likelihood of arching, it does show that PFC3D is capable of providing accurate flow representations of discharging stockpiled material. Velocity results obtained from the simulations are consistent with actual field results and demonstrate the ability of the simulations to reveal arching and other potential safety issues.
Collectively, Chapters 3 and 4 provide information essential to a better understanding of how stockpiles discharge. Both show that the area directly above the discharge point, known as the “active” area, tends to travel through the pile more quickly than other areas, behaving as a plug flow system. In addition, it should be noted that additional RTD analyses would allow a better simulation to be developed.
While much of this information is useful, there is much more research that could be completed in order to obtain even more valuable data. For the field (RTD) analyses, it is suggested that future physical flow research perform a larger number of RTD analyses in order to develop a correlation between loading and discharge features. In addition, future research should perform residence time distribution analyses from various starting locations on the pile. This information could lead to the development of rate equations based on stockpile height or volume. Future modeling and simulation research should include more accurately shaped particles along with discharge point settings that operate like an actual stockpile discharge feeder. Combining these refinements with empirical data collected from a stockpile of similar size and design would allow a direct comparison to be made, possibly resulting in equations capable of predicting both flow and safety issues encountered while discharging from stockpiles of any shape. Increasing the number of analyses and tuning the simulations to better reflect actual stockpiles could provide a model capable of predicting flow information no matter where a particle is located within the pile. Obtaining this type of information could revolutionize stockpiling and could even save lives.
CHAPTER 1 INTRODUCTION
In U.S. underground coal mines, concrete block stoppings are used to control the
flow of air throughout the mine. Although stopping construction is commonplace in the
mining industry, little work has been done to fully understand the deformation behavior
and failure mechanisms of stoppings subjected to excessive roof to floor convergence.
Excessive convergence is especially prevalent in longwall panels, an area where
maintaining the structural integrity of stoppings is crucial in assuring the quality of
ventilating air currents.
Stoppings are used in underground mines primarily as a means of controlling ventilation, and failure of stoppings can lead to a variety of health and safety hazards. The principal purpose of building stoppings is to form air courses that facilitate removal of dust and methane gas from the mine. Failed stoppings may permit clean intake air to escape the mine without reaching distant working faces, allowing a buildup of methane gas that could result in a fire or explosion. Stoppings also serve as barriers to separate air
courses, which create smoke-free entries that can be used as escapeways in case of fire.
A failed stopping may allow smoke to contaminate an otherwise clean airway and hinder
safe escape in the case of an emergency evacuation. A less catastrophic hazard that is
encountered more commonly is the potential for injuries resulting from lifting heavy
blocks. Stopping blocks are often heavy (40 to 55 lbs) and rebuilding damaged stoppings
exposes workers to a greater risk of musculoskeletal injuries from repetitive heavy lifting.
Since stoppings are primarily a ventilation control, they are built to withstand
certain levels of transverse loading due to air pressure. However, under normal
conditions, stoppings are at least as likely to fail from vertical loading caused by roof to floor convergence as from transverse loading. To date, little research has focused on
the way in which stoppings behave and fail when subjected to vertical loading and no
acceptable design procedure exists. Thus, in the absence of firm design guidelines, trial
and error techniques prevail. Because the performance of stoppings is complex and
difficult to analyze, numerical models were employed in this investigation.
CHAPTER 2 LITERATURE REVIEW
2.1. VENTILATION STOPPINGS
Proper ventilation of a mine is important to prevent fires and explosions and to
provide uncontaminated air for the health and safety of mine workers. One of the
primary means of controlling mine ventilation is through the use of temporary and
permanent ventilation stoppings. Temporary stoppings are generally constructed from
fire-resistant fabric, plastic, lumber, or sheet metal, while permanent stoppings are usually
built using solid or hollow concrete blocks (Ramani 1992). Since they are typically in
place for extended periods of time, permanent concrete block stoppings are at a higher
risk of being damaged by roof to floor convergence and are the topic of this report. The
importance of intact stoppings can be clearly seen in the following example. In figure
2.1, clean air enters the mine through the intake entry and continues to the working faces.
After ventilating the faces, the return air splits and exits the mine through the return
entries. If any of the stoppings in this mine layout fail, the ventilation system will be
disrupted. For example, a stopping failure in one of the crosscuts between the intake and
return entries at the bottom of the diagram could cause the intake air to short circuit the
ventilation system and enter the return airway without ever reaching the working faces.
This would leave the faces without adequate ventilation, possibly resulting in a hazardous
concentration of methane and an atmosphere depleted of oxygen.
2.2. FAILURE CRITERIA
In studying the behavior of stoppings, it is necessary to test for failure so the
stability of the structure can be evaluated. Throughout this study, material failure was
evaluated with stress-strain curves produced during testing. In each of the physical tests
and in the numerical models, loads were applied to induce failure and maintained beyond
the ultimate load capacity of the materials to establish post-failure behavior. Goodman
(1989) calls this process generating a “complete stress-strain curve” for the material. In
such a curve, the ultimate strength of the sample occurs at the point of peak stress.
Beyond this point the sample begins to shed load and fail, displaying its post-failure
behavior. In looking at a specific point or discontinuity within a sample, the Mohr-Coulomb failure criterion was used to assess the state of failure. Although other failure criteria have been developed, the Mohr-Coulomb failure criterion was chosen because it is commonly used in the study of soil and rock mechanics to determine the peak stress of a material subjected to various confining stresses. The failure criterion consists of normal stress, σ_n, and shear stress, τ, axes and a failure envelope just touching all possible critical combinations of the principal stresses, σ_1 and σ_3 (Goodman 1989). In figure 2.3, which shows a typical failure envelope, the semi-circles represent principal stress combinations; φ represents the angle of internal friction of the sample; C represents the cohesion (or shear strength intercept); and T shows the tensile strength. A tensile limit, T_o, is used when the tensile strength of the material is lower than the strength defined by the failure criterion.
Using the Mohr-Coulomb failure criterion, it is possible to determine whether or not failure will occur at a point within the model under certain stress conditions based on the properties of the material at that point. Following a graphical procedure described by Obert and Duvall (1967), the known vertical and horizontal stresses, σ_y and σ_x, can be plotted along the σ_n axis as shown in figure 2.4. The shear stress, τ_xy, can then be plotted along the τ axis, intersecting σ_x and σ_y. A line drawn between these two intersections will cross the σ_n axis at the point equal to one half the sum of σ_x and σ_y, which is the center of the Mohr circle. The circle can be drawn using the two intersections as diameter endpoints, as shown in figure 2.4. The points where the circle intersects the σ_n axis are the major and minor principal stresses, σ_1 and σ_3. In addition to the graphical method presented here, σ_1, σ_3, and the maximum shear stress, τ_max, can be calculated using the following equations:

$$\sigma_1 = \frac{\sigma_x + \sigma_y}{2} + \tau_{max} \qquad (2.1)$$

$$\sigma_3 = \frac{\sigma_x + \sigma_y}{2} - \tau_{max} \qquad (2.2)$$

$$\tau_{max} = \sqrt{\tau_{xy}^2 + \left(\frac{\sigma_y - \sigma_x}{2}\right)^2} \qquad (2.3)$$

To show how the Mohr circle can be used, an example is given in figure 2.5. In this example, the material has an angle of internal friction of 32° and a cohesion of 450 psi. When a vertical load was initially applied to the sample, the horizontal stress, σ_x, was 10 psi, the vertical stress, σ_y, was 330 psi, and the shear stress, τ_xy, was 10 psi. These values were used to plot the small circle in figure 2.5, which is shown below the failure criterion.
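As a worked check of equations 2.1 through 2.3 and the failure test, the following Python sketch (not part of the original study; all function names are illustrative) computes the principal stresses for the figure 2.5 example and tests them against the Mohr-Coulomb envelope written in principal-stress form:

```python
import math

def principal_stresses(sx, sy, txy):
    """Equations 2.1-2.3: principal stresses and max shear from plane stresses."""
    t_max = math.sqrt(txy**2 + ((sy - sx) / 2.0)**2)   # eq. 2.3
    center = (sx + sy) / 2.0
    return center + t_max, center - t_max, t_max       # sigma1, sigma3, tau_max

def mohr_coulomb_fails(sigma1, sigma3, cohesion, phi_deg):
    """Failure when the Mohr circle touches the envelope tau = C + sigma_n * tan(phi),
    written here in principal-stress form."""
    phi = math.radians(phi_deg)
    sigma1_limit = (2 * cohesion * math.cos(phi)
                    + sigma3 * (1 + math.sin(phi))) / (1 - math.sin(phi))
    return sigma1 >= sigma1_limit

# Figure 2.5 example: phi = 32 deg, C = 450 psi,
# sigma_x = 10 psi, sigma_y = 330 psi, tau_xy = 10 psi.
s1, s3, t_max = principal_stresses(10.0, 330.0, 10.0)
print(f"sigma1 = {s1:.1f} psi, sigma3 = {s3:.1f} psi, tau_max = {t_max:.1f} psi")
print("fails:", mohr_coulomb_fails(s1, s3, 450.0, 32.0))  # False: circle sits below envelope
```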
2.3. NUMERICAL MODELING
Numerical modeling is a tool used in the study of a variety of physical processes.
Typically, numerical methods are used to solve partial differential equations that describe physical processes, such as heat transfer, stress and displacement, and fluid and
current flow. Relatively straightforward stress and displacement problems in science and
engineering are often solved analytically using equations of physics. More complex
problems with non-linear material properties can only be solved numerically (Heasley
2003). Because of the wide variety of problems to be solved using numerical modeling,
numerous codes have been developed, each of which is best suited to solving
certain conditions. The number of available programs is quite large and selecting the
most appropriate one for a particular task is important.
2.3.1. NUMERICAL MODELING METHODS
Some of the most common numerical modeling methods used in science and
engineering applications today include: finite element, finite difference, discrete element,
and boundary element. Other methods exist, but these are currently the most commonly
used methods. Each method has a unique history and certain physical and numerical
conditions for which it is most appropriate.
Today, many commercially available programs use the finite element method.
This method was developed around 1960 (Reddy 1984) and is an implicit code based on
continuum mechanics. Using the finite element method, the area of interest is discretized
into some number of finite elements. The relevant equations are then solved for each
element and combined to generate an overall problem solution. The finite element
method, which includes some of the most flexible and sophisticated programs (Heasley
2003), is widely used to solve a variety of problems in science and engineering. The
method is versatile and examples of its use have been well documented over the past
forty years.
The finite difference method, originally used in the late 1970s (Itasca, FLAC
Users 2000), is another technique with a relatively long history. The finite difference
method differs from the finite element method primarily in that it is an explicit method,
using an iterative scheme to solve the equations of motion for each element based on
stress and force values and a prescribed “difference” from neighboring elements (Itasca,
FLAC Theory 2000). This allows disturbances to propagate throughout the model over
many timesteps. Although the equations are derived differently, the finite element and
finite difference methods will produce the same results in most cases. In general, the
finite difference method is a more suitable choice when the process being modeled is
nonlinear and when large strains and physical instability will be encountered (Itasca,
FLAC Theory 2000). As these conditions often apply to rock masses, the finite
difference method is particularly well suited to modeling soil and rock mechanics
problems.
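As a minimal illustration of this explicit, time-marching idea (a generic sketch, not FLAC’s actual implementation; all values are arbitrary), consider a one-dimensional elastic bar: each timestep updates stresses from strains and velocities from the resulting out-of-balance forces, so a disturbance applied at one end propagates through the grid over many steps.

```python
import numpy as np

n, E, rho, dx = 100, 2.0e9, 2000.0, 0.1
c = (E / rho) ** 0.5                # elastic wave speed
dt = 0.5 * dx / c                   # below the explicit stability (CFL) limit
u = np.zeros(n)                     # nodal displacements
v = np.zeros(n)                     # nodal velocities
v[0] = 1.0                          # disturbance applied at one end
for step in range(500):
    strain = np.diff(u) / dx        # strain in each element between nodes
    stress = E * strain             # elastic constitutive update
    force = np.zeros(n)
    force[:-1] += stress            # element stress acts on its left node...
    force[1:] -= stress             # ...and oppositely on its right node
    v += force / (rho * dx) * dt    # equation of motion, explicit in time
    u += v * dt
```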
The discrete element method is a discontinuum code that can be used to model
multiple blocks, which make up a rock mass. This method allows finite displacement and
rotations of finite bodies, which are even allowed to detach completely from one another.
Additionally, the discrete element method recognizes new contacts automatically during
calculations (Itasca UDEC Users 2000). Some finite element and finite difference
programs include interface elements that allow the user to incorporate a few
discontinuities, but most of these break down when a large number of discontinuities are
encountered or when displacements along the discontinuities become too large (Itasca
UDEC Theory 2000). These characteristics make the discrete element method an ideal
code for modeling jointed rock masses.
Within the family of discrete element programs, there are four different types of
code that can be distinguished based on the way in which bodies and discontinuities are
represented. The distinct element method uses rigid or deformable bodies and contacts
are deformable. An explicit time-marching scheme is used to solve the equations of
motion and the calculations progress the same way as in the finite difference method. A
particle code, PFC2D (Itasca 1993), is also based on the distinct element method and is
used to model granular materials. Modal methods behave essentially the same way as the
distinct element method when the blocks are rigid. However, when blocks are
deformable, modal superposition is used to calculate stress and strain. This method is
well suited for loosely packed discontinua. In the discontinuous-deformation analysis, or
DDA, method, all contacts are rigid and bodies can be rigid or deformable. Because of
the requirement that contacts are rigid, the blocks are not permitted to penetrate each
other and an iterative scheme is used to achieve this condition. Block deformability in
the DDA method comes from superposition of strain modes, similar to the modal
methods. Finally, using the momentum-exchange methods, all contacts and bodies are
rigid and momentum is exchanged between two bodies during collision. Sliding along
discontinuities can be represented using this method, but not block deformation (Itasca
UDEC Theory 2000).
The fourth type of numerical modeling code, the boundary element method, is a
technique that has the advantage of only requiring the boundary to be discretized. This
reduces the amount of time and computer resources that must be used creating mesh and
running the models, assuming the problem situation allows a boundary-type analysis
(Crouch and Starfield 1983). This method is appropriate to geomechanical problems
where the ore deposit or seam is of interest and the surrounding rock can be considered
one material (Heasley 2003).
2.3.2 NUMERICAL MODELING METHOD SELECTION
Selection of numerical methods for this project was based primarily on software
availability and the applicability of each method to geotechnical scenarios. The initial
phase of this project involved modeling a single concrete block with a finite difference
program, FLAC v. 4.0 (Itasca 1986). For the second stage of the project, modeling a
wall, it was necessary to incorporate many intersecting discontinuities into the model.
Although it is possible to incorporate discontinuities into a FLAC model, creating a
model with the number of discontinuities that are present in a wall is cumbersome and
can create instabilities within the program. Choi (1992) states that, although it may be
possible to incorporate a limited number of discontinuities into a continuum model, this
type of model is not suitable for modeling a rock mass where large scale sliding,
separation, and rotation will occur along discontinuities. For this reason, a discrete
element program was chosen for modeling the wall. The distinct element program,
UDEC v. 3.1 (Itasca 1985), was an appropriate choice because of its similarity to FLAC
and because of its ability to model deformable blocks and deformable contacts. Both
software packages, FLAC and UDEC, have several built-in constitutive models that are
representative of geologic material, making them ideal for mining and other geotechnical
applications.
2.3.3 MODELING METHODOLOGY
Starfield and Cundall (1988) suggest a series of guidelines for modeling that can
be applied to all rock engineering projects. The first step in developing a model should
be to clearly define the reason for building a model and what questions the model will
answer. Next, a model should be used as early as possible in the project. Starfield and
Cundall suggest building a conceptual model even before field data is collected so that
field tests can be designed well. The third guideline is to look at the mechanics of the
problem to ensure that the model is not only producing an output similar to field data, but
that the correct modes of deformation and failure are being simulated. Starfield and
Cundall suggest using simple numerical experiments to eliminate conflicting ideas about
behavior observed during field tests. It is important to design the simplest model possible
without neglecting the most important mechanisms and run this model. If a simple model
behaves as expected, more complex models can follow, but the simplest models should
be used to identify weaknesses in the concept or model being tested. Finally, Starfield
and Cundall suggest that if the only available model has weaknesses that cannot be
remedied, a series of simulations should be conducted that will bracket the actual
behavior seen in physical studies. These guidelines are especially useful in a rock
mechanics problem where the properties of all the materials involved are often unknown.
2.3.4 MODEL FEATURES
During the process of model development, two special features available in UDEC were used to simulate conditions observed in the physical testing: the cell space mapping configuration and the Voronoi tessellation generator. These features have been
used to simulate a limited number of conditions and have been described by Itasca and
other users of the software.
When UDEC is run in its standard configuration, a rock mass is represented by a
collection of distinct blocks. These blocks are created by first defining a mass and then
placing connecting cracks throughout the mass to break it into any number of smaller
blocks. UDEC recognizes the blocks and their relative locations by tracking the contact
coordinates as the model runs. Additionally, UDEC stores a network of “domains,” which are the spaces between contacts, and these domains are constantly updated.
The default method does not work well for a system in which blocks are likely to
move apart from each other because as the domains get large, they become poorly
defined. In order to remedy this problem, an alternative configuration exists within
UDEC, known as “cell space mapping.” In the cell space mapping configuration, each
block is defined individually, permitting spaces to exist between blocks. Hart (1993)
explains that with this configuration, each block is assigned an “envelope space,” which
is defined as the smallest rectangular area with sides parallel to the coordinate axes into
which the block will fit. The envelope space for each block is then mapped into a
rectangular grid of cells. UDEC is equipped with a search function that allows it to
identify the location of each block and the neighboring blocks based on which cells
contain each block’s envelope space. This configuration was developed as a means of
allowing blocks to detach completely from one another and to bounce, such as blocks
rolling and bouncing down a slope (Itasca, UDEC Theory 2000).
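A simplified Python sketch of the idea, with invented names (UDEC’s actual implementation differs in detail): each block’s axis-aligned envelope is hashed into grid cells, and candidate contacts are any blocks sharing a cell.

```python
from collections import defaultdict

def build_cell_map(blocks, cell_size):
    """Map each block's rectangular 'envelope space' (axis-aligned bounding box,
    given as (xmin, ymin, xmax, ymax)) into a grid of cells."""
    cells = defaultdict(list)
    for block_id, (xmin, ymin, xmax, ymax) in blocks.items():
        for ix in range(int(xmin // cell_size), int(xmax // cell_size) + 1):
            for iy in range(int(ymin // cell_size), int(ymax // cell_size) + 1):
                cells[(ix, iy)].append(block_id)
    return cells

def possible_neighbors(cells, envelope, cell_size):
    """Candidate contacts: every block whose envelope shares a cell with this one."""
    xmin, ymin, xmax, ymax = envelope
    found = set()
    for ix in range(int(xmin // cell_size), int(xmax // cell_size) + 1):
        for iy in range(int(ymin // cell_size), int(ymax // cell_size) + 1):
            found.update(cells.get((ix, iy), []))
    return found
```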
Another special feature available in UDEC that was used in this project is the
Voronoi tessellation generator. According to the UDEC User’s Guide (2000), the
Voronoi generator is used to subdivide blocks into randomly-sized polygons with an
average edge length that is user-defined. Thus, the size and number of blocks can be
assigned as well as the degree of uniformity, which is specified by means of a variable
iteration number.
One use of the Voronoi generator is to simulate progressive cracking within a
material. Lorig and Cundall (1987) explain how this feature can be applied to concrete.
They have used the Voronoi generator to break a concrete beam into many randomly-
sized blocks. These Voronoi cracks are then assigned properties such that they represent
pre-existing bonds between elements. As the beam is loaded, the Voronoi cracks will
fail, leaving the elements intact, in much the same way that cracks form and propagate
through concrete subjected to compressive loading. The authors note that no attempt is
made to model the exact shape, size, or location of particles within concrete. Rather, they
found that failure of a group of randomly placed Voronoi cracks closely resembled the
type of failure patterns observed in physical specimens.
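As a rough stand-in for UDEC’s generator, the following Python sketch builds a random Voronoi tessellation with scipy and makes the polygons more uniform through a few Lloyd-style relaxation passes; the analogy to UDEC’s user-defined iteration number is approximate, not a reproduction of its algorithm.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(50, 2))    # 50 random seeds in a unit square
for _ in range(3):                              # relaxation passes -> more uniform cells
    vor = Voronoi(points)
    new_points = []
    for region_index in vor.point_region:       # region owned by each seed, in order
        region = vor.regions[region_index]
        if -1 in region or not region:          # keep unbounded boundary cells as-is
            new_points.append(points[len(new_points)])
            continue
        new_points.append(vor.vertices[region].mean(axis=0))  # move seed to centroid
    points = np.clip(np.array(new_points), 0.0, 1.0)
vor = Voronoi(points)                           # final polygon network
```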
2.4 CONCRETE STUDIES
Although little effort has been made to study the behavior of concrete block
stoppings, concrete has been studied extensively outside of the mining industry. Most of
this work has been done within the civil engineering industry. Studies have focused on
the failure mechanism of concrete, primarily on a small scale such as local crack
propagation, and to a lesser extent on a larger scale. A few studies have looked at the
effects of surface imperfections on concrete performance. Numerical modeling has been
used as a tool to simulate the behavior of concrete and study the failure of concrete.
However, much of this work is not applicable to the study of dry-stacked concrete block
walls because of the use of mortar and reinforced concrete in the majority of the studies.
Although it is difficult to make a direct comparison between studies conducted with
different conditions, the knowledge gained from some of these studies can be applied to
concrete block stoppings.
The study of the failure mechanisms of concrete encompasses both local fracture
propagation and large scale failure of walls and other structures. According to Mohamed
and Hansen (1999), “the macroscopic mechanical behavior of concrete materials is
directly related to fracture processes at the microlevel.” For this reason, many studies
have focused on understanding the propagation of cracks through the concrete at a
particle level. Yip et al. (1995) describe this process as the growth of microcracks which
originate from inherent localized defects in the concrete structure. As a compressive load
is applied to the material, tensile stresses develop in the area of these defects and result in
the formation of microcracks. These microcracks then make up the next set of defects
from which more cracks will develop as the load on the material increases. Rossi, Ulm,
and Hachi (1996) describe a similar process, but explain that the “defects” are actually
the effect of the existence of hard and weak points throughout the material, resulting from
the heterogeneity of the material. Concrete is described as a two-phase material,
comprising aggregates and cement paste, which do not have the same stiffness, resulting
in local variations in material strength. Vonk et al. (1991) take the process further and
explain that cracks begin specifically at the interfaces between cement-paste and
aggregates.
Although most discussion of concrete failure mechanisms consists of small-scale
studies of crack propagation, a limited amount of work has been done on a larger scale.
Since the vast majority of masonry includes mortar, the large-scale studies of failure
mechanism generally include mortar as well. One study involving testing of laboratory-
scale masonry walls was specifically designed to test the interaction between bricks and
mortar. In this test, the stress induced by loading on the wall induced tensile cracking
“from the existing microdefects in the mortar or at the mortar/brick interface through the
mortar beds, debonding or breaking the bricks” (Guinea et al. 2000). In a study on the
failure mechanisms of mortared masonry panels, Andreaus (1996) describes a number of
potential failure modes for the panels depending on the stress state in the joints. The
failure mechanisms presented in this study range from failure only in the joints to various
combinations of mortar and block failure.
Few studies have specifically focused on the effects of surface irregularities on
block strength and performance. However, at least two studies have dealt with the issue
peripherally. In testing reinforced concrete panels, Thienel and Shah (1997) note that the
surface quality of a panel could influence the shape of the stress-strain curve or the
location of cracks. Testing of four samples, two with ground surfaces and two with
original surfaces showed a slight increase in panel strength for the ground blocks, but the
stress-strain curves were similar. In this particular test, the surface quality was only of
minimal importance. However, the tests were conducted on solid panels and the number
of surfaces in contact with each other was greatly reduced compared to testing a wall
made up of many blocks. In the laboratory-scale wall tests mentioned in the previous
section, each block was cut with a diamond saw and ground to guarantee planar surfaces
(Guinea et al. 2000). In this case, the authors were concerned about the influence of
surface irregularities despite the use of mortar in the joints.
Some work has been done using numerical models to study the behavior of
concrete. Since concrete is prone to cracking, these models focus primarily on the study
of crack formation. Schlagen and van Mier (1992) observe that in the study of crack
formation in a non-homogenous material such as concrete, the best simulation would be
one on an atomic level. Despite advances in computing power in recent years, a particle
analysis on an atomic level would require a substantial amount of time and computer
resources, particularly if one were to model a structure of significant size. Instead,
Schlangen and van Mier (1992) suggest a lattice model in which a mesh is constructed and
projected onto a generated concrete grain structure. Each element in the mesh can then
be assigned material properties for either aggregate or mortar. Rossi, Ulm, and Hachi
(1996) propose a probabilistic model, which is similar except that each element is
assigned material properties based on the probability of any given particle having a
certain set of properties. Another model is presented by Nagai et al. (2002) in which a
Voronoi diagram is used to introduce a random geometry to the model. Each element
within the model then represents either aggregate or mortar and material properties are
assigned based on a statistical method similar to that used in the previous example. Like
the previous examples, this method is used to study crack formation in very small
samples (approximately 4” x 8”).
Another type of model that has been used to study cracking on a small scale is the
fictitious crack model. Prasad and Krishnamoorthy (2002) describe the fictitious crack
model as a means of studying the local behavior in the vicinity of a crack. This type of
model is particularly well-suited to studying the state of stress of a material at the tip of a
crack as it propagates through the material and has been incorporated into both a finite
element analysis and a boundary element analysis (Aliabadi and Saleh 2002).
Although any of these methods can be useful for studying the mechanics of crack
formation on a small scale, there are some drawbacks to using such “micro-mechanical”
models. Xu and Pietruszczak (1997) point out that although micro-mechanical models
are physically appealing in that they incorporate the mechanical characteristics of the
material (such as size and shape of the aggregate) there are great difficulties in obtaining
accurate material properties and in using these models for structural analysis.
Some work has been done using numerical modeling to study full-scale concrete
block walls, or masonry panels, but nearly all of this work has been limited to mortared
walls. Dialer (1992) used a distinct element model to simulate the behavior of masonry
panels (mortared concrete block walls) subjected to uniaxial compressive loading.
Although he was able to accurately simulate the behavior observed in laboratory testing,
the failure was confined to the mortar joints between the blocks. Andreaus (1996)
conducted a similar study, producing a collection of data in which all wall failure
occurred in the mortared joints. Lorig and Cundall (1987) did a unique study in which
the Voronoi generator in UDEC was used to model failure of a concrete beam. In this
study, the concrete was modeled as an elastic material and failure along the random
pattern of Voronoi joints was used to represent tensile cracking throughout the concrete.
This is one example of numerical modeling used to simulate cracking on a large scale
that can be applied to a structural analysis and is similar to the method applied in this
report.
Figure 3.1 Mine Roof Simulator During Longwall Shield Testing
Numerous block types are commercially available for use in mine stoppings,
many of which are specially designed to be lightweight. A standard CMU (concrete
masonry unit) block was chosen for testing in this project. As shipped from the
manufacturer, the blocks have the same nominal dimensions (15.5” x 5.5” x 7.5”),
although small variations are expected because of the process used to form the blocks.
Three standard CMU blocks were measured and weighed and the results are given in
table 3.1. The tests were carried out by lowering the upper platen until it made contact
with the block. The lower platen was then raised at a constant velocity of 0.1 inches per
minute. The load applied by the platen was measured and the displacement increased
until failure of the block occurred. The resulting load-displacement curves for the three
blocks are shown in figure 3.2.
In order to consider the results of a larger number of block samples, data was collected from three additional test sets. Each set of blocks was tested using the
procedure described above and the blocks within each group were obtained from the
same manufacturer. These tests produced load-displacement curves for sixteen blocks, as
shown in figure 3.3.
[Figure: vertical load (kips) versus vertical displacement (in), 0.000 to 0.250 in, for Block Sets 1 through 4]
Figure 3.3 Load-Displacement Curves Resulting From Single Block Tests in MRS
The data shown in figure 3.3 demonstrate the range of behavior exhibited by
concrete blocks from different manufacturers and also within each group. These
differences make it difficult to generate a summary curve. If all the curves were
combined to form an “average” curve, it would likely not represent realistic block
behavior. Thus, it was necessary to consider each curve and select a single curve to be
used in calibrating the model.
Block sets one and two were selected from two different block manufacturers that
supply materials to the Lake Lynn Laboratory and the Safety Research Coal Mine, both
located at the NIOSH Pittsburgh Research Laboratory. The blocks tested in set three
were from the palette of blocks used in the seven course wall test described in chapter
four (section 4.2.2.). The blocks selected for set four were prepared before testing by
grinding the top and bottom faces. This eliminates any variations in test results due to
imperfections in the block surfaces.
The collection of test results shown in figure 3.3 indicates that imperfections in
the block surfaces result in a lowered block strength. In selecting a curve to be used in
calibrating a single block model, it was important to take this behavior into account.
Since the numerical model simulates a “perfect” block with parallel sides and no surface
imperfections, the data from block set four was selected for use in model calibration.
In addition to the load-displacement curves shown in figure 3.3, information
about the failure mechanism was obtained by observation. It was noted that initial
failures appeared in the block corners and propagated toward the center, eventually
leading to complete destruction of the block.
3.2. MODEL TYPE
The first model that was constructed was a single concrete block between two
steel platens. There are many parameters that dictate the behavior of this type of model.
The most basic of these parameters is the constitutive model chosen to represent the
material. FLAC version 4.0, which was used to create the single block model, has ten
built-in constitutive material models: null, isotropic elastic, transversely isotropic elastic,
Drucker-Prager, Mohr-Coulomb, ubiquitous-joint, strain-hardening/softening, bilinear
strain-hardening/softening ubiquitous-joint, double-yield, and modified Cam-clay. The
simplest constitutive model is the elastic model. However, elastic models have limited
application for studying a material, such as a concrete block, that is expected to fail
during testing. Therefore, the initial block model was built using a Mohr-Coulomb
material, which exhibits an elastic-plastic behavior. Since the steel platens in the model
were not expected to fail, an isotropic elastic material was chosen to represent the platen
behavior.
3.3.5. DILATION ANGLE
Dilatancy is a measure of the change in volume that occurs when shear stress is
applied to a material. This change is characterized by a dilation angle, ψ, which is the
ratio of volume change to shear strain. Vermeer and de Borst (1984) found 12° to be a
typical dilation angle for concrete and this value was used in the single block model.
3.3.6. TENSILE STRENGTH
By default, within FLAC, a value is calculated for tensile strength based on the
Mohr-Coulomb failure criterion as shown in figure 3.5. A value of 1837 psi was
calculated within FLAC using the following equation:
T = c / tan φ        (3.4)
where: T = tension limit
c = cohesion
φ = angle of internal friction
However, the tensile strength of concrete is substantially lower than its compressive
strength. Drysdale and Hamid (1984) record tensile strengths as low as 100 psi for
ordinary concrete, so this value was selected for the analysis.
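The calculation in equation 3.4 can be reproduced with a few lines of Python. This is a minimal sketch only: the cohesion and friction angle below are hypothetical values, chosen so that the result lands near the 1837 psi figure quoted above, since the actual property values reside in the FLAC model.

```python
import math

def tension_limit(cohesion_psi: float, friction_deg: float) -> float:
    """Mohr-Coulomb tension limit, T = c / tan(phi) (equation 3.4)."""
    return cohesion_psi / math.tan(math.radians(friction_deg))

# Hypothetical inputs: c = 1060 psi and phi = 30 degrees give T ~ 1836 psi,
# close to the 1837 psi value FLAC reported for the single block model.
print(round(tension_limit(1060.0, 30.0)))  # ~1836
```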
3.3.7. INTERFACE PROPERTIES
The coefficient of friction, cohesion, and stiffness of the interfaces between the
steel platens and concrete block have an influence on the behavior of the model and had
to be defined. Some tests conducted with CMU blocks similar to those used in the tests
have shown the coefficient of friction to be 0.3 between steel and concrete and 0.5
between two concrete blocks (Gearhart 2003). This corresponds to a friction angle of 17°
for the block-platen interfaces and 26.5° for the concrete block interfaces. Since the
other required values could not be readily measured, initial values were estimated. The
assumed values were 500 psi for cohesion and 1×10^7 psi/in for normal and shear stiffness.
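The conversion from a measured coefficient of friction to the friction angle used in the model is simply φ = arctan(μ); a short sketch verifying the two values quoted above:

```python
import math

def friction_angle_deg(mu: float) -> float:
    """Convert a coefficient of friction to a friction angle in degrees."""
    return math.degrees(math.atan(mu))

print(round(friction_angle_deg(0.3), 1))  # 16.7, reported as ~17 degrees
print(round(friction_angle_deg(0.5), 1))  # 26.6, reported as ~26.5 degrees
```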
Figure 3.8 Stress-Strain Curve for Strain-Softening Block Model in FLAC
One challenge in using a two-dimensional model to simulate behavior that occurs
in three dimensions is adjusting the strength of the material. The model created in FLAC
is a plane-strain model, in which the block is considered to have an infinite depth. The
physical block, on the other hand, has a depth of six inches. Thus, the peak stress in the
model should be higher than the peak stress in the physical block due to the effects of
confinement. The block in the model has two confined edges and only two free edges,
compared to the physical block with four free edges. The stress-strain curve in figure 3.8
shows that the block model yielded at a peak stress of approximately 4100 psi. The
average peak stress reached in the three MRS block tests was approximately 3100 psi.
Thus, the peak stress in the MRS test was about 25% lower than the peak stress on the
block in the model. The model output is shown in figure 3.9 with the data from the MRS.
The stress-strain curve produced when the model was run is shown in figure 4.2.
Following about 2.5 inches of displacement, a number of blocks along the left and right
edges of the wall had begun to separate from each other such that the model would no
longer run. However, at this point, the wall had reached a stress of more than 5000 psi,
which is substantially higher than the peak stress of the full-scale physical wall model.
Additionally, the failure mechanism of the wall in the numerical simulation was very
different from the physical test. In the numerical model, the mode of failure was similar
to the type of failure seen in a single block. Initially, shear failures appeared in the
corners of the wall and later extended toward the center as shown in figure 4.3.
However, the physical test in the Mine Roof Simulator showed the wall failing primarily
as a result of tensile cracks throughout the wall. A closer examination of the results from
several physical tests suggested that differences in block height between adjacent blocks
might have caused tensile cracking. These tensile cracks appear to have eventually
propagated throughout the wall, causing the wall to fail prematurely. One example of
this is shown in figure 4.4, where the difference in height between two adjacent blocks
seems to have created a stress concentration in the block above, generating a tensile
crack.
4.2.1. FOUR-BLOCK MODEL
In order to more closely study the phenomenon of tensile cracking brought about
by block height variations, a small test of four blocks was designed. Four standard CMU
blocks were selected and placed together as shown in figure 4.5. The dimensions of these
blocks are given in table 4.2. In order to monitor the effects of block size variations,
adjacent blocks (numbered twelve and three) were selected with a height difference of
approximately 1/16” at the front corner and 3/32” at the back corner. This physical
model was then tested in the Mine Roof Simulator by lowering the upper platen just to
the top of the upper block and raising the lower platen at a velocity of 0.1 in/min.
[Figure: four-block arrangement with block 8 on top, blocks 12 and 3 side by side in the middle course, and block 1 on the bottom]
Figure 4.5 Arrangement of Blocks in Four-Block Model
Table 4.2 Corner Measurements of Blocks in Four-Block Model
Block Corner Height (in)
Number Left - Front Right - Front Left - Rear Right - Rear
1 7 9/16 7 35/64 7 9/16 7 37/64
3 7 17/32 7 19/32 7 17/32 7 19/32
8 7 33/64 7 9/16 7 17/32 7 37/64
12 7 9/16 7 19/32 7 37/64 7 5/8
When a load was applied to the blocks, the first crack to form was in the upper
block above the interface between blocks three and twelve, as shown in figure 4.6. The
location of this initial crack does support the hypothesis. This crack then extended into
blocks twelve and one, while a crack formed in the top of block three, directly below the
corner of block eight. Finally, the rear right corners of both block twelve and block one
began to separate from the blocks as shown in figure 4.7 and the test was stopped. It is
important to observe that the highest corner on block twelve was the right rear corner
where the severe cracking and separation occurred and the highest corner on block one
was also the corner where the most severe failure in that block occurred. Observation of
these failures and the variations in block height indicated that even differences in block
height as small as 1/16” may have played a significant role in determining when and how
block failure occurred.
Figure 4.6 Post-Failure Front View of Blocks During the Four-Block Test (trace of initial crack indicated)
4.2.2.1. PHYSICAL TESTING
For this test, a larger wall was needed that would contain multiple block
interfaces. The blocks used for the second test were standard CMU blocks similar to
those used in the first test. Each block was measured at all four corners (table 4.3) and
the blocks were numbered. A wall was then constructed three blocks wide and seven
courses high in the Mine Roof Simulator, as shown in figure 4.8. The dimensions of this
wall were chosen primarily because the wall was tall enough to be easily viewed in the
MRS, but narrow enough to limit the number of blocks required for wall construction.
Since the strength of the wall was not the primary concern, the exact wall dimensions
were insignificant. Rather, the test was designed to study the failure mechanism and
interactions between blocks. This wall, containing twenty-one blocks, had a sufficient
number of interfaces to study the failure mechanism.
Figure 4.8 Front View of Wall at the Beginning of the Seven-Course Test
Table 4.3 Corner Measurements of Blocks in Seven-Course Wall
Block Corner Height (in)
Number Left - Front Right - Front Left - Rear Right - Rear
2 7 17/32 7 1/2 7 9/16 7 17/32
6 7 37/64 7 1/2 7 37/64 7 9/16
7 7 35/64 7 9/16 7 17/32 7 9/16
13 7 19/32 7 19/32 7 9/16 7 37/64
14 7 39/64 7 35/64 7 19/32 7 35/64
15 7 43/64 7 39/64 7 43/64 7 19/32
16 7 21/32 7 39/64 7 19/32 7 35/64
17 7 9/16 7 19/32 7 9/16 7 39/64
18 7 5/8 7 5/8 7 45/64 7 5/8
19 7 39/64 7 5/8 7 37/64 7 39/64
20 7 5/8 7 17/32 7 41/64 7 37/64
21 7 41/64 7 41/64 7 41/64 7 41/64
22 7 5/8 7 39/64 7 41/64 7 5/8
23 7 19/32 7 39/64 7 39/64 7 39/64
24 7 5/8 7 9/16 7 37/64 7 17/32
25 7 5/8 7 35/64 7 19/32 7 9/16
26 7 35/64 7 19/32 7 9/16 7 37/64
27 7 35/64 7 19/32 7 9/16 7 19/32
28 7 9/16 7 19/32 7 33/64 7 37/64
29 7 19/32 7 5/8 7 5/8 7 5/8
30 7 19/32 7 39/64 7 39/64 7 39/64
The test procedure followed was the same as the first test, in which the upper
platen was lowered to the height of the top of the wall and the lower platen was raised at
a rate of 0.1 in/min. The initial platen movement caused small gaps in the wall to close
and allowed the top row of blocks to come into full contact with the upper platen. Once
the load was being fully transferred through the wall, cracks began to form. The first
cracks appeared in the top of block seventeen and the top of block nineteen. The crack in
block nineteen then propagated into block twenty-one, as shown by the crack traces in
figure 4.9. As the test progressed, cracks began to form in blocks twenty-five, thirteen,
eighteen, twenty-three, and twenty-two. The wall reached its peak load of approximately
200,000 lb at a displacement of about 0.45 in. The load was maintained beyond failure to
a displacement of 1.25 in, at which point the wall began to break apart. The wall at the
point when the test was stopped is shown in figure 4.10 and the load-displacement curve
produced during the test is given in figure 4.11.
[Figure: support load (kips) versus displacement (in), NIOSH Safety Structures Testing Laboratory]
Figure 4.11 Load-Displacement Curve Resulting from the Seven-Course Test
Detailed observation of the wall as failure occurred and post-failure provided
more insight into the failure mechanisms controlling the wall behavior. First, nearly all
of the initial cracks that formed in the wall appeared above or below a vertical joint and
in most cases the heights of the blocks on either side of the joint differed by at least
1/32”. Second, most of the cracking occurred in the top half of the wall and the bottom
row of blocks was nearly unaffected by the loading. This may be due to the fact that
the bottom row of blocks was the only one resting on a perfectly flat surface, minimizing
the effects of block height variations. Finally, there was no indication of the type of shear
failures that were seen in the individual blocks and in the initial numerical wall model.
Rather, the tensile cracking seems to have caused the wall to fail before the loading
became great enough to generate shear failure.
Figure 4.17 Concentration of Vertical Stress in Lower Edge of Voronoi Model in UDEC
When the revised Voronoi wall model was run, it was confirmed that turning off
the damping successfully eliminated the problem of the vertical stress concentrating at
the lower edge of the wall. Figure 4.18 shows the vertical stress distributed throughout
the wall. The stress-strain curve associated with this model, given in figure 4.19, initially
showed an oscillating behavior. This was due to the damping being turned off. With no
damping, stresses were able to propagate freely through the model and more time was
required to reach a state of equilibrium. Since a velocity was constantly applied to
the lower platen, the model never returned to a state of equilibrium.
However, as the model ran, the trend of the stress-strain curve became similar to
the stress-strain curves observed in previous models and in the physical tests, as seen in
figure 4.20. When compared to the stress-strain curves from the first two wall models, as
in figure 4.21, the curve for the Voronoi wall showed a substantially lower average peak
stress and a lower stiffness. This behavior was expected since the wall was filled with
cracks. As failure occurred along these cracks, the strength of the wall decreased, leading
to premature failure.
CHAPTER 5 CONCLUSIONS AND FURTHER WORK
Through testing of physical models and development of numerical models, many
observations have been made regarding the behavior and failure of single concrete blocks
and a variety of concrete block walls. From these observations, conclusions have been
drawn that have led to an increased understanding of the way in which concrete block
stoppings behave when subjected to vertical loading. The observations include the
following:
1. When a single CMU block was subjected to vertical loading, failure occurred at
approximately 3100 psi and the failure mechanism consisted of shear failure
zones developing in the block corners and propagating toward the center.
2. A strain-softening single block model in FLAC demonstrated the same failure
mechanism as the physical tests and had a peak stress of approximately 4100 psi.
3. A full-scale wall constructed of CMU blocks and subjected to vertical loading
failed at a peak stress of approximately 650 psi and failure was due primarily to
tensile cracks, most of which began at vertical interfaces between blocks.
4. In a four-block physical model the most severe damage to each block occurred at
the highest corner and the initial cracking occurred immediately above the vertical
joint between two adjacent blocks with a height difference of 1/16”.
5. In a seven-course physical model failure in the wall generally began above or
below vertical joints separating blocks with a height difference of at least 1/32”
before propagating into neighboring blocks. Additionally, most of the failure
occurred in the upper half of the wall.
6. A wall model with uniformly sized blocks exhibited a failure mechanism and
strength similar to that seen in a single block during physical testing.
7. A solid wall model with the dimensions of the seven-course physical model failed
at a peak stress of about 1100 psi, while the same wall with vertical and
horizontal joints (such as those between blocks) failed at a peak stress of
approximately 650 psi, a decrease in strength of about 40%.
8. The cell space mapping configuration was used to simulate non-uniform blocks
within the wall and doing so reduced the peak stress of the wall from 600 psi to
250 psi, a decrease in strength of more than 50%. Localized material failures
were observed scattered throughout the wall initially, but ultimately shear failures
appeared in the wall corners and propagated toward the center, similar to the
failure mechanism seen in single blocks.
9. The Voronoi tessellation generator was used to create a random pattern of cracks
such that failure along these cracks simulated tensile cracking within the concrete.
Incorporating Voronoi cracks into the wall substantially reduced the strength and
stiffness of the wall, and the model was severely overdamped under the default configuration.
10. When the Voronoi generator and cell space mapping configuration were
combined, the Voronoi cracks appeared to have no (or little) effect on model
behavior.
From analysis of these observations, the following conclusions can be made about the
behavior and failure mechanism of concrete block stoppings:
1. The strain-softening single block model is representative of the behavior of a
single block tested in the Mine Roof Simulator. The failure mechanism of the
model is nearly identical to that which was observed in the physical test and the
peak strength observed in the physical test is approximately twenty-five percent lower
than that of the model. This is explained by the increase in confinement
seen in the model, which is a two-dimensional representation of a three-
dimensional reality.
2. Testing of the four-block physical model in the Mine Roof Simulator indicated
that even very small height differences between adjacent blocks can lead to a
stress concentration and localized failure.
3. The hypothesized failure mechanism was further confirmed by the failure seen in
the seven-course wall test. The majority of the failure may have occurred in the
upper half of the wall because the lower rows of blocks were seated on a perfectly
flat surface, minimizing the effects of block height variations.
4. The initial wall model, which behaved much like a single block, further supports
the failure mechanism hypothesis. Since the model was constructed with
perfectly uniform blocks set in perfect alignment, there was no mechanism
present to generate stress concentrations within the wall or to initiate the
formation of cracks within the wall. Rather, the only reduction in strength of the
wall (compared to a solid block of comparable size) would be due to the presence
of joints between blocks. This was confirmed in a comparison between the solid
wall and the block wall.
5. Incorporating non-uniformly sized blocks into the wall caused the stresses in the
wall to concentrate in several locations, resulting in localized failure early in the
model history. However, there was no mechanism in the wall for these stress
concentrations to be relieved and the stress continued to build in the wall,
eventually causing failure in a manner similar to that of a block.
6. Incorporating the Voronoi generator did result in simulating tensile cracking
within the wall despite a larger than expected decrease in both peak stress and
stiffness and problems with damping.
7. The problems encountered in combining the Voronoi generator and the cell space
mapping configuration may have been due to software limitations as both
features, particularly the cell space mapping configuration, have limited uses and
may have never been used in combination. Since it is not possible, at this time, to
model variations in block size and tensile cracking simultaneously, it must be
inferred that combining both aspects of the failure mechanism into a single model
would result in a peak stress even further reduced.
The primary purpose of this work was to study the behavior, and particularly the
failure mechanism, of concrete block stoppings. Although a fully-functioning model
capable of simulating the entire failure mechanism and producing a matching stress-strain
curve was not created, a greater understanding of the failure mechanism was obtained.
The behavior and failure mechanism of a single block was previously understood because
of the amount of work that has been done studying concrete, primarily in the area of civil
engineering applications. However, the importance of variations in block size and shape
in unmortared concrete block walls has not been previously acknowledged. This report
has shown the amount of variation that exists in the type of blocks typically used in
stopping construction and the dramatic effects these variations can have on the
performance of stoppings.
Although the models presented here are not suitable for use as design tools, an
increased understanding of the way in which stoppings fail when subjected to vertical
loading is crucial in the design of better concrete block stoppings. There are many
possibilities for future work that can evolve from this initial study. Software
improvements or the use of more complex modeling tools may lead to a model capable of
simulating both the stress concentrations created by non-uniform blocks and cracking
along pre-existing bonds that occurs as a result of those stress concentrations.
Additionally, models could be created to represent other stopping construction materials
such as deformable layers or blocks that are specially designed to be lightweight.
Ultimately, future work should focus on studying the behavior of stoppings and stopping
materials with the goal of developing empirical relationships among stopping design
parameters. This will provide a design tool that is much better than the current trial and
error methods used in stopping design.
LISA M. BURKE
Education
• B.S. Mining Engineering, May 2001, Virginia Polytechnic Institute and State
University
• M.S. Mining Engineering, May 2003, Virginia Polytechnic Institute & State
University
Experience
• National Institute for Occupational Safety and Health (NIOSH), Pittsburgh, PA,
June 2002-Present
o SCEP Mining Engineer
o Numerical modeling of concrete block stoppings
• Virginia Polytechnic Institute and State University, Blacksburg, VA, May 2001-
May 2002
o Graduate Research Assistant
o Develop system for classification of microseismic events based on
magnitude
• National Institute for Occupational Safety and Health (NIOSH), Pittsburgh, PA,
May 2000-August 2000
o Mining Engineering Technician
o Processing and analysis of microseismic data
• Martin Marietta Materials, Charlotte, NC, May 1999-August 1999
o Mining Engineering Intern
o Environmental permitting and monitoring of Charlotte district quarries
• Cyprus (RAG) Emerald Resources, Waynesburg, PA, January 1999-May 1999
o Mining Engineering Co-op Student
o Mine mapping, surface structure design work, gas well location, noise
surveying
• Virginia Polytechnic Institute and State University, Blacksburg, VA, August
1998-December 1998
o Women in Engineering Support Team (WEST) Leader
o Mentoring freshman engineering students
• Cyprus (RAG) Emerald Resources, Waynesburg, PA, May 1997-December 1997
o Mine mapping, creating technical drawings, ventilation surveying, ground
control work
Professional Associations
• The Society for Mining, Metallurgy, and Exploration (SME)
• Society of Women Engineers (SWE)
OPTIMIZATION OF INTEGRATED COAL
CLEANING AND BLENDING SYSTEMS
C. Lance Wimmer
ABSTRACT
The fundamental requirement for a coal preparation plant is to transform low value run-
of-mine (ROM) material into high value marketable products. The significant aspect relative to
the plant is that any gain in efficiency flows almost entirely to the “bottom line” for the
operation. The incremental quality concept has gained wide acceptance as the best method to
optimize the overall efficiency of the various cleaning circuits. Simply stated, the concept
requires that all the cleaning circuits operate as near as possible to the same incremental quality.
To ensure optimal efficiency, a plant that receives ROM feed from multiple sources must
develop a strategy to operate at the same incremental quality, which yields wide ranges in
product qualities from the individual ROM coals. In order to provide products that meet contract
specifications, clean coal stockpiles can be utilized to accept coals with various qualities, such as
“premium,” “low,” and “filler” qualities, with shipments formulated from the stockpiles to meet
product specifications. A more favorable alternative is raw coal blending to produce the specified
clean coal qualities. This study will review the incremental quality concept and present case
studies in applying the concept to meet product specifications.
1.0 INTRODUCTION
1.1 Preamble
Coal preparation or processing is the process of cleaning raw coal generated by surface or
underground mining. Coal preparation has evolved throughout the decades to what it is today,
based on the needs of the industry. Currently, preparation plants can be run and monitored
remotely and have evolved into a highly efficient and productive backbone of the coal industry.
Here in Central Appalachia, the coal that is mined varies drastically in height and quality. This
means there are some mines that have the ability to use longwall mining method to extract coal
while the majority of the companies have to use room and pillar methodology to mine low/thin
seams. For the companies that mined the low seams, the quality has multiple constraints that
determine its value, such as ash, sulfur, BTU quality (steam or metallurgical classification), etc.
Most of the companies have to have a wide variety of mines at their disposal to meet their
contract specifications. With multiple mines in multiple seams, coal preparation is a key factor
for the survival of these companies.
Coal preparation has evolved from manual selective mining of only the “pure” coal
particles during the mining process to sophisticated high tech solids-solids and liquid-solids
separation processes. The first mining efforts focused on extracting a saleable product from the
mine using picks and shovels to hand load only the coal seam. The use of explosives and
mechanized loading increased production rates from the mines, but also delivered ROM coals
that contained unsaleable material. The coarse particles were manually rejected by “refuse
pickers” and, depending on the quality, the small amount of fine particles was blended with the
coarse “clean” coal or rejected to waste piles. As mechanized mining developed to further
increase production rates, the portion of fine particles in the ROM also increased. Coal
preparation technology had to progress to continually meet the demands to process the increased
portion of the fine ROM coal. Today, coal preparation plants routinely process all the ROM coal
from 6-8” top size to zero-size particles in multiple parallel circuits based on the size of the
particles.
Due to economic considerations, the optimization of coal preparation operations is
essential to ensuring the viability of coal mining operations. The goal of the optimization should
be to lower production costs and increase profits. In particular, coal blend optimization (on the
clean or raw coal side) is a primary focus area that coal companies must be aware of. An issue
that affects Central Appalachia is that it is hard for some companies to have adequate blending
facilities (stockpile areas) due to the steep terrain. In order to provide products that meet contract
specifications, clean coal stockpiles can be utilized to accept coals with various qualities, such as
“premium,” “low,” and “filler” qualities, with shipments formulated from the stockpiles to meet
product specifications. Also, there is a more favorable alternative method, raw coal blending, to
produce the specified clean coal qualities for the specific shipment. These alternatives must each
be explored to identify the most appropriate strategy of blend optimization for each mine site
involved in coal production.
1.2 Literature Review
1.2.1 Plant Optimization
Coal preparation is as critical to the coal industry as the entire coal extraction process.
Since the inception of the modern coal washing facility in the 1940-50s, scientists, researchers,
coal companies, manufacturing companies, etc., have been looking to improve all aspects of this
process. In the past few decades, more and more time and money have been dedicated to the
further advancement of coal optimization abilities. The previous research in the optimization
field has included incremental product quality, coal blending optimization, equipment modeling
and improvements, computer simulation software, use of cost value/control strategies, etc.
Efficient processing performance for a typical preparation plant depends on recovering
all the valuable particles from the ROM and rejecting material that fails to support the cleaned
product specifications. Early ideas for producing products within specifications (especially for
the ash content) centered on operating each parallel processing circuit to provide the specified
product ash, thus ensuring that the final product would meet the specification. This concept was
simple to understand and easy to employ. Alternate operating philosophies were later introduced
which illustrated that improved recovery can be achieved by adjusting the specific gravity (SG)
separating cut-points to nearly the same value for all the circuits (Dell, 1956; Abbott, 1982). This
concept was based on graphical solutions developed from extensive washability (float/sink)
analyses. Even so, most plants defaulted to operating under the simple concept of merely
producing the specified product qualities in each individual circuit.
1.2.2 Incremental Quality Concept
By definition, the incremental quality concept is operating all cleaning circuits as near as
possible to the same incremental quality cut-point, which will achieve maximum plant yield.
This concept has been widely accepted by industry because it provides useful data that aids in the
design, operation and control strategies, and overall maximum plant optimization. Cebeci used
the incremental ash concept to achieve maximum plant yield in the Zonguldak region. Their
research involved a Drewboy heavy medium (HM) bath and HM cyclones to determine the
desired cut-points at specific target ash contents. The results of the study show that this approach
increased the plant yield from 24% to 30.71% at a target ash of 9.5%, with yield increasing
further to 33.41% at a target ash of 11.61%. Gupta agrees that the
incremental quality concept will achieve maximum plant yield, but only for one specific quality
constraint. The paper suggests that when there are multiple quality constraints, the incremental
concept can become very complex and give multiple flawed solutions, using ash
and sulfur as example constraints. The theoretical plant yield using incremental ash will be
different when compared to the plant yield using incremental sulfur. A solution to multiple
constraints is to combine them into a new composite constraint to which the incremental
approach can then be applied; for example, ash and sulfur can be combined into sulfur
dioxide emissions (Gupta, 2005). In a study using the incremental quality concept, Mohanta
developed a spreadsheet-based program that takes into consideration
equipment imperfections, feed quality variance, and price structure. Their results showed that
this method can maximize the value of clean coal, while aiding in selecting key operating
parameters and accounting for the imperfection of the washing equipment for various coal feeds.
1.2.3 Numerical Optimization
Other methods using simulation software include developing preparation plant flowsheets
and generating associated equipment models. Numerous companies and researchers have used
flowsheet optimization in the past 40 years. The general thought has been to break down the coal
preparation process into blending, sizing, and separation in a steady state scenario. The goal of
this method is to achieve the maximum yield of clean coal at a specific constraint, such as ash
content, by optimizing the separation equipment’s cut-points (Rong, 1992). This method can be
applied to existing plants and aid in the construction of new state-of-the-art plants. The flowsheet
simulation is dependent on the characteristics of the feed coal and contract specification; this will
determine the necessary combination of crushers, screens, washers, etc. that will achieve the
desired outcome (Abara, 1979). The work of flowsheet optimization is an ongoing task in the
coal industry; as cleaning equipment and processes are created and modified, additional models
are needed and will continue to be needed in future work for accurate simulations.
1.2.4 Value/Cost Optimization
Some researchers have taken the approach to optimize coal preparation through the use of
value and/or cost strategies. A new method to the coal industry is the Internal Value Chain
approach, which was originally proposed by Michael Porter of Harvard Business School in
1985. In layman's terms, the approach means that a company acquires raw materials and uses
those materials to produce something of value or usefulness; the more value that is
created, the more profitable the company is. The approach is divided into two activity groups, the
primary and support groups. The primary group involves activities that directly relate to the
creation, sales, maintenance, and service of the product while the support group’s activities
support the primary groups, such as procurement or human resources. An and Zhang took this
concept and applied it to the XSMD Taiyum coal preparation facility, stating how significant it
was to integrate the internal value of coal preparation into the entire coal industry. They performed
their research using a quantitative analysis method which was a combination of the analytic
hierarchy process (AHP) and fuzzy linear programming (FLP), along with a time factor. From
their findings, they suggested areas of improvement to the plant manager based upon value per
activity on a cost basis, such as a need to look at coal blending sales (variety of the coal and its uses)
and heavy medium flotation. Using this method of value chain optimization, an operation can
control costs on a total cost-to-value basis, resulting in
maximum profits (An, 2013).
1.2.5 Chance Constrained Programming
Shih performed a study that focused on coal blending uncertainty and variability of coals,
referring to sulfur content, ash content, and heating value. In this study, they used a Chance
Constrained Programming (CCP) technique for coal blending decisions, which takes into
consideration the variability inherent in coal quality. The purpose of the study was to minimize
the expected value and standard deviation of blending costs and sulfur emissions, as well as to determine
what the associated tradeoffs are. One case studied involved the tradeoff between expectations
and standard deviation of cost. This case stated that an operator might be willing to give up
certain cost saving measures to reduce the standard deviation of operating conditions to achieve a
more steady operating plant. Another case study focused on expected and standard deviation of
sulfur emissions costs, stating that a 10% reduction in sulfur emissions results in a 19% cost
increase in coal blending (Shih, 1995). The results from the CCP work show that when there are
competing objectives, such as cost savings versus emissions, this technique can quantify
reliability and minimize standard deviation concerns to aid in plant decision making.
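Shih's formulation is not reproduced here, but the core idea of a chance constraint can be sketched. Assuming the blended sulfur content is normally distributed (an assumption of this sketch, not necessarily of the original study), the probabilistic constraint Pr(S ≤ s_max) ≥ α has the deterministic equivalent mean + z_α·sigma ≤ s_max, which is what a CCP solver ultimately enforces:

```python
# Minimal sketch of a chance constraint of the kind used in CCP, assuming
# the blended sulfur content is normally distributed. The probabilistic
# constraint Pr(S <= s_max) >= alpha is replaced by its deterministic
# equivalent mean + z_alpha * sigma <= s_max. All numbers are hypothetical.
from statistics import NormalDist

def chance_constraint_ok(mean_s: float, sigma_s: float,
                         s_max: float, alpha: float) -> bool:
    z_alpha = NormalDist().inv_cdf(alpha)      # e.g. 1.645 for alpha = 0.95
    return mean_s + z_alpha * sigma_s <= s_max

# Blend averaging 1.0% sulfur with sigma 0.15%, limit 1.3%, 95% reliability:
print(chance_constraint_ok(1.0, 0.15, 1.3, 0.95))   # True (1.25 <= 1.3)
```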
1.2.6 Genetic Algorithm Optimization
Simulation software has been around for several decades. In the past few years this
software has evolved through the use of genetic algorithms (GA), but has had issues and
problems along the way. Xi-Jin performed a study to address the problems associated with GA in
coal blending optimization by using adaptive simulated annealing genetic algorithms (ASAGA).
They based their study upon three key optimization parameters: to find the minimum percent of
high quality coal and the maximum percent of low quality coal, to find the lowest cost and
largest profit, and to determine the optimal ratio of cost to performance. Their coal blending
model encompasses two scenarios: cleaning the raw coal to below the target ash and then
performing blending activities, or cleaning the raw coal to meet the contract
target ash specification directly. In addition to the coal blending model, they used a scheduling model,
which was responsible for scheduling which coal would be cleaned according to maximum economic
benefit, assuming all cleaning costs are similar. From their results, ASAGA can achieve
significantly better results by using economic benefits as the primary objective to establish the
coal blending parameters, when compared to other GA methods.
1.2.7 Micro-Price Optimization
The “micro-price” optimization method is another approach for economic optimization of
plant operations (Luttrell et al., 2014). This method by definition assigns unit values (positive,
zero, or negative) to each individual type of product that passes through the operation. This
method treats the coal industry the same as any other commercial business, e.g., retail
sales or manufacturing. Based on this concept, Luttrell developed five key questions that coal
producers must answer. The first question is “What are we trying to sell?” This question pertains
to the material generated through the mining process. Pure coal should have the highest value;
middling particles will have a lesser value depending on their relative coal and rock content; while pure
rock will have no value and will incur a cost penalty. The next question is “Who are our
customers?” This refers to which market the coal will be sold on, metallurgical or steam market.
This question may seem simple but has implications on meeting contract specifications, such as
heating value on the steam market or ash content on the metallurgical. The next question is
“What will our customers pay?” This refers to the price per ton and associated premiums for quality
above contract specifications and penalties for excess ash content. The final question is “What inventory is
available?” This refers to coal washability data of inventory and sorting by density versus
quality. The inventory represented by good quality should be separated and sold to customers
looking for high quality coals, while the lesser quality material should be separated at the zero-worth
value and sold on the low quality markets (Luttrell, 2013).
1.3 Objectives
The primary objective of this study was to identify practical systems for coal blending
that utilize the incremental quality concept to maximize clean coal production and profitability.
To this end, several series of mathematical simulations were conducted to quantify the impact of
different control approaches for maintaining a consistent product quality. Approaches
evaluated included (i) operation of cleaning operations at a constant clean coal quality via the
manipulation of plant density cut-points, (ii) operation of cleaning operations at a constant
incremental quality after blending of raw coal products to maintain a constant clean coal quality,
and (iii) operation of cleaning operations at a constant incremental quality followed by blending
of clean coal products to maintain a constant clean coal quality. The economic drivers in each
case were examined in addition to site-specific advantages and disadvantages of each approach.
2.0 BLEND OPTIMIZATION
2.1 Optimization Theory
Coal preparation, for the most part over the years, has been considered a “necessary evil”
by coal company management. In many cases plants were “pushed” to process maximum tons
without regard to the separating efficiency or the loss of saleable coal to the reject. Within the
last two or three decades, the preparation management at several major coal companies have
“educated” operations management to the fact that:
• almost all the benefits for process improvement translate directly to the “bottom line”
• major improvements can be accomplished with very little investment
• a small percent in improvement (such as additional saleable material) provides a huge
return due to the large volume of material processed.
The basis for the new era in coal preparation is the philosophies introduced by Dell
(1956) and Abbott (1982), wherein the best performance for parallel circuits is achieved by operating
at the same specific gravity (SG) cut-point. Although the earlier graphical solutions based on
extensive washability data indicated that the cut-points should be very near the same, the current
practice is to control each parallel circuit to separate at the same SG cut-point. This concept, known
generally as the incremental quality concept, has been refined by Luttrell and Stanley (Luttrell et
al., 2000) and adopted by the major coal companies in the U.S. and promulgated through
management workshops and operators training sessions.
To illustrate the incremental quality concept, consider two dissimilar raw coals as shown
in Figure 1 using simple blocks to represent the ash content of various particles. In order to
produce the same specified ash quality for each of the raw coals, such as 25% ash, the particles
are separated at different individual (incremental) particle values, as shown in Figure 1(a).
The overall yield resulting for this case is 64%. Notice that in Figure 1(a), particles with 50% ash
content are recovered for Coal “A” but rejected for Coal “B”. Figure 1(b) illustrates separating
the raw coals at the same SG cut-point. Although the product ash content is different for each
coal at the same cut-point, the combined products deliver the specified ash content. The major
difference is the improvement in the number of particles recovered when separating at the same
cut-point. The simple act of separating at the same cut-point, which is at the same incremental
quality, increased the yield from 64% to 72%. This improvement translates into a major increase
in revenue when considering the tonnage processed in many large preparation plants and the cost
to achieve the improvement—a simple change in operating philosophy.
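The yield gain from a common cut-point can be sketched numerically. The particle ash values below are hypothetical stand-ins for the blocks in Figure 1; accepting particles in order of increasing ash until the running product ash reaches the target is equivalent to applying an incremental-ash cut-point.

```python
# Hedged sketch of the incremental quality principle with hypothetical
# particle ash values for two dissimilar raw coals. greedy_yield() accepts
# particles in increasing-ash order while the running average ash stays at
# or below the target, which is equivalent to an incremental-ash cut-point.
def greedy_yield(particles, target_ash):
    """Number of particles recoverable at a combined ash <= target_ash."""
    total, n = 0.0, 0
    for a in sorted(particles):
        if (total + a) / (n + 1) > target_ash:
            break
        total, n = total + a, n + 1
    return n

coal_a = [2, 5, 10, 20, 30, 50, 70, 90]   # hypothetical particle ash, %
coal_b = [1, 3, 8, 15, 40, 60, 80, 95]
target = 25.0

per_coal = greedy_yield(coal_a, target) + greedy_yield(coal_b, target)
combined = greedy_yield(coal_a + coal_b, target)
print(per_coal, combined)   # 12 vs 13 of 16 particles recovered
```

Forcing each coal to meet the specification individually uses two different cut-points and recovers fewer particles than one cut-point applied to the combined feed, which is the direction of the 64% to 72% improvement described above.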
The incremental concept is applicable for a single plant with parallel processing circuits
treating different size fractions of the ROM coal. A collection of washability analyses covering
various size fractions indicates a linear correlation with respect to incremental ash content versus
the inverse of the SG (1/SG), as shown in Figure 2. A typical plant will incorporate four parallel
cleaning circuits:
• coarse circuit, >12 mm, dense medium bath
• small circuit, 12 mm x 1 mm, dense medium cyclone
• fine circuit, 1.00 mm x 0.15 mm, spirals or reflux classifier
• ultra-fine circuit, <0.15 mm, flotation
The coarse and small circuits treat the major portion of the ROM and, therefore, must be
operated at the same SG cut-point. The fine circuit operates at a slightly higher SG cut-point,
which compensates for a lower processing efficiency, to yield the same incremental quality. A
well maintained flotation circuit can typically be operated at maximum performance without
reaching the same incremental quality cut-point of the other circuits.
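Under the linear correlation noted above, the incremental ash can be written as ash_inc = a + b·(1/SG), and the relation can be inverted to find the SG cut-point that delivers a target incremental quality. The coefficients below are hypothetical; in practice they would be fitted to the washability data in Figure 2.

```python
# Sketch of the reported linear relation between incremental ash and 1/SG,
# ash_inc = a + b * (1/SG), inverted to find the SG cut-point that delivers
# a target incremental quality. The coefficients a and b are hypothetical;
# in practice they would be fitted to washability data such as Figure 2.
def sg_for_incremental_ash(a: float, b: float, ash_inc: float) -> float:
    """Invert ash_inc = a + b/SG for the SG cut-point."""
    return b / (ash_inc - a)

a, b = 120.0, -140.0   # hypothetical fit (incremental ash rises with SG)
print(round(sg_for_incremental_ash(a, b, 35.0), 2))   # ~1.65 SG
```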
Furthermore, the collection of washability analyses noted above also indicates that the
same linear correlation with respect to incremental ash content versus the inverse of the SG
(1/SG) is valid for various coal sources, also shown in Figure 2. This correlation expands the
application of the incremental quality concept to include different plants that are processing
ROM for the same product quality. For large coal companies with a wide range of ROM sources,
this concept provides opportunities to optimize individual plant production by establishing
blending programs to meet the specified quality at the final shipping point, or at any point
between the plants and the shipping point.
ROM coals feeding a coal preparation plant are inherently variable due to the changing
conditions in the mines. A washability analysis for a ROM coal is only an indication of the raw
quality for the feed to the plant at the time the sample was taken. The variability may be the
result of a change in the conditions and mining areas for a single mine or receiving ROM coal
from multiple sources, or a combination of both. The plant, in turn, must accommodate the
variability and transform the ROM coal into a product at the specified quality with a minimum of
variability. State-of-the-art plant design, equipment, and automated process control systems have
the capability to deal with the ever-changing ROM feeds.
One method employed by plant management over the years was to continually adjust the
SG cut-point in the processing circuits to always produce the specified product qualities. If the
product ash content varied above the specified limits, the SG cut-points were lowered to reduce
the ash content, and the SG cut-points were raised to increase the ash content. As illustrated in
Figure 1, this operating method would recover high ash incremental material at times while
rejecting the same material at other times when the feed conditions changed.
• operate at a constant SG cut-point and direct the clean product to either a low ash
stockpile or a high ash stockpile, then blend from the two stockpiles to deliver the
specified clean coal ash content
• operate at a constant SG cut-point and blend the plant feed from a high ash stockpile and
a low ash stockpile to deliver the specified clean coal ash content.
An illustration of the preparation plant and stockpile configurations for the three strategies is
shown in Figure 3. The strategy incorporating the simple philosophy of controlling each cleaning
circuit to deliver the required product ash is shown in Figure 3(a). This configuration requires
only one raw and one clean coal stockpile. An ash analyzer monitors the plant product and
adjusts the cut-point SG for the cleaning circuits to ensure each circuit produces the required product
ash content.
The second strategy, shown in Figure 3(b), depicts processing the ROM from a single
stockpile, operating the plant at a constant cut-point SG, segregating the plant product into two
clean coal stockpiles (based on ash content), and blending from the stockpiles to ship the
required product ash content. An ash analyzer is required to monitor the plant product in order to
segregate the clean coal between the high and low ash stockpiles. A second analyzer is required
to blend the high and low ash coals to the required product ash content during shipping.
The third strategy, segregating the ROM into two stockpiles (based on ash content of the
clean coal), is shown in Figure 3(c). The plant processes the raw coal at a constant cut-point SG
and the feed is blended from the ROM piles to produce a plant product with the required product
ash content. The ROM coals must be characterized and segregated (typically by source) between
the two raw coal stockpiles. An ash analyzer is required to monitor the plant product and to
adjust the portions from the ROM stockpiles.
The simulations provide a simple example of how adopting the incremental quality
concept can impact plant clean coal production and quality specifications. In actual practice,
employing the concept to operational situations would require a thorough review of the ROM
sources, the operating capability of the plant (including areas for stockpiles), and collaboration
with the company marketing group.
2.2.2 ROM Feed Source
A Microsoft Excel worksheet was developed to simulate washability data that would
represent a variable run-of-mine (ROM) feed source to a plant and operating data for 90 shifts (3
shifts per day for 30 days). The data includes plant ROM feed tons, ROM ash content, clean coal
tons, clean coal ash, and plant yield. The basis for the simulated washability was a double
random sinusoidal variation in the proportions of coal and rock present in each SG fraction. The
periods for the sinusoidal variations were different, which provided different simulation
parameters and thus a different washability analysis for each of the 90 shifts. Plots of the
ROM feed ash content and the plant yield at 7.5% clean coal ash for
each shift are shown in Figure 4.
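A minimal sketch of this kind of washability generator, with hypothetical ash contents and sinusoidal periods standing in for those in the actual worksheet:

```python
# A minimal sketch of the washability generator described above: the mass
# proportion of each SG fraction varies as the sum of two sinusoids with
# different periods, giving a different washability for every shift. The
# fractions, ash contents, and periods are hypothetical stand-ins for the
# values in the actual Excel worksheet.
import math

FRACTION_ASH = [5, 12, 22, 35, 55, 75]   # ash of each SG fraction, %

def shift_washability(shift: int):
    """Return normalized (mass fraction, ash %) pairs for one shift."""
    rows = []
    for i, ash in enumerate(FRACTION_ASH):
        frac = (0.15
                + 0.05 * math.sin(2 * math.pi * shift / 17 + i)
                + 0.04 * math.sin(2 * math.pi * shift / 29 + 2 * i))
        rows.append((frac, ash))
    total = sum(f for f, _ in rows)
    return [(f / total, a) for f, a in rows]    # fractions sum to 1.0

for shift in (1, 45, 90):                       # 90 shifts in the study
    feed = shift_washability(shift)
    print(shift, round(sum(f * a for f, a in feed), 2))  # feed ash, %
```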
2.3 Results and Discussion
2.3.1 Overview
The simulations provided an easy to understand example of three strategies that are used
for coal preparation. The first strategy, operating all the circuits in a plant to produce a constant
specified product ash content by adjusting the SG cut-point in each cleaning circuit, has been
widely used, because it is simple to understand (if each circuit is producing the specified product
ash, then the total product will meet specification) and easy to institute (if the product ash varies
above the specified value, lower the SG cut-point, and vice versa). The second and third
strategies incorporate the incremental quality concept to operate all the circuits in a plant at the
same SG cut-point, which, in turn, will recover (and reject) the same incremental ash in each of
the circuits. The SG cut-point is set and maintained at a value that produces the specified clean
coal ash content. Table 1 shows the results of the simulations for the three operating strategies,
which are discussed in the following sections.
2.3.2 Optimization Strategy I – Plant SG Control
The simulation of continually adjusting the SG cut-point to produce the specified clean
coal ash content provided 144,511 tons of clean coal from 360,000 tons of ROM feed at 40.14%
yield, as shown in Table 1. The average SG cut-point was 1.70 SG, ranging from 1.40 to 2.16
SG. The high SG cut-point indicates that very high incremental ash material was required at
times to produce the specified clean coal ash content. The low SG cut-point indicates that
relatively low incremental ash material was rejected at times to produce the specified clean coal
ash content.
The strategy of plant SG control clearly illustrates that the clean coal shipped contains
both high ash components and low ash components, and that both components are recovered or
rejected depending on the ROM washability for each shift. Thus, non-valuable high ash material
was shipped while valuable low ash material was rejected during the simulated 90 shifts. Under
this strategy, the clean coal can be loaded as processed for any size lot. Storage area is required
to handle the clean coal accumulation between shipments.
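The control action in this strategy amounts to solving, shift by shift, for the SG cut-point at which the cumulative float product meets the 7.5% ash target. The sketch below uses a hypothetical washability table and a bisection search in place of the plant control loop; with discrete SG fractions the achievable ash is stepwise, whereas the actual worksheet would interpolate within fractions.

```python
# Sketch of Strategy I: each shift, the SG cut-point is adjusted until the
# cumulative float product meets the 7.5% clean coal ash target. The
# washability table (mass fraction and ash of each SG fraction) is
# hypothetical, and bisection stands in for the plant control loop.
WASHABILITY = [  # (SG fraction midpoint, mass fraction, ash %)
    (1.30, 0.25, 4.0), (1.40, 0.10, 10.0), (1.50, 0.08, 20.0),
    (1.60, 0.07, 32.0), (1.80, 0.10, 55.0), (2.00, 0.40, 82.0),
]

def float_product(sg_cut):
    """Cumulative (ash %, yield fraction) of material floating at sg_cut."""
    kept = [(m, a) for sg, m, a in WASHABILITY if sg <= sg_cut]
    y = sum(m for m, _ in kept)
    ash = sum(m * a for m, a in kept) / y
    return ash, y

def cutpoint_for_target(target_ash=7.5, lo=1.30, hi=2.00):
    """Bisect on the SG cut-point; product ash rises with the cut-point."""
    for _ in range(40):
        mid = (lo + hi) / 2
        ash, _ = float_product(mid)
        if ash < target_ash:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

sg = cutpoint_for_target()
print(round(sg, 3))  # settles at the fraction boundary nearest the target
```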
2.3.3 Optimization Strategy II – Clean Coal Blend Control
The results for the clean coal blending strategy are shown in Table 1. The simulation
provided 148,285 tons of clean coal from the 360,000 tons of ROM feed at 41.19% yield. The
simple act of maintaining a constant SG cut-point for the entire 90-shift period improved the
yield from 40.14% to 41.19%, a 1.05 percentage point increase, along with a 2.61% increase in organic efficiency.
Under this strategy, clean coal storage area must be provided to accumulate the
proportions required in each stockpile to ship the specified tons for each lot. The reclaim system
must be capable of continually varying the proportion from each stockpile. An online ash
analyzer monitoring the ash content at the loading point would provide the optimal control for
blending from the stockpiles for the specified clean coal ash.
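The blending step in this strategy reduces to the two-component lever rule: the fraction drawn from the low ash stockpile is (high - target)/(high - low). The stockpile ash contents below are hypothetical.

```python
# Sketch of the clean coal blending step: the reclaim system varies the
# proportion drawn from each stockpile so the shipped blend meets the
# contract ash. The stockpile ash contents below are hypothetical.
def low_ash_fraction(low_ash: float, high_ash: float, target: float) -> float:
    """Mass fraction to draw from the low ash stockpile (lever rule)."""
    return (high_ash - target) / (high_ash - low_ash)

x = low_ash_fraction(low_ash=5.8, high_ash=9.4, target=7.5)
print(round(x, 3))   # 0.528: about 53% low ash coal, 47% high ash coal
```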
2.3.4 Optimization Strategy III – Raw Coal Blend Control
The results for the raw coal blending strategy are shown in Table 1. As expected, the
simulation provided the same 148,285 tons of clean coal from the 360,000 tons of ROM feed at
41.19% yield. The same improvement was gained with a simple change in the operating
philosophy.
The most significant advantages for blending raw coal for the plant feed, as illustrated in
Figure 6 and detailed in Table 1, are the reduction in the range of ash content in the plant feed
and the reduction in the range of yield for the coals processed. Operating under the first two
strategies required the plant to process raw coals with a wide range in ash content (33.73 to
57.63%). Blending the raw coals for the plant feed provided a more consistent ash content in the
feed (44.84 to 49.89%). The yields for the first and second strategies ranged from 29.38 to
48.76% and 27.79 to 58.11%, respectively. The range of yields resulting from blending the raw
coals for the plant feed was 37.84 to 45.74%. Operating a plant with more consistent feed
conditions and product yields reduces large swings in the material flows throughout the plant,
which at times may require de-rating of the plant capacity to accommodate the high flow rates
and equipment loading.
Under this strategy, raw coal storage area must be provided to allow the segregation of
the ROM as it is delivered and the reclaim system must be capable of continually varying the
proportion from each stockpile. An online ash analyzer to monitor the plant clean coal product
would provide the optimal control for blending from the ROM stockpiles for the specified clean
coal ash.
3.0 SUMMARY AND CONCLUSIONS
Coal preparation, considered to be a “necessary evil,” has evolved into a high tech and
integral part of a mining operation. A modern preparation plant transforms low value run-of-
mine (ROM) material into high value marketable products. Although the individual processing
components have been designed to function very efficiently, the overall plant efficiency can be
severely affected by the operating strategy. The significant aspect relative to the total plant is that
any gain in efficiency flows almost entirely to the “bottom line” for the operation. The
incremental quality concept has gained wide acceptance as the best method to optimize the
overall efficiency of the various cleaning circuits. Simply stated, the concept requires that all the
cleaning circuits operate as near as possible to the same incremental quality. To demonstrate the
impact of operating a plant under the incremental quality concept, simulations were developed to
compare three operating strategies.
The first strategy was based on the premise of producing the specified product quality
from all the circuits by adjusting the SG cut-points for the circuits, thus ensuring the final
product meets the specification. The second and third strategies employ the incremental quality
concept (constant SG cut-point) and utilized raw coal or clean coal blending to control the final
product quality. The simple exercises, based on a simulated ROM washability, illustrated that the
plant yield was improved from 40.14% for the first strategy to 41.19% for the second and third
strategies. Although the improvements were achieved without any modifications to the plant
circuits, and may appear to be relatively small, the impact is very evident when considering that
a metallurgical producer operating 6,000 hours per year could deliver 31,500 additional tons to
the market. Based on a market value of $100 per ton, the “small” improvement would provide
over $3 million in additional revenue.
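The revenue figure can be checked with simple arithmetic. The 500 tph feed rate below is an assumption, chosen because it is the rate at which a 1.05 percentage point yield gain over 6,000 operating hours reproduces the 31,500 tons quoted above:

```python
# Back-of-envelope check of the revenue figure. The 500 tph feed rate is
# an assumption chosen so that the quoted numbers are reproduced exactly.
feed_tph = 500.0          # assumed plant feed rate, tons per hour
hours = 6000.0            # operating hours per year (from the text)
yield_gain = 0.0105       # 41.19% - 40.14% = 1.05 percentage points
price = 100.0             # market value, $/ton (from the text)

extra_tons = feed_tph * hours * yield_gain
print(extra_tons, extra_tons * price)   # 31500.0 tons, $3,150,000
```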
Increasing Haul Truck Safety with Pre-Shift Inspection Training
Adam M. Schaum
ABSTRACT
On average, there are approximately ten fatal haul truck accidents per year in the
United States. The most common causes for haul truck accidents include mechanical
problems, inadequate training, and insufficient road/berm maintenance. Due to the
frequency and magnitude of haul truck accidents, new training methods are being
investigated. With the widespread availability of inexpensive and powerful computers
and newer information technology, the ability to incorporate computer based training for
miners is becoming more of a possibility. Computer based training is as effective in knowledge acquisition as traditional lecture and can also lead to a significant increase in the retention of material. Studies have also shown that more
engaging training methods lead to much more effective knowledge acquisition.
A computer-based virtual environment training system was developed to
supplement current new miner training and address the common causes of fatal accidents.
The new training system is a virtual pre-shift inspection of a haul truck, and will train the beginner haul truck operator to identify parts which look defective compared to how they look normally. The training will increase the operator’s ability to recognize
problematic parts and correctly identify the corrective action needed. Increasing the
quality of training by providing a very engaging simulated hands-on environment will
lead to safer behaviors by the trainees, and ultimately fewer accidents and fatalities.
reduce the chance a haul truck crashes into another truck, or falls over the highwall. Collision
avoidance systems do not address the two most common contributing factors to fatal accidents,
mechanical problems and lack of training. Current systems focus more on preventing the
accidents where lack of communication and road/berm problems are the contributing factors. To
address the more common contributing factors to fatal accidents, a supplemental training system
was developed. The training system is a virtual pre-shift inspection of a haul truck and will
contain three main sections. First, the user will be guided through the proper process of
conducting a pre-shift inspection. Second, the user will need to conduct a pre-shift inspection
within the virtual environment. Third, the user will be shown an animation of any part missed in
the pre-shift inspection failing in a worst case scenario. The training system will work to
encourage miners to perform a proper pre-shift inspection on the haul truck prior to operation.
Identifying mechanical problems before the truck begins operation will reduce the number of
fatal accidents which are caused by mechanical failures.
1.1 – Haul Trucks
Haul trucks are among the most common pieces of equipment found at any mining site
and can be rather large. Since haul trucks are such a common piece of equipment, numerous
accidents involving haul trucks occur. From 1995 through 2006, there were 108 haul truck
accidents which resulted in a fatality. The physical dimensions of haul trucks are considerable: their length, width, and height can range upwards of 48, 33, and 23 feet, respectively, and capacities can be upwards of 380 tons (797b Haul Truck). Fully
loaded haul trucks have an added safety concern because the added weight requires longer
driver makes throughout their shift. Haul trucks often interact with other mobile
equipment and stationary dumping stations. The mobile equipment often includes
shovels, front-end loaders, and pickup trucks. Shovels and loaders, being large in size,
can easily be seen. Pickup trucks can hide in the blind spots of haul trucks when they are
close to one another.
It is necessary for miners to be sufficiently trained to operate haul trucks. It is also important that miners who do not operate haul trucks be aware of the safety hazards these trucks impose. Miner training for surface operations is outlined in 30 CFR
Part 48 Subpart B. It is also important that all miners understand that the maintenance
and upkeep of haul trucks is important for their safe use. All equipment should be
checked prior to use to ensure the machine is in good working order. If the haul truck
does need maintenance, only qualified persons should perform the repair.
1.2 – Virtual Environments
Virtual environments (VE) provide an environment in which a user can be fully immersed. Caird defines virtual environments as “three-dimensional graphic images that are generated by computers for the expressed purpose of cognitive and physical interaction” (Caird 1996). Immersion is accomplished by allowing the user to interact with the three-dimensional objects on the screen. While inside the virtual environment, the user can travel and look at any object freely. While the terms “virtual reality” and “virtual environments” are often used interchangeably, this paper will use “virtual reality” when discussing non-interactive systems and “virtual environments” when discussing interactive systems.
Chapter 2 – Contributing Factors of Haul Truck Fatalities
From 1995 through 2006, there were 108 haul truck accidents which resulted in a
fatality. The number of fatal injuries sustained in haul truck accidents can be reduced with increased training. The virtual pre-shift inspection training will work to reduce the occurrence of fatal accidents by better training operators to identify possible hazards and recognize the appropriate steps to ensure both their safety and the safety of the personnel around them. To ensure that the training system will positively affect the occurrence of
fatalities, two studies were completed to identify the principal causes of haul truck
accidents and fatalities. The first study completed was an analysis of all fatal accidents
since 1995, and a statistical analysis to confirm the significant causes of haul truck
fatalities. The second study looked at the relationship between experience and haul truck
accidents (which may or may not have resulted in fatal injuries).
2.1 – Fatalgram Analysis
To determine the most common causes for fatal haul truck accidents, the fatal
accident reports and fatalgrams from MSHA were analyzed for every fatality involving
haul trucks since 1995. It became apparent that mechanical failures, inadequate training,
seatbelt misuse, and insufficient berms were the four most common problems.
Mechanical failures were accidents where there was a mechanical problem with the haul
truck. Inadequate training was specified for accidents where either the operator or another employee involved in the accident was not sufficiently trained to operate or be
around mobile equipment. Seatbelt misuse was classified for accidents where the driver
was not wearing a seatbelt. Insufficient berms were classified for accidents where the
haul truck came into contact with the berm, but it was either improperly constructed or
did not meet MSHA standards. The berm classification was also given to accidents
where haul trucks went over a highwall where no berm was present. A spreadsheet was created with each accident classified into the four categories listed above. Other factors were listed, but no other cause contributed to more than five accidents.
From 1995 through 2006, there were 108 haul truck accidents which resulted in a
fatality (MSHA “Fatalgrams”). Each accident report was read to find the underlying
causes; the results are illustrated in Figure 2.1. Mechanical problems were present more
frequently than all other factors.
[Bar chart omitted: number of accidents (y-axis) by contributing factor (x-axis: Seatbelt, Road/Berm, Mechanical, Training, Other).]
Figure 2.1 - Contributing Factors to Fatal Accidents from 1995 to 2006 for All Mines (MSHA “Fatalgrams”)
When looking just at haul truck accidents in metal/nonmetal mines there were 67
accidents which resulted in a fatality. The distribution of the common haul truck
seatbelt, the severity of the sustained injuries could have been lessened if one had been
worn.
2.2 – Statistical Determination of Significant Causes
To identify the primary causes of fatal haul truck accidents, a t-test was conducted
on the data collected from the fatalgram analysis. The statistical analysis was completed
using the software program JMP (SAS 2006). For the test, the top five causes of fatal haul truck accidents were treated as factors, and the interactions of training with seatbelt misuse, lack of communication, and mechanical failures were initially included. Using a significance level of α = 0.05, an analysis of variance (ANOVA) model was fit to the data. The t-test used the null hypothesis that the coefficients of the factors in the model were zero (indicating that the factors could be ignored). The alternative hypothesis was that the coefficients of the factors were not zero (indicating that the factors could not be ignored in the model). The first model that was
fit to the data indicated that the interaction effects between training and both lack of
communication and seatbelt misuse were not significant (their p-values were 0.126 and
0.605 respectively). A second model was then fit, disregarding the insignificant
interaction effects. All factors were found to be significant, with the p-value for all
factors (except lack of communication) less than 0.0001. The p-value for lack of
communication was determined to be 0.038, and the interaction between training and
mechanical failures obtained a p-value of 0.0004.
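In symbols, the second fitted model and its hypothesis tests take the following general form (the factor names are illustrative; the exact coding used in JMP is not given in the text):

$$Y = \beta_0 + \beta_1 x_{mech} + \beta_2 x_{train} + \beta_3 x_{belt} + \beta_4 x_{berm} + \beta_5 x_{comm} + \beta_6\,(x_{train} \cdot x_{mech}) + \varepsilon$$

$$H_0\!: \beta_i = 0 \quad \text{vs.} \quad H_1\!: \beta_i \neq 0, \qquad \text{rejecting } H_0 \text{ whenever } p < \alpha = 0.05$$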
The factors which were determined to be significant are the most likely causes for
haul truck fatalities. The model predicts that haul truck accidents in the future will likely
“powered haulage.” To reduce the amount of data in the database, only accidents that
resulted in death or serious injury were used. This process was completed for data from
the years 1994 through 2004. Three types of experience were reviewed; total mining
experience, experience at the current mine, and experience at the current job. When
looking at the data, the units of time could not easily be determined. Several entries could have been in either weeks or years, and a small portion of the data contained work experience values which, read as weeks, were longer than the miner could have lived or reasonably worked. Because the units of time could not be sufficiently determined, statistical analysis was not completed to look at the effects of age and experience. The data was deemed unreliable for quantitative study, but reasonable assumptions were made in regard to the units of time and the analysis was completed qualitatively.
Focusing on total mining experience, more than half of all haul truck accidents
occur within the first two years of the operator working in the mining industry. The data
also indicates that the accident frequency becomes smaller as total experience increases.
When looking at all haul truck accidents from 1994 to 2004, roughly one third of all
accidents occurred within the first year of the worker’s total mining experience.
Disregarding total mining experience and focusing on experience at the current
mine, the frequency of haul truck accidents was reviewed. More than two-thirds of all
truck haulage accidents occurred at the mine where the worker was first employed. It is
apparent that the accident frequency for workers with less than one year of experience at
the current mine is much greater than the frequency for the same period of total mining experience.
Focusing only on the experience from operating haul trucks, the accident frequency was once again reviewed. Similar to total experience, roughly one third of all haul truck accidents occur within the operator’s first year on the job. This
comparison disregards experience other than the current job. Similar to the current and
total mine experience, the majority of the accidents occur with little experience.
The trend of all types of experience with accident frequency for haul trucks shows
that the majority of all accidents occur with less than a year of experience. For the total
mining experience and the current job experience, there were fewer accidents in the first five weeks than in weeks five through ten. This could partly be due to the training which often
requires an experienced worker to ride along with a new worker. The sudden rise in
accidents could be the result of the worker becoming comfortable with operating the haul
truck and becoming less thorough during pre-shift safety inspections. To effectively address haul truck safety, continued training and a sustained emphasis on safety are needed. Keeping the worker focused on safety will help to reduce the number of
accidents with haul trucks.
When looking at accident frequency with age, it appears that there is little effect of age. Once again this data is unreliable (some miners had an age of zero), but there did appear to be a roughly normal distribution of accidents with the mean around 42 years.
Further analysis cannot be conducted without knowledge of the ages of haul truck drivers
in the workforce.
2.4 – Summary of Accident Causes
There are many contributing factors to fatal haul truck accidents. Experience,
mechanical malfunctions, lack of berms, poor communication, and blind spots are the leading
causes of haul truck accidents. There are solutions currently in place to help reduce the
number of haul truck accidents. The solutions range from simple worker training to new
technology which aids the equipment operator.
There is a trend for more accidents to occur with less experience. The lack of
experience leads to other contributing factors to accident causes. Someone with little to
no experience might have trouble identifying hazards while driving, or simply not know
enough about the haul truck to know if there is something not working properly.
Training is the most effective solution to lack of experience.
The majority of haul truck accidents are caused by some mechanical malfunction.
It was found that in most fatalities with a mechanical cause, lack of training was also a
contributing factor. The major downfall of pre-shift inspections is that if the operator has never seen damaged equipment, then they might not be able to recognize mechanical problems when they see them. Once again, better training in recognizing possible mechanical problems might reduce the occurrence of accidents in which mechanical malfunctions are the cause.
Berms are a necessary safety measure which will keep haul trucks from traveling
over a highwall. If the berm is improperly constructed or absent entirely, then there is nothing preventing a haul truck from traveling over a highwall. The only solution for
inadequate berms is to construct berms which meet MSHA standards. The current
MSHA standard for berms is located in 30 CFR Part 56.9202.
Accidents which occur due to a lack of communication between the driver and other personnel in the area usually leave the driver with little to no injury. Lack of communication does, however, lead to accidents where a haul truck runs over a
worker or another smaller vehicle. Blind spots coincide with communication problems in
haul truck accidents. Most workers do not know the exact blind spots of larger equipment if they have never operated it. Providing sufficient communication between vehicles and equipment will help reduce haul truck and vehicle collisions.
Ensuring that all workers know and understand the blind spots of larger equipment will
also help reduce the accidents where communication is a cause. The safety hazards of
larger equipment could be covered in miner training so that all mine workers are aware of
the hazards of haul trucks. New technology is being developed and is already available
which helps reduce accidents involving blind spots. Currently, there are more
technological solutions to blind spots than any other cause of accidents.
Knowing the principal causes of fatal haul truck accidents, the currently available techniques to increase safety need to be identified. Chapter 3 identifies the current haul
truck safety techniques to determine which of the contributing factors for haul truck
fatalities are not specifically addressed.
Chapter 3 – Current Measures to Increase Haul Truck
Safety
To determine what can be done to reduce the number of fatal haul truck accidents,
the current measures to increase haul truck safety need to be reviewed. The current literature and federal regulations about haul truck safety were reviewed to determine what
is currently available for haul truck safety. It was found that there are many areas which
are currently being addressed to reduce fatal haul truck accidents. Federal regulations,
miner training, and haul truck safety systems are currently available measures which
work to reduce fatalities. While current measures do increase safety and work to reduce
fatalities, they do not fully address the two principal contributing factors to haul truck
fatalities (lack of training and mechanical problems).
3.1 – Current Federal Regulations on Mine Haul Truck Safety
Due to the inherent safety risks involving mining, the Mine Safety and Health
Administration has defined regulations concerning the required safety training that each
miner must go through. For newly hired inexperienced miners, a total of 24 hours of
training is required (four hours of which need to occur before the miner begins working). Until the 24 hours have been completed, the miner must work alongside an experienced miner who can observe that the new miner is working safely. The specific subject areas required to be covered in the miner training are listed in the Code of Federal Regulations as 30 CFR § 46.5. Similar to new miners, newly hired experienced miners
must also undergo training. Training for newly hired experienced miners is focused more
on site-specific characteristics and reinforcing the miner’s statutory rights (newly hired
experienced miner training is defined under 30 CFR § 46.6). Similar to the training for
newly hired miners, when a miner changes jobs they must have new task training. New
task training is required for any miner who has been reassigned to a task in which they
have no training (new task training is detailed under 30 CFR § 46.7). For all miners, 8
hours of annual refresher training is required. The annual refresher training will reinforce
the key health and safety hazards miners are exposed to at the mine (annual refresher
training is listed under 30 CFR § 46.8). The required training exists to ensure that all workers in the mine have knowledge of the health and safety risks to which they are exposed.
Federally mandated training does have drawbacks. The content of the necessary training is not expressly detailed; it is left to the individual mine operators to determine. Mandating training based on time rather than content could lead trainers to spend more time focusing on the hours of training remaining than on covering enough material. There are also no mandated procedures for ensuring that the trainee learned any material.
3.2 – Current Miner Training
Current literature about available techniques for miner training was reviewed to
determine what is being accomplished, and what can be accomplished through training.
It was found that current training is not being conducted in an effective way, and that
miners are not exposed to better training methods. Miner training is currently approached
as a “one size fits all” model which is not appropriate. Differences in learning between
younger and older workers are not addressed in current miner training, and the most
effective method of training (hands-on training) is not commonly utilized.
Kowalski and Vaught performed a study on the principles of adult learning. The
study showed that roughly 70% of all training consists of only an instructor and simple
demonstrations, and that most mine “trainers frequently do not teach the way adults
learn.” It was found that adults respond best through “personal experience, group
support, or mentoring”(Kowalski and Vaught 2002; Peters 2002). A five point checklist
for developing a training curriculum was proposed and is outlined in Figure 3.1. It was
shown that “adult learners are task-centered and problem-centered” which leads to the
trainee becoming focused on a problem and “so are solution-driven.” It was also
determined that effective training programs for adults were “active and experienced-
based” (Kowalski and Vaught 2002). These findings coincide with the results from the study conducted by Burke et al., which indicate that hands-on training is the most effective for learning.
1. Clear Goals – What is the point of training? What
are the expected outcomes of the training?
2. Content – What content will support the stated goals?
3. Appropriate delivery mechanism – What is the best
delivery mechanism for the chosen content? Will the
delivery mechanism add or subtract to the value of the
lesson?
4. Assessment – How will you determine if the trainee
has learned the content? How will you know if the
goals have been achieved?
5. Remediation – If the trainee does not grasp the
content, what will the intervention be?
Figure 3.1 - Checklist for developing a curriculum adapted from (Kowalski and Vaught 2002)
The studies reviewed show that hands-on training is superior to traditional
training methods. By analyzing 95 studies (with roughly 21,000 total participants) from
1971 to 2003, Burke et al. identified that “the most engaging methods of safety training
are, on average, approximately three times more effective than the least engaging
methods in promoting knowledge and skill acquisition.” It was also shown that “the most
engaging methods of safety training are, on average, most effective in reducing negative
outcomes such as accidents.” Burke defined “most engaging methods” as hands-on
training and behavioral modeling, “moderately engaging” was defined as instruction with
direct feedback, and “least engaging” was defined as lectures and videos (Burke, Sarpy et
al. 2006).
For safety training, it is important that trainees learn and understand the material presented to them.
Studies have also been conducted which analyze the effectiveness of computer based training (CBT). One such study was conducted by Williams and Zahed, who concluded that computer based training was “as effective as the traditional lecture/discussion method” (Williams
and Zahed 1996). The effectiveness of the training method was defined as knowledge
acquisition. The study also concluded that “there was a significant difference in the level
of retention between CBT and lecture, with the CBT group performing better overall one
month following the training” (Williams and Zahed 1996). By finding that the
knowledge retention is greater with computer based training, the study shows that
traditional lecture is not necessarily the best model for providing training. The results of
the study also concluded that “computer anxiety had no impact on the level of knowledge
acquisition within the CBT group” (Williams and Zahed 1996). With computer anxiety
not impacting the computer based training program, it follows that computer training is
acceptable to use on individuals who have little to no computer experience. Since
participants using computer based training are also able to proceed at their own pace, the
knowledge gained from computer training does not rely on the trainee having any
computer experience. The Williams and Zahed study showed that computer based
training is as effective as traditional lecture for knowledge acquisition, and significantly
better in knowledge retention. These two factors indicate that computer based training is
an effective training tool which should be utilized when creating a safety training
program.
Two examples of utilizing virtual reality in miner training were reviewed. The
first study was an incident recreation system which worked to promote “a strong safety
culture … by accurate recreation of unsafe actions in the workplace, demonstrating
explicitly how they resulted in a fatality or significant injury” (Schafrik, Karmis et al.
2004). By showing the user the exact consequences of unsafe actions, the user will be
able to fully understand the consequences of unsafe actions. The major drawback of this
system is that the user may interpret the recreations as videos and not an actual incident
recreation. The second study utilizing virtual reality miner training reviewed was a
virtual pre-shift inspection of a haul truck. In this study it was determined that a VR pre-
shift inspection was “more flexible and cost effective than other available training
methods” (Ennis, Lucchesi et al. 1999). This system required the user to learn how to use
a six-axis controller and only presented a haul truck for inspection. Although there is
some level of interactivity, forcing the user to learn a new controller would take away
time the user could spend learning more about the content.
With 70% of all miners only experiencing the traditional lecture method of
training, the other (and more effective) methods are not being utilized. Training is
essential for safety, and should be the focal point of all safety measures. By not utilizing
newer techniques for safety training, the effectiveness of the training is negatively
impacted.
3.3 – Currently Available Haul Truck Safety Systems
Current techniques are in place to reduce the occurrence of haul truck accidents.
The current safety techniques mostly focus on dump point proximity alerts, collision
avoidance systems, and braking systems. The primary goal of all techniques is to reduce
the number of haul truck accidents and increase safety.
Dump point proximity alerts consist of cameras and warning lights. Camera systems have a camera mounted in the rear of the truck and a monitor in the cab, giving the driver the capability to see exactly what is behind the truck while it is in
reverse (MSHA). Additional training usually coincides with the implementation of
camera systems. The training allows the operators to better judge distances while looking
through the monitor. Some camera systems have the monitor disabled while the haul
truck is not in reverse. This is to lower the chance for the operator to become distracted
by looking at the monitor. Warning lights consist of a laser and a light. When the haul
truck backs into the dump point, the light comes on to notify the operator that they are at
the dump point. Both camera and warning light systems are to be used in conjunction
with adequate berms. In the event that the proximity alert malfunctions, training is
needed so that operators will not solely rely on cameras and lights to determine the dump
point.
Collision avoidance systems are primarily used to reduce the frequency of
equipment collisions. Common collision avoidance systems include radar systems, radio frequency identification (RFID), global positioning system (GPS), and video camera systems (MSHA). Radar systems are a low-cost collision warning system. The drawback of using only a radar system is that rocks, highwalls, and loaders trip the alarm. It is most often recommended that a radar system be used in conjunction with video cameras, so that the driver can see exactly what sets off the alarm. RFID systems require that all equipment and miners wear RFID tags which broadcast a specific radio frequency. When two tags get too close together, a warning will sound. The major drawback of RFID is that the specific location of the other person/vehicle is not known. GPS systems require all vehicles to be mounted with a GPS receiver. Cost and coverage are the major drawbacks of a GPS system. Video based collision detection systems work by having cameras mounted on the haul truck and a monitor inside the cab. The driver is able to see the blind spots of the truck on the monitor. Drawbacks for video based systems include the possible distraction of the driver from looking at the monitor. While no one technology
stands above the rest, combinations of the aforementioned systems and other technology
do look promising. Combining radar systems or RFID with video cameras will allow the
driver to see exactly where other people and equipment are located. Combining GPS and
wireless networking technology allows real-time tracking of every piece of mobile
equipment in a mine (Dagdelen and Nieto 2001).
Braking systems are a standard feature found on all mobile equipment. There are
three main types of braking systems common to all haul trucks: the service brake, the secondary (emergency) brake, and the parking brake. The service brake serves as the main braking system used to stop the machine and hold it stationary; it acts much in the same way as the brakes in a car. The secondary (emergency) brake works as a backup in
the event that the service brake does not work. Secondary brake systems often have less
braking capacity than the service brake and should only be used in an emergency. The
parking brake is only intended to hold a stopped machine. In the event that the parking
brake is used to stop a machine, it must be tested for parking capacity before the machine
can be used again. Some machines may come equipped with a retarder brake. The
retarder brake is used only to control the vehicle’s speed while traveling downgrades. All
manufacturers include detailed instruction on the proper maintenance and inspection of
braking systems. Should a problem in the braking system be found, only qualified and
trained personnel are to perform maintenance (MSHA).
Currently, driver training and braking system regulation are the only MSHA-required techniques to increase truck haulage safety. The implementation and use of all other technologies is solely dependent on the mine operator. The use of collision avoidance and proximity detection systems is increasing as mine operators notice the cost effectiveness of prevention systems compared to the cost of haul truck accidents. While it is common for new technology to be used in accident avoidance, there is little technology readily available for driver training.
Chapter 4 – Development of the Training System
The training system developed addresses the common causes of haul truck
accidents and fatalities defined in Chapter 2. The virtual pre-shift inspection training
program will allow the user to gain a better understanding of what the parts need to be inspected for, and what the possible outcomes would be if the parts were left unchecked. With the added experience gained from the training program, the likelihood of accidents resulting from ignored mechanical problems on trucks will be reduced.
Virtual environments were chosen as the training medium for multiple reasons.
Due to the cost associated with removing a working truck from operation to practice pre-
shift training, it may be impractical for newly trained operators to obtain experience on a
haul truck. Immersive virtual environments were chosen over two-dimensional point-
and-click methods because the level of interactivity is greatly increased when the user
needs to navigate to the parts to inspect and look under, beside, and behind the inspection parts (which would not be easily accomplished in two-dimensional methods).
X3D was chosen as the modeling standard to create the virtual pre-shift inspection due to
its status as a widely accepted open standard.
It should be noted that the training system developed should not be used as a
replacement for current training, but to complement the training new operators already
receive. Due to the variations between haul trucks from different manufacturers, the
model attempts to identify the major inspection points on a generic truck. Because of the
generic model, it is essential that new operators have experience with pre-shift
inspections on the actual haul trucks they will be operating. It is also recommended that
the first few pre-shift inspections completed by a newly trained worker be overseen by a
competent person to ensure that the proper procedure is being followed. Complementing
conventional training with virtual pre-shift inspection training will reduce the number of
accidents which have mechanical failures, seatbelt misuse, and lack of training.
4.1 – Modeling in X3D
The modeling language used to create the model was X3D. X3D was chosen since it is a well-established standard for 3-dimensional web content and extensive documentation for creating models is widely available online. Officially, “X3D is an
open standard for 3D content delivery” (Web3D "X3d Faq"). Due to its status as an
open standard, content viewers are widely available (most of which are freely available).
X3D is designed to be scalable to the specific application, only including the details
necessary to complete the specific model. The scalability is defined in the first few lines
of code as the “profile” (Web3D "X3d, H-Anim, and Vrml97 Specifications"). The
different profiles available are interchange, interactive, immersive, and full. Figure 4.1
illustrates the different profile levels, and includes some of the components available at
each level. The training program developed uses the immersive profile due to the use of
the interactive menus and audio playback.
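For reference, the profile declaration appears in the opening lines of every X3D file. A minimal skeleton for the immersive profile (illustrative only, not taken from the training system's source) looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE X3D PUBLIC "ISO//Web3D//DTD X3D 3.1//EN"
  "http://www.web3d.org/specifications/x3d-3.1.dtd">
<X3D profile="Immersive" version="3.1">
  <head>
    <meta name="title" content="Haul truck pre-shift inspection"/>
  </head>
  <Scene>
    <!-- shapes, lights, viewpoints, sensors, and scripts go here -->
  </Scene>
</X3D>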
Figure 4.1 - X3D Profiles adapted from (Web3D "What Is X3d?")
The easiest way to create X3D content is to use a modeling program. To create the model for the pre-shift inspection training, the program Flux Studio from Media Machines was used; it is freely available for academic and non-commercial use. Within
the Flux Studio program, parts could be created or imported into the scene and allowed
simultaneous views from different angles or cameras within the scene. Basic X3D
models include objects, lights and cameras which are all referred to as nodes. All nodes
can be grouped so that they can be translated, rotated, scaled, or modified together. Other
features included in X3D models include the animation of any node and environmental
effects such as fog. Animations are created by setting the translation, rotation, and scale
for nodes, at keyframes along the animation timeline. The X3D code stores the
placement of the objects at each keyframe which the content viewer uses to create the
animation and display to the user. Camera effects in animations are included by
animating the camera node itself. Occasionally, it is convenient to have one long
animation separated into multiple animation nodes which automatically trigger one
another. Setting up multiple animations will allow for changing the camera, and greater
synchronization with audio. When set up properly, the user will not notice the change
from one animation to another. Fog effects are available within the immersive profile,
but were not included in the training model.
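A keyframe animation in X3D is wired together from a TimeSensor, an interpolator, and ROUTE statements. A minimal sketch (node names hypothetical, not taken from the actual training system) that raises a dump-bed group over four seconds might look like:

<!-- clock driving the animation; key lists keyframe times as fractions,
     keyValue the translation at each keyframe -->
<TimeSensor DEF="BedClock" cycleInterval="4" loop="false"/>
<PositionInterpolator DEF="BedLift"
    key="0 0.5 1"
    keyValue="0 0 0  0 1.5 0  0 3 0"/>
<Transform DEF="DumpBed">
  <!-- dump bed geometry -->
</Transform>
<!-- the sensor drives the interpolator, which drives the transform -->
<ROUTE fromNode="BedClock" fromField="fraction_changed"
       toNode="BedLift" toField="set_fraction"/>
<ROUTE fromNode="BedLift" fromField="value_changed"
       toNode="DumpBed" toField="set_translation"/>

Chaining several such sensor/interpolator pairs, with the end of one triggering the start of the next, produces the multi-part animations with camera changes described above.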
The X3D file is an XML (extensible markup language) ASCII file which contains the code to generate the 3D model. Figures 4.2 and 4.4 show the code for a cube and the modeled cube, respectively. The cube (like cylinders, cones, and spheres) is a native object in X3D. The code in Figures 4.2 and 4.3 was generated by the modeling program (Flux Studio). Native objects are built into the X3D standard and require less code to model than objects stored as an indexed face set (IFS). The IFS code for the same cube is included as Figure 4.3. Indexed face set objects are broken into triangles, whose coordinates are stored in the code. The IFS code in Figure 4.3 shows that there were twelve triangles which made the cube. As can be seen, the code to store a single IFS object is much larger than the code to store a native object. Although a simple cube modeled as an IFS needs more code, there are added advantages when working with textures and lighting. Larger and more complicated objects are actually more efficiently stored as single IFS objects, rather than the multiple native objects which would be required to create them. For the training system, most objects were stored as indexed face sets, due to the complex nature of the parts being modeled.
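Those figures are not reproduced here, but the contrast is easy to sketch. A native Box node declares a cube in a single line, while the equivalent IndexedFaceSet must list the eight corner coordinates and the twelve triangles that reference them (the triangulation below is one valid choice, written for illustration):

<!-- Native primitive: the geometry is implied by the node type -->
<Shape>
  <Box size="2 2 2"/>
</Shape>

<!-- The same cube as an IndexedFaceSet: 8 vertices and 12 triangles,
     each triangle a run of three indices terminated by -1 -->
<Shape>
  <IndexedFaceSet coordIndex="4 5 6 -1 4 6 7 -1 1 0 3 -1 1 3 2 -1
                              5 1 2 -1 5 2 6 -1 0 4 7 -1 0 7 3 -1
                              7 6 2 -1 7 2 3 -1 0 1 5 -1 0 5 4 -1">
    <Coordinate point="-1 -1 -1, 1 -1 -1, 1 1 -1, -1 1 -1,
                       -1 -1 1, 1 -1 1, 1 1 1, -1 1 1"/>
  </IndexedFaceSet>
</Shape>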
appropriate visibility” (U.S. CFR 30 § 59.9100). Headlights need to be inspected so that
any cracks, burnt bulbs, or broken connections are found and can be fixed prior to
operation. In the event that the headlight is not properly checked during the pre-shift
inspection, poor visibility can result in fatal accidents. Figures 4.5 and 4.6 illustrate real-
life haul truck headlights, and the modeled headlights respectively.
Figure 4.5 - Actual Headlights Figure 4.6 - Modeled Headlights
4.2.2 - Ride Cylinders
Ride cylinders on haul trucks act as shock absorbers for the front of the haul
truck. When inspecting the ride cylinders during a pre-shift inspection, the bolts connecting the cylinder to the frame need to be checked. If any bolts are missing, the truck should go out of service and be repaired. Steady leaks down the cylinder are also cause for immediate removal from service. In the event that a ride cylinder failed while in operation, the operator may lose control of the truck, resulting in an accident. The ride cylinders on an actual truck and on the modeled truck are shown in Figures 4.7 and 4.8, respectively.
Figure 4.9 - Actual Brakes Figure 4.10 - Modeled Brakes
4.2.4 - Tires
Haul truck tires are composed of many different pieces which all need to be
inspected prior to operation. The flange, or the “hub,” of the haul truck tire supports the
tire and provides the connection between the axle and the tires. The flange needs to be
checked for cracks. The actual tires need to be checked on each wall, and on the tread.
Lodged rocks and cracks need to be identified and may need immediate action prior to
operation. Some small cracks in the tire rubber pose no immediate threat and can be
resolved after the shift when discovered. Lastly, the lug nuts need to be checked to
ensure that none are missing. Should two or more lug nuts be missing in any group of three, or more than three nuts in total, the problem needs to be resolved before
operation. Failure of any part in the tire system can lead to a blowout and the operator
losing control of the truck. Figures 4.11 and 4.12 show the outside of the tire system in
both a real haul truck and the modeled haul truck.
be checked during the pre-shift inspection. In the event that one tie rod fails during
operation, the operator will have no control over that particular wheel and would likely
not have time to respond and stop the truck before it crashes. Tie rods for both an actual
haul truck and the modeled haul truck are shown in Figures 4.15 and 4.16.
Figure 4.15 - Actual Tie Rods Figure 4.16 - Modeled Tie Rods
4.2.7 - Bell Crank
The bell crank on a haul truck transfers the actions from the steering wheel to the
tie rods, and is the center of the steering system. The connections to the tie rods need to
be checked, as well as the pin bracket below the bell crank. Any problems found with the
bell crank needs to be remedied before the truck goes into operation. If the pin bracket
fails, there is the chance that the entire bell crank would become dislodged making both
front wheels unresponsive to any steering. Without the operator having control of the
steering, there is a great risk for accidents. Figures 4.17 and 4.18 show the bell crank and
pin bracket on a haul truck and on the modeled truck.
On haul trucks, mud flaps act to prevent mud and rocks from accumulating on top
of the fuel and hydraulic tanks. If a mud flap is missing, it should be replaced as mud
and rocks could build up on top of the tanks and cause them to collapse or detach from
the truck due to the added weight. Mud flaps for a haul truck and the modeled truck are
shown in Figures 4.21 and 4.22.
Figure 4.21 - Actual Mud Flaps Figure 4.22 - Modeled Mud Flaps
4.2.10 - Bed Cylinders
The bed cylinders on the haul truck act to raise and lower the dump bed. They need to work in synchronization to ensure that the bed is raised level. The connections between the frame and the dump bed need to be checked, along with checking for steady leaks down the cylinders. If any connection problems or leaks are found, they need to be addressed prior to operation. If a problem goes unchecked, damage to the truck could
result when raising the bed. In extreme cases the truck may tip over when only raising
one side of the dump bed. Figures 4.23 and 4.24 show the bed cylinders on a haul truck
and on the modeled truck.
Figure 4.23 - Actual Bed Cylinder Figure 4.24 - Modeled Bed Cylinder
4.2.11 - Rock Ejectors
On haul trucks that have double tires on rear wheels, rock ejectors are necessary
to prevent rocks from becoming lodged between the tires. The rock ejectors hang from
the dump bed in between the tires and will knock any lodged rock out of the tire. The
connection between the rock ejector and the dump bed needs to be checked prior to
operation. Should the rock ejector be missing, it should be replaced prior to operation. If
the rock ejector is missing, a rock could become lodged between the tires and puncture
one of the tires, resulting in a blowout or deflation. Either case may result in the adjacent tire not being able to handle the increased pressure and failing as well, leading to an accident. The rock ejectors are shown on a haul truck as well as
the modeled truck on Figures 4.25 and 4.26.
4.2.13 - Struts
Struts serve as the connection between the truck frame and the rear axle to prevent
vertical movement. They help support the frame and should be regularly checked for leaks prior to operation. The connections on the struts should also be checked, and any problem found should be resolved prior to the truck beginning operation. If one strut
should ever fail, the entire frame could become twisted resulting in an inoperable truck.
The struts on a haul truck and on the modeled truck are shown in Figures 4.29 and 4.30.
Figure 4.29 - Actual Struts Figure 4.30 - Modeled Struts
4.2.14 - Rear Lights
The rear lights include brake lights on haul trucks, and act to signal vehicles
behind the haul truck that it is braking. Similar to headlights, they should be checked so that any burnt bulbs, cracks, or loose connections are found and fixed prior to operation. Broken brake lights could result in a rear collision between the haul truck and another vehicle.
are shown in Figures 4.31 and 4.32.
Figure 4.31 - Actual Rear Lights Figure 4.32 - Modeled Rear Lights
4.2.15 - Dog Bone
The dog bone acts as a connection between the rear axle and the frame preventing
lateral movement. The dog bone should be checked for cracks and loose connections, and any
problems found should be remedied immediately. Should the dog bone fail during
operation, the entire rear axle could move independently from the frame and result in loss
of control and accidents. Figures 4.33 and 4.34 show the dog bone on a haul truck and on
the modeled truck.
Figure 4.33 - Actual Dog Bone Figure 4.34 - Modeled Dog Bone
4.2.16 - Hydraulic Tank
The hydraulic tank stores the hydraulic oil the truck uses for various components
such as the brake systems. The tank needs to be checked for any cracks and leaks, and the gauges checked for appropriate oil levels. If problems are left unnoticed, oil
pressure could be lost during operation and components such as the brake system would
become unresponsive to the operator, which would end in an accident. The hydraulic tanks of a haul truck and the modeled truck are illustrated in Figures 4.35 and 4.36.
Figure 4.35 - Actual Hydraulic Tank Figure 4.36 - Modeled Hydraulic Tank
4.3 – The Training System
The training system was developed in X3D and can be implemented over the
internet, or as a standalone application. The complete system will contain three major
parts: a virtual tour, a pre-shift inspection, and the results. The program was developed
assuming that the user will have gone through the standard classroom-like safety training
on haul trucks. To complete the program start to finish, the user can expect to spend
roughly 30 minutes.
The virtual tour will take the user on a guided pre-shift inspection. The inspection
points are shown and the corrective action is given. The inspection points were detailed
earlier in section 4.2. There is the opportunity, within the tour, to go back and revisit
parts which may have been unclear. The total tour lasts approximately ten minutes, and
at the end the user has the ability to freely move around and look at the haul truck. The
virtual tour only shows the correctly working parts, leaving the user unfamiliar with what
the failed parts will look like. It is in this part of the program where the user learns about
with a broken haul truck, and asked to perform a pre-shift inspection. The broken parts
will be positioned around the truck in a random manner. The most common problems
(usually involving rocks lodged in tires) will occur more often than the less common
problems (such as the bell crank pin bracket failing). There will be no more than three
different parts broken during one pre-shift simulation. An example of the broken
hydraulic tank is shown in Figure 4.39. To identify a part as needing immediate action, action after the shift, or no action at all, the user will click on the part to see the action window. The action window will allow the user to select to take immediate action, take action after the shift, or take no action. Each part which could fail will show an action window when clicked. It is up to the user to decide if there is a problem and, if there is, what level of action needs to be taken.
Figure 4.39 - Broken Hydraulic Tank in the Pre-Shift Inspection
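One plausible way to wire such an action window in X3D (a sketch under assumed node names, not the training system's actual source) is a TouchSensor on the part, a small Script that reacts to the click, and a Switch whose whichChoice field hides or reveals the menu:

<Transform DEF="HydraulicTank">
  <TouchSensor DEF="TankTouch" description="Inspect hydraulic tank"/>
  <Shape><!-- tank geometry --></Shape>
</Transform>

<!-- whichChoice="-1" hides all children until a part is clicked -->
<Switch DEF="ActionWindow" whichChoice="-1">
  <Group><!-- menu: immediate action / action after shift / no action --></Group>
</Switch>

<Script DEF="ShowMenu">
  <field name="clicked" type="SFTime" accessType="inputOnly"/>
  <field name="choice" type="SFInt32" accessType="outputOnly"/>
  <![CDATA[ecmascript:
    function clicked(value, timestamp) {
      choice = 0;  // reveal the menu (the first child of the Switch)
    }
  ]]>
</Script>

<ROUTE fromNode="TankTouch" fromField="touchTime"
       toNode="ShowMenu" toField="clicked"/>
<ROUTE fromNode="ShowMenu" fromField="choice"
       toNode="ActionWindow" toField="whichChoice"/>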
After the user completes the pre-shift inspection, a resulting animation will play.
If the user either missed a broken part, or identified a critical problem as an after shift
problem, the animation will display that particular part failing. If the user missed more
than one item, the animation will show the consequences of missing the more severe part.
A screenshot from the animation for the broken hydraulic tank is shown as Figure 4.40.
4.4 – Testing the Effectiveness of the Training Program
A pilot study using a limited number of participants was conducted to determine
if the training program was effective enough to begin testing with a larger population.
The pilot group consisted of six participants with varying degrees of haul truck experience (from never having seen a haul truck to former haul truck operators). All
participants were presented with the same broken haul truck during the virtual pre-shift
inspection portion of the training, and completed a post assessment survey of the training
system.
The post-assessment survey’s goal was to identify the strengths and weaknesses
of the training program. The survey consisted of six simple questions which provided an
outlet for the user to describe how effective they observed the training to be. The results
of the survey were separated into each section of the training system and overall
observations and comments of the system. The comments obtained from the surveys
provided a good insight into what needs to be modified, and what is appropriate for
further testing.
Chapter 5 – Conclusions
From 1995 through 2006, there were 108 fatal haul truck accidents in the United
States. The current measures available for increasing haul truck safety focus more on
collision avoidance, rather than prevention. With additional training to address some of
the most common factors contributing to fatal accidents, the frequency of fatal haul truck
accidents can be reduced.
All factors which contribute to fatal haul truck accidents must be considered when
developing a new training program. The study from Burke et al. indicated that hands-on training is the most effective training method. It was determined by Kowalski and Vaught that following a checklist of five steps when creating a training program will make the training program more effective. Using this checklist, the training system was planned out. First is clear goals: the goal of the virtual pre-shift inspection training is to provide haul truck operators with additional training to successfully complete pre-shift inspections. Second is content: the information about which parts to inspect and the appropriate corrective action to take comprises the content for the training program. Third is the appropriate delivery mechanism: virtual environments were chosen to convey the content. Fourth is assessment: a post-test questionnaire will be given to the trainees to test their knowledge of haul truck parts and their corrective action. Fifth is remediation: if the assessment or completion of the virtual pre-shift inspection was unsatisfactory, the trainee must restart the training program.
The trainee will begin training by completing the virtual tour of a pre-shift
inspection. They will then complete a pre-shift inspection on a virtual haul truck. Based
on their actions during the virtual pre-shift inspection, the appropriate consequence animation will play, followed by a report comparing the trainee’s responses during the pre-shift to the actual parts which needed to be identified. The trainee will then complete an assessment which will test their knowledge of haul trucks and pre-shift inspections. If the responses are unsatisfactory, the trainee will return to training starting from the virtual tour. If the responses from the user were satisfactory, they will complete the virtual pre-shift inspection training and continue further with their standard training. Figure 5.1 diagrams the training program’s process.
[Flowchart omitted: Begin → take the pre-shift inspection virtual tour → complete the virtual pre-shift inspection → watch the consequences animation → view report on what was marked correctly/incorrectly → complete an assessment; an unsatisfactory assessment loops back to the virtual tour, a satisfactory assessment ends the training.]
Figure 5.1 - Process of the Virtual Pre-Shift Inspection Training Program
5.1 – Results from the Pilot Test of the Training Program
The results of the survey were separated into each section of the training system and overall observations and comments on the system. For the virtual tour, the results indicated that some of the on-screen text was difficult to read, but that the tour was very thorough. The results concerning the virtual pre-shift inspection focused mainly on movement controls. The responses indicated that the navigation was somewhat difficult and
Adam M. Schaum
EDUCATION Virginia Tech
Blacksburg, VA 24061
M.S. Mining and Minerals Engineering (Spring 2007)
B.S. Mining and Minerals Engineering (Spring 2006)
WORK Graduate Research Assistant (May 15, 2006 – May 9, 2007)
EXPERIENCE Virginia Tech, Blacksburg, VA 24061
• Worked primarily on a NIOSH project through the Virginia Center for
Coal and Energy Research
• Contributed to a Goodyear project in the area of telegeomonitoring
(TGM)
• Assisted in preparing notes, teaching, and grading in various
undergraduate courses in Mining and Minerals Engineering
Treasurer (May 2005 – May 2006)
Burkhart Mining Society, Blacksburg, VA 24061
• Implemented a new standard for keeping financial records
• Tracked the accounts for the student societies:
o Burkhart Mining Society (Student chapter for SME)
o Intercollegiate Mining Competition Team
o International Society of Explosives Engineers (ISEE)
o National Sand, Stone, and Gravel Association (NSSGA)
• Designed logos and assisted in the procurement of new merchandise
Mining Engineering Intern (May 15, 2005 – July 29, 2005)
Vulcan Materials, Hanover, PA 17331
• Performed quality control tests
• Assisted in checking screens
• Assisted in plant maintenance
• Performed miscellaneous clean-up tasks around the site
Mining Engineering Intern (June 1, 2004 – July 29, 2004)
Martin Marietta Aggregates, Boonsboro, MD 21713
• Assisted in mechanical shop work
• Carried out plant maintenance
• Performed minor clerical work on fuel usage data and on-shift hours
• Gained a variety of experience in the aggregates industry
ACKNOWLEDGEMENTS
I would like to express my deepest gratitude and thanks to my advisor and my mentor Dr.
Gerald H. Luttrell for his guidance and valuable insight throughout this research. He supervised
this dissertation with great patience and keen interest. He contributed to it through his invaluable
suggestions and criticism. The valuable comments and suggestions of the committee members
Dr. Roe-Hoan Yoon, and Dr. Greg Adel are also gratefully acknowledged.
The financial support from the U.S. Department of State and Mining and Minerals
Engineering Department is greatly appreciated.
I would like to extend my sincere appreciation to Mr. Robert C. Bratton for his enormous
help and support in all aspects throughout my studies. I am thankful to Dr. Rick Honaker and Mr. Tathagata Ghosh for their valuable suggestions and support during my field test work in India.
I would like to give sincere thanks to Mr. Shankar Ramaswamy, Mr. Kadri and the whole
staff of Eriez India Ltd. for their help in getting all the road permits and arranging transportation
to move the pilot scale testing unit to different sites in India. Special thanks to Mr. Ganeshan for his enormous help in solving electrical issues at the field sites in India.
I would also like to acknowledge the cooperation and support extended by the management and staff of Aryan Energy, Bhushan Power and Steel, and Kargali Washery (Coal India Ltd).
My sincere gratitude to Dr. Erik Westman for helping me in understanding the model
software and solving problems during the simulations. I am thankful to Kathryn, Carol, and Gwen for arranging my travels and financial documents, and for being patient with me and helping me out whenever I needed it.
I would like to dedicate my thesis to my parents and family staying miles away but
praying and wishing for my best. Their belief in me motivates me all the time and makes me
According to the World Energy Outlook 2006, non-Organization for Economic Cooperation and Development (OECD) countries will consume 6.4 billion metric tons of coal in
2030, an increase of 3.7 billion tons over the reference case of 2003. The increased coal use will
generate 13.6 billion tons of CO₂, over and above what is being released to the atmosphere today by non-OECD countries. This increment will be twice the amount of the CO₂ generated in North America in 2003 (6.8 billion metric tons).
The Asia Pacific Partnership (APP) on Clean Development and Climate was formed to
address this issue, and its approach is to help the developing countries in the region to adopt
clean energy technologies and thereby minimize the greenhouse gas (GHG) emissions. This is
consistent with the 1997 Kyoto Protocol, which mandates international cooperation to encourage
worldwide implementation of GHG abatement technologies. In general, net efficiencies of coal
use in non-OECD countries are considerably lower than in developed countries. Therefore,
transfer of clean coal (CC) technologies and diffusion of expertise is the key to enable non-
OECD countries to reduce GHG emissions from coal combustion.
1.1.1 Need for Dry Processing
Several International Energy Agency (IEA) publications have suggested that increasing
the availability of high quality coals in India is an essential step toward the deployment of state-
of-the-art clean coal technologies. Unfortunately, Indian coals are of poor quality with high ash content and are difficult to clean, due to the ash-forming minerals being finely disseminated in the coal matrices. Much of the coal burned for power generation is raw coal containing 35-50% ash. However, these power plants were originally designed to handle 25-35% ash coal, thus
resulting in problems such as low thermal efficiencies, high operating and maintenance costs,
erosion, difficulty in pulverization, low radiative transfer, excessive amount of fly ash containing
large amounts of unburned carbons, etc. Further, transportation of high-ash coals is energy
intensive, which causes shortages of rail cars and trucks.
The washing of thermal coal in India is typically carried out to target less than 34% ash.
In 2001, the Ministry of Environment and Forest (MEF) promulgated new regulations mandating that coals be cleaned to less than 34% ash content if transported more than 1,000 km from pit-heads, or if burned in urban, environmentally sensitive, or critically polluted areas, irrespective of their distance from the pit-head (CPCB, 1997). The coals consumed at the pithead
and within a rail distance of 1000 km can be burned without washing.
Another potential problem in applying coal cleaning technologies is that conventional
processes rely on the use of water as the separating medium. Because wet cleaning adds water to
product coal in the form of surface moisture, only the coarse size fractions are cleaned and the
un-cleaned fine coal is added back to the cleaned coarse coal. Part of the moisture is drained off
during transportation, particularly when the coal is transported over a distance greater than a thousand kilometers. On the other hand, the moisture content remains high when the shipping
distance is less than 700-800 km. Therefore, the wet-cleaning process is difficult to justify for
pithead plants. Further, water is a scarce resource in most of the coal mining regions of India.
It is widely accepted that washing would dramatically improve the calorific value of a
coal, its size-consist, and other qualities. The beneficiated coal can reduce erosion rates by 50-
60% and maintenance costs by 35% (Couch, 2002). More importantly, use of beneficiated coals
can increase thermal efficiencies by 2-3% on existing pulverized coal (PC) boilers, and possibly
as much as 4-5%. A change in efficiency from 28 to 33%, for example, can reduce CO₂ emissions by up to 15%.
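This figure follows from the fact that, for a fixed electrical output, the fuel burned (and hence the CO₂ released) scales inversely with thermal efficiency:

$$\frac{\Delta \mathrm{CO_2}}{\mathrm{CO_2}} = 1 - \frac{\eta_{old}}{\eta_{new}} = 1 - \frac{0.28}{0.33} \approx 15\%$$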
According to Couch (2002), India could reduce CO₂ emissions to nearly 45% of its present level by using state-of-the-art technologies related to coal quality,