Virginia Tech | Andrew W. Storey Spatial Data Analysis
4 Spatial Data Analysis
4.1 Introduction of Methods
To begin the analysis of the data recorded during the rockfall testing, the most
appropriate analytical methods were investigated. The data collected during the testing can be
considered multivariate in nature, meaning that each observation has several associated variables.
For each rock, a value was recorded for length, width, impact distance, rollout distance, etc. This
data is valuable for understanding the nature of a rockfall, but unfortunately, the complexity of
the problem increases with the addition of more variables (Davis, 2002). Each variable can be
thought of as an axis in multidimensional data space, but even the visualization of a graph with
just three axes can be a challenge (O'Sullivan & Unwin, 2003). How much more difficult is
visualization of a data set with six axes, like the one described in this paper?
To solve this problem, researchers have developed what is called multivariate analysis.
Multivariate analysis enables researchers to wade through the abundance of variables and data
more easily (Davis, 2002). Therefore, statistical methods of multivariate data analysis can be a
very powerful tool for breaking down a data set of the kind described in this paper.
For analysis of this data, two specific methods were chosen with the goal of
understanding the main factors of a rockfall. The first technique used was Principal Components
Analysis in order to identify the most important factors affecting a rockfall. Secondly, Cluster
Analysis was used to understand the structure of the data. The author has not found examples of
these techniques applied to the study of rockfalls. Therefore, one aspect of this analysis is
evaluation of these spatial data analysis methods for analyzing rockfall data.
Using the collected data, Principal Components Analysis (PCA) and Cluster Analysis can
be performed in order to discover the relationships between the measured fall data. These spatial
data analysis methods are suitable for this type of problem because of the variability inherent to a
rockfall. For example, the same rock can be dropped from the same location and result in many
different fall paths and final locations. The variability of the rock shape and rotation, coupled
with the uneven nature of a quarry wall, results in a multitude of possible fall paths. PCA reduces
the difficulty of the problem by identifying the major factors that influence a rockfall from the
original data set (O'Sullivan & Unwin, 2003). Cluster Analysis applies taxonomy to the data
with the goals of prediction of future events and identification of cause (Everitt, 1993).
Complete treatment of PCA and Cluster Analysis requires more space than is available in
this paper. A good discussion of both with ties to geologic issues such as the one studied in this
paper can be found in Davis (2002).
4.1.1 Principal Components Analysis
PCA is a technique based on the eigenvectors of a similarity matrix of the data, often a
correlation matrix. PCA begins by creating this similarity matrix; its eigenvectors define the
Principal Components (PCs) of the data set, and the corresponding eigenvalues give the amount
of variance each component explains. The number of eigenvalues equals the number of
variables; therefore, a data set will have as many PCs as measured variables. In spite of this
fact, the PCs are not the same as the original variables. PCs are underlying influences to which
the measured variables point. Therefore, the results are open to a degree of interpretation
(Davis, 2002).
The PCs correspond to the principal axes of an ellipsoid created from the correlation
matrix. The eigenvectors that define these axes are orthogonal, unlike the vectors from the
original data set (Campbell, Principal Components Analysis, 2010). Due to this orthogonality,
the PCs are uncorrelated, which allows the major influences on the issue to be observed (Jackson, 2003).
Using the PCs, each observation can be linearly transformed into a value uncorrelated with the
remainder of the set (Jackson, 2003). This uncorrelated data set is useful for further analysis,
such as Cluster Analysis, because the error associated with the correlation in the data set will
have been eliminated.
The procedure is designed so that the PCs account for the variability in the data set. By
definition, the first PC accounts for the largest percentage of variation in the data. The second
PC accounts for the variance not explained by the first PC, and so forth. Therefore, summing the
variance explained by each PC will yield the total variance in the data set. Additionally, the
PCs may allow for simplification of the data. If the first three PCs account for 80% of the
variance, only the scores for those three factors need be used in further analysis, for example.
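The mechanics described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the NCSS implementation, and the data below are randomly generated stand-ins for the six measured variables, not the rockfall measurements:

```python
import numpy as np

# Hypothetical observations: one row per test rock, six columns standing in
# for length, width, impact distance, rollout distance, wall height, wall angle.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
X[:, 3] = X[:, 2] * 0.8 + rng.normal(scale=0.3, size=50)  # correlate rollout with impact

# Standardize, then eigendecompose the correlation (similarity) matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = np.corrcoef(Z, rowvar=False)           # 6x6 correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)       # eigh: for symmetric matrices
order = np.argsort(eigvals)[::-1]          # sort PCs by variance explained
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()        # fraction of total variance per PC
scores = Z @ eigvecs                       # uncorrelated PC scores per rock
```

Summing `explained` gives 1, mirroring the statement that the variances explained by the PCs sum to the total variance, and the columns of `scores` are mutually uncorrelated, which is the property exploited later in the cluster analysis.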
As described above, PCA has two main benefits: the creation of an uncorrelated data set
and the identification of the main factors which cause the observational variance. An
uncorrelated data set is important for many data analysis techniques, but correlation is inherent in
imperfect measurement techniques. PCA addresses this issue. PCA also highlights the most
important factors influencing the observations, even if interpreting the PCs is difficult or
subjective (O'Sullivan & Unwin, 2003).
4.1.2 Cluster Analysis
Grouping samples based on similar characteristics is important to discovering the
meaning behind the raw data. A table full of measurements does not give a researcher any
insight. By arranging the data into related sets, a researcher can better understand the issue
through a deeper knowledge of prediction and aetiology, or cause (Everitt, 1993). Classification
is a tool that begins to organize the data set into something more useful.
One method of data classification is Cluster Analysis. Generally speaking, Cluster
Analysis takes a set of samples and groups the data according to similarity of measurement so
that each group is “homogenous and distinct” (Davis, 2002, p. 487). The procedure begins by
creating a matrix of similarity between each sample and every other. This matrix can be
composed of the original data or the uncorrelated scores from PCA. To begin defining the
clusters, the distance between measurements must be calculated. This step is accomplished with
a measurement of similarity; a common one is the Euclidean Distance, which works well for
ratio and interval type data (Campbell, Cluster Analysis, 2010). A linkage strategy must also be
applied to the matrix to combine the individual samples into similar clusters based on the
distance calculated between samples. Many linkage strategies exist including Centroid, Simple
Average, and Minimum Variance. Once the samples have been sorted into clusters, the data can
be displayed graphically and analyzed.
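A minimal sketch of this procedure, a Euclidean distance matrix combined with a Simple Average linkage strategy, is shown below. The samples are invented for illustration, and NCSS's actual implementation will differ:

```python
import numpy as np

# Invented samples forming two obvious groups in a 2-D measurement space.
samples = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
                    [8.0, 8.0], [8.2, 7.9], [7.9, 8.1]])

def euclidean_matrix(X):
    """Pairwise Euclidean distances between all samples."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def average_linkage(X, k):
    """Agglomerate samples into k clusters; cluster-to-cluster distance is
    the mean pairwise distance between members (Simple Average linkage)."""
    clusters = [[i] for i in range(len(X))]
    D = euclidean_matrix(X)
    while len(clusters) > k:
        best = (0, 1, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = D[np.ix_(clusters[a], clusters[b])].mean()
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        clusters[a].extend(clusters[b])   # merge the two closest clusters
        del clusters[b]
    return clusters

groups = average_linkage(samples, k=2)
```

With these samples the two invented groups are recovered exactly, because every within-group distance is far smaller than any between-group distance.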
4.2 Methodology
The basic procedure performed for this analysis is described below. Microsoft Excel was
used as an initial system of data storage and presentation. All statistical analyses were performed
with NCSS Statistical software. Finally, Excel was also used for displaying the statistical results.
4.2.1 Principal Components Analysis
PCA has been selected to begin analyzing the rockfall testing data. Microsoft Excel has
been used for data management and display, and NCSS Statistical software has been employed
to perform the actual calculations.
After bringing the data into NCSS, the PCA command was chosen from the Multivariate
Analysis menu. In the Principal Components window, the variables length, width, impact
distance, rollout distance, wall height, and wall angle were chosen as the inputs for each tested
rockfall. Figure 4.1 shows the parameters chosen for this analysis in NCSS.
Figure 4.1: PCA Parameters
Once the inputs were chosen, the command was executed. The results were transferred back to
Excel to be displayed for analysis. Additionally, the uncorrelated principal component scores
were recorded for use in the subsequent cluster analysis.
4.2.2 Cluster Analysis
Cluster Analysis has been chosen to analyze the data collected from the rockfall testing.
This analysis has been facilitated by the use of Microsoft Excel for data management and display
and NCSS Statistical software for cluster analysis.
The data for each sample was first recorded in a Microsoft Excel spreadsheet. This data
was then brought into NCSS Statistical Software. In this program, the K-Means clustering
command was chosen. In the Cluster Analysis window, the desired options were selected with
the goal of finding three clusters. NCSS then outputted various statistical measures from the
analysis and grouped the samples into the desired three clusters. The data was then copied into
Excel where cluster graphs were created.
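The K-Means step NCSS performs can be sketched as follows. This is an illustrative reconstruction with invented data, not the NCSS code or the thesis measurements:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0, init=None):
    """Plain K-Means: assign samples to the nearest centroid, recompute each
    centroid as the mean of its members, repeat until stable."""
    rng = np.random.default_rng(seed)
    centroids = (np.asarray(init, dtype=float) if init is not None
                 else X[rng.choice(len(X), size=k, replace=False)])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Euclidean distance from every sample to every centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its cluster (keep it if empty).
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Invented stand-in data: three loose groups of samples.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 2))
                  for c in ([0.0, 0.0], [5.0, 5.0], [0.0, 5.0])])
labels, centroids = kmeans(data, k=3)
```

Each iteration assigns samples to the nearest centroid and moves each centroid to the mean of its members, stopping when the centroids no longer change.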
4.3 Results
Before accepting the results found from PCA and Cluster Analysis, both methods were
evaluated specifically as analytical techniques. Prior uses of these techniques were investigated.
Although no examples were found for rockfall data, both have been used for geologic data.
Additionally, the rockfall data collected for this project meets the criteria of multivariate data for
which PCA and Cluster Analysis are appropriate methods. Secondly, the results of the
techniques were compared to prior research and observations. The similarity between the two
again suggests that PCA and Cluster Analysis are valid methods for rockfall analysis.
4.3.1 Principal Component Analysis
From the PCA run on the test results, three main principal components were found.
These three principal components account for 89.4% of the variation. The loading chart of PC-1
can be seen in Figure 4.2.
Figure 4.2: PC-1 Loading Chart
This first principal component is determined to be a factor related to wall configuration because
wall height and angle clearly affect impact and rollout distances. The designation of this
principal component is supported by observation of the rockfalls in the field. The configuration
of the wall affected the impact distance, especially, as well as the rollout distance. The wall
configuration controlled whether the rock hit the wall at all and where it hit, which, in turn,
influenced the impact and rollout distances. Rocks which hit the wall tended to impact and roll out farther
from the toe.
The loading chart of the second principal component can be seen in Figure 4.3.
Figure 4.3: PC-2 Loading Chart
Rock size and shape clearly control the second principal component, which is expected to be an
important characteristic in rockfalls. Field observations also support this principal component.
Rock shape influenced the angular momentum of a rock as it fell and bounced on the wall and
floor, and increased angular momentum led to greater rollouts. Rock size also influenced the
rollout because larger rocks tended to roll out farther.
The loading chart of the third principal component can be seen in Figure 4.4.
Clusters 1 and 2 appear to be well defined along an axis between PC 2 and 3. Apparently, the
combination of these two principal components is a key part of a rockfall for a normal quarry
wall. Observations of the rockfalls support this trend. Larger rocks gained more energy due to
their size and rolled out farther, on average, than smaller rocks. The rocks in Cluster 1 are 14.2
in shorter in length and 11.2 in shorter in width, on average, than the rocks in Cluster 2, and
these rocks impacted 1.1 ft closer to the toe and rolled out 3.1 ft less. This trend does not hold,
however, for the largest rocks tested (4 ft and 5 ft), which tended to crater the toe on
impact and roll only one to two feet, if at all. The scatter in Cluster 3 shows that irregular walls
with large launch features change this aspect of a rockfall.
The results of the cluster analysis can also be plotted on other axes to help understand the
data better. Figure 4.8 shows the clusters plotted against impact and rollout distances.
Figure 4.8: Impact vs. Rollout Distances
Again, Cluster 3 is from the launch feature test. A correlation exists between the impact and
rollout distances; rocks that first hit farther from the toe tend to roll out farther. However, it is
uncertain whether these rocks actually rolled any farther than ones that landed closer to
the toe; they may have had larger rollout distances simply because they started rolling farther
from the toe. Figure 4.9 shows a graph of the clusters plotted against rock length and width.
5 Alternative Design Methodology Analysis
5.1 Introduction of Methods
As stated previously, safety bench design is a complicated problem. As such, researchers
have studied this issue extensively and proposed what they feel is the best design. In the same
way, this report looks to expand this research area by presenting the best practices for safety
bench design in surface quarries using the results of testing in two quarries. A number of criteria
have been compared to the field testing performed for this work, and the results of these
comparisons are discussed in detail.
An important note about this analysis is needed regarding the placement of berms on the
end of a catch bench. Historically, berms have been placed on the crest of the bench. But after
discussions with MSHA, Luck Stone has changed their berm placement so that the toe of the
berm lies two feet from the bench crest. This location, as well as covering the crest side of the
berm in fine material, reduces the risk of a rock falling from the berm and hitting personnel or
equipment. As such, the berm designs used for the following evaluations have been placed two
feet from the bench crest.
5.2 Ritchie Criteria
5.2.1 Ritchie Criteria Introduction
Some of the first research undertaken in the area of catch bench design was published by
Arthur M. Ritchie, Chief Geologist of the Washington State Department of Highways, in 1963.
Ritchie rolled rocks off a variety of walls, observed the rock’s motion during the fall, and tested
a range of ditch configurations at the toe. The results of the study include a design guide which
can be used to select the appropriate ditch depth and distance between the pavement and the wall
toe.
The work performed by Ritchie and the Washington State Department of Highways
became the design standard for highway slopes, and the guidelines were used extensively until a
change in highway regulations. Ritchie’s design includes a steep ditch, which is unsuitable for
vehicles and is not allowed for most slopes under the requirements of the American Association
of State Highway and Transportation Officials (AASHTO) and the current Manual on Uniform
Traffic Control Devices (MUTCD) published by the US Department of Transportation (Pierson,
Gullixson, & Chassie, 2001).
Figure 5.2: Ritchie Ditch, Quarry Bench Comparison
Additionally, Ritchie’s rockfall testing was performed on quarry, highway, and natural slopes
which contain numerous launch features (Ritchie, 1963). Controlled blasting techniques were
not used on the test slopes, which accurately mimics a typical quarry wall (Pierson, Gullixson, &
Chassie, 2001). The design similarities are clear; therefore, the Ritchie design criteria have been
analyzed against the data collected for this research.
5.2.2 Ritchie Criteria Methodology
To begin the comparison, the Ritchie design parameters of ditch width and berm height
had to be found for each of the nine profiles tested for this research. Ritchie’s design guide is
based on slope height and overall slope angle. The nine profiles range in height from 42.7 ft to
45.5 ft and overall slope angle from 67.3° to 81.3°. These profile minimums and maximums
yield a Ritchie ditch width of 17.5 ft and a berm height ranging from 4.7 ft to 6.2 ft. Because
adjusting the berm height by 1.5 ft as the berm moves along a wall of changing slope angle is not
practical, this analysis has set the berm height at 5.0 ft, since a taller berm would require a wider
total bench. Furthermore, the berm size has been
calculated using a 37° angle of repose, an average value for crushed stone (Bullock, Haycocks, &
Karmis, 1993; University of Portsmouth, 2001). Back calculating the bench width from
Ritchie’s ditch recommendations and the berm dimensions leads to a 26.1 ft bench.
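The back-calculation can be reproduced under one reading that matches the quoted numbers. It is an assumption of this sketch that the 17.5 ft ditch width runs from the wall toe to the berm crest, that the berm is a symmetric triangle at the angle of repose, and that its toe sits 2 ft from the bench crest:

```python
import math

# Back-calculating the 26.1 ft Ritchie bench width quoted above, under the
# assumptions stated in the lead-in (ditch width measured toe-to-berm-crest,
# symmetric triangular berm, 2 ft offset from the bench crest).
berm_height = 5.0                              # ft
repose = math.radians(37.0)                    # angle of repose, crushed stone
half_base = berm_height / math.tan(repose)     # horizontal run of one berm face

ditch_width = 17.5                             # ft, wall toe to berm crest
crest_offset = 2.0                             # ft, berm toe to bench crest

bench_width = ditch_width + half_base + crest_offset
```

Under these assumptions the computed width comes out at about 26.1 ft, matching the bench width stated in the text.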
To understand how the Ritchie bench design would have performed, cumulative
percentage retained curves were created for each profile. The curve shows the percentage of
rocks rolled off the profile that had impact or rollout distances less than or equal to a certain
distance from the toe of the wall. These curves are graphed for each profile along with the
Ritchie bench design for comparison.
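Constructing such a curve amounts to evaluating an empirical cumulative distribution of the impact or rollout distances. A sketch with hypothetical rollout distances (not the thesis data):

```python
import numpy as np

# Hypothetical rollout distances (ft from the wall toe) for one profile.
rollouts = np.array([2.1, 4.8, 5.5, 7.0, 9.3, 12.6, 14.1, 18.0, 21.5, 30.2])

def percent_retained(distances, x):
    """Percent of rocks stopping at or inside distance x from the toe."""
    return 100.0 * np.mean(distances <= x)

# Evaluate the curve on a grid and at the 17.5 ft Ritchie berm crest.
curve_x = np.linspace(0.0, 35.0, 36)
curve_y = [percent_retained(rollouts, x) for x in curve_x]
at_crest = percent_retained(rollouts, 17.5)
```

Reading the curve at a design location such as the berm crest gives the percent retention figures used throughout the comparisons below.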
5.2.3 Ritchie Criteria Results
The results of the comparison between the test data and the Ritchie criteria are presented
based on percentage of rocks retained within a certain distance from the toe. In the report,
Ritchie did not give guidance on the percentage of rocks that his design criteria would prevent
from leaving the ditch, but an Oregon Department of Transportation study from 2001 found that
the Ritchie criteria would retain 85% of rockfalls (Pierson, Gullixson, & Chassie, 2001). The
results of this analysis will be similarly presented using percent retention. One note to keep in
mind is that the actual test data were collected on flat ground without a berm in place. It is
assumed that a rock encountering the inner portion of the berm will only be slowed. Therefore,
the test data are assumed to have rolled out farther than if a berm had been in place during
testing. On the other hand, rocks that land on the outer portion of a berm will not be slowed by
the berm. With these assumptions, the berm crest is a key location for analysis.
All profiles were included in the first profile group that was analyzed. Figure 5.3 and
Table 5.I contain the cumulative percentage retained curves and specific data taken from the
curves, respectively, for all profiles.
Figure 5.3: Ritchie Catchment Graph - All Profiles
Table 5.I: Ritchie Catchment Data – All Profiles
Location Impact % Retention Rollout % Retention
Inner Berm Toe 81.3 51.3
Berm Crest 97.3 85.1
Outer Berm Toe 100.0 96.2
Bench Crest 100.0 98.3
The key values are the impact and rollout retentions at the berm crest, the location set by the
Ritchie recommended ditch width. At the berm crest, 97.3% of impacts and 85.1% of rollouts
would be retained on the 26.1 ft bench. The 85% result agrees with the results of ODOT’s study, but in
actuality, the rollout retention is likely higher because the test data were not slowed by the
presence of a berm during testing. Furthermore, the outer berm toe and bench crest rollout
retentions may be misleading because hitting the outer half of a berm may cause a rock to roll
farther than one landing on a flat surface.
The next profile group to be analyzed includes all profiles except the pronounced launch
feature profile, Profile 5 at Site 1. Figure 5.4 and Table 5.II contain the cumulative percentage
retained curves and specific data taken from the curves, respectively, for all profiles excluding
the launch feature profile from Site 1.
Figure 5.4: Ritchie Catchment Graph – All Profiles except Site 1 Profile 5
Table 5.II: Ritchie Catchment Data – All Profiles except Site 1 Profile 5
Location Impact % Retention Rollout % Retention
Inner Berm Toe 93.6 58.8
Berm Crest 99.0 90.3
Outer Berm Toe 100.0 96.6
Bench Crest 100.0 98.5
At the berm crest, 99.0% of impacts and 90.3% of rollouts will be retained within the berm crest.
Both values are higher retentions than when the launch feature profile is included in the analysis.
This result clearly indicates that launch features within a wall cause rocks to fall and roll farther
from the toe, which supports the results from other studies and rockfall observations.
Since geology has been shown to play a significant role in rockfalls, the data were then
separated by Site and analyzed. The granite quarry, Site 1, will be discussed first, and the
cumulative percentage curves and data table can be seen in Figure 5.6 and Table 5.IV,
respectively.
Figure 5.6: Ritchie Catchment Graph – Site 1, All Profiles except Profile 5
Table 5.IV: Ritchie Catchment Data – Site 1, All Profiles except Profile 5
Location Impact % Retention Rollout % Retention
Inner Berm Toe 94.8 72.2
Berm Crest 100.0 96.0
Outer Berm Toe 100.0 98.6
Bench Crest 100.0 100.0
At the berm crest, 100% of impacts and 96.0% of rollouts will be retained within the Ritchie
design width. By excluding the launch feature profile from this analysis, the true impact of the
launch feature can be seen. Without the launch feature tests, all test rocks fell within the berm crest, and the rollout
retention increased by 10%. Estimating the impact of a berm on the test data leads the author to
believe that all rocks would have been retained with Ritchie’s design in this case.
Site 2, the diabase quarry, will be discussed next. Figure 5.7 shows the cumulative
percentage curve, and Table 5.V shows the retention values from the key bench design locations.
Figure 5.7: Ritchie Catchment Graph – Site 2, All Profiles
Table 5.V: Ritchie Catchment Data – Site 2, All Profiles
Location Impact % Retention Rollout % Retention
Inner Berm Toe 92.8 49.7
Berm Crest 98.9 84.5
Outer Berm Toe 100.0 95.1
Bench Crest 100.0 97.5
At the berm crest, 98.9% of impacts and 84.5% of rollouts will be retained within the berm. The
higher impact retention and lower rollout retention indicate that the rocks from this site rolled
farther than the rocks at Site 1, by 2.7 ft on average. This result could be attributed to geology,
but observations from testing indicate that the rocks at Site 2 tended to hit the wall with
increased frequency. Especially when the rocks hit low on the wall, the contact imparted strong
angular momentum. The increased angular momentum is the suspected cause of the difference
between the impact and rollout retentions. Again, the rollout retention near 85% agrees with
ODOT’s findings but would likely be higher if a berm had been included in the test.
5.3 Modified Ritchie Criterion
5.3.1 Modified Ritchie Criterion Introduction
Applying the Ritchie design criteria directly to an open pit mining environment can be
difficult because Ritchie only tested a limited number of bench/slope configurations (Ryan &
Pryor, 2000). Therefore, Dr. Richard Call of Call & Nicholas, Inc. developed the Modified
Ritchie Criterion (Equation 5-1) for surface mining, seeking to optimize bench width in light of
the tradeoffs between safety and cost (Alejano, Pons, Bastante, Alonso, & Stockhausen, 2007).
W = 4.5 m + 0.2H (Equation 5-1)
where W is the bench width and H is the bench height, both in meters.
This equation is based on bench height, one of the most important criteria controlling how far a
rock will roll from the toe of the slope (Ryan & Pryor, 2000). An important note is that Equation
5-1 is written using meters. Most sources found by the author apply the equation this way,
including Call and Savely (1990) and Ryan and Pryor (2000). In contrast, one source lists the
equation using units of feet (Call, 1992). After comparing the sources and evaluating the data in
meters and feet, the author has decided to use the meters version, converted to feet, as this
version seems more appropriate.
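Applying the criterion in meters and converting the result to feet can be sketched as follows. The equation body used here, width (m) = 4.5 + 0.2 × bench height (m), is the commonly cited Modified Ritchie form and reproduces the 23.3 ft to 23.9 ft design widths reported for the 42.7 ft to 45.5 ft profiles:

```python
# Modified Ritchie bench width, applied in meters and converted back to feet.
# The coefficients (4.5 m constant, 0.2 slope) are the commonly cited form of
# the criterion; treat them as an assumption of this sketch.
FT_PER_M = 3.2808399

def modified_ritchie_width_ft(bench_height_ft):
    h_m = bench_height_ft / FT_PER_M            # convert height to meters
    w_m = 4.5 + 0.2 * h_m                       # width in meters
    return w_m * FT_PER_M                       # convert width back to feet

widths = [modified_ritchie_width_ft(h) for h in (42.7, 45.5)]
```

Working in meters and converting at the end matches the treatment the text describes for the sources that state the equation in metric units.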
The Modified Ritchie Criterion has become widely used as a design guide, as evidenced by its
inclusion in the SME Mine Engineering Handbook. This criterion has been designed for open pit
mining and is used in the industry; therefore, analysis of this method is warranted.
5.3.2 Modified Ritchie Criterion Methodology
To begin, the Modified Ritchie design width had to be found for each of the nine profiles
using Equation 5-1. The design widths range from 23.3 ft to 23.9 ft due to the changing bench
heights. The design width used for the discussed cases is the average of the widths of all profiles
included in the profile group. Because the Modified Ritchie Criterion is based on the Ritchie
criteria, the same berm height of five ft with a 37° angle of repose will be used to allow for an
equivalent comparison.
To understand how the Modified Ritchie bench design would have performed,
cumulative percentage retained curves were created for each profile. The curve shows the
percentage of rocks rolled off the profile that had impact or rollout distances less than or equal to
a certain distance from the toe of the wall. These curves were graphed for each profile along
with the bench design for comparison.
5.3.3 Modified Ritchie Criterion Results
The results of the comparison between the test data and the Modified Ritchie Criterion
are presented based on percentage of rocks retained within a certain distance from the toe. One
note to keep in mind is that the test data were collected on flat ground without a berm in place.
is assumed that a rock encountering the inner portion of the berm will only be slowed.
Therefore, the test data are assumed to have rolled out farther than if a berm had been in place
during testing. On the other hand, rocks that land on the outer portion of a berm will not be
slowed by the berm. With these assumptions, the berm crest is a key location for analysis.
All nine profiles were included in the first profile group that was analyzed. Figure 5.8
and Table 5.VI contain the cumulative percentage retained curves and specific data taken from
the curves, respectively, for all profiles.
Figure 5.8: Modified Ritchie Catchment Graph - All Profiles
Table 5.VI: Modified Ritchie Catchment Data - All Profiles
Location Impact % Retention Rollout % Retention
Inner Berm Toe 67.6 36.5
Berm Crest 94.8 79.0
Outer Berm Toe 99.5 93.2
Bench Crest 100.0 95.7
At the berm crest, 94.8% of impacts and 79.0% of rollouts will be retained. The berm crest is
15.0 ft from the toe of the wall which was back calculated from the Modified Ritchie bench
width of 23.65 ft. The percent retention of impacts is 0.5% less and the percent retention of
rollouts is 7.2% lower than the Ritchie design. This result shows that the Modified Ritchie
Criterion is less conservative. Again, the outer berm toe and bench crest rollout retentions may
be misleading because hitting the outer half of a berm may cause a rock to roll farther than one
on a flat surface.
The next profile group to be analyzed looked at all profiles except the pronounced launch
feature profile, Profile 5 at Site 1. Figure 5.9 and Table 5.VII contain the cumulative percentage
retained curves and specific data taken from the curves, respectively, for all profiles excluding
the launch feature profile from Site 1.
Figure 5.9: Modified Ritchie Catchment Graph – All Profiles except Site 1 Profile 5
Table 5.VII: Modified Ritchie Catchment Data – All Profiles except Site 1 Profile 5
Location Impact % Retention Rollout % Retention
Inner Berm Toe 78.9 42.8
Berm Crest 99.5 84.9
Outer Berm Toe 100.0 93.9
Bench Crest 100.0 96.0
At the berm crest, 99.5% of impacts and 84.9% of rollouts will be retained. The berm crest is
15.1 ft from the toe of the wall which was back calculated from the Modified Ritchie bench
width of 23.69 ft. The percent retention of impacts is 0.5% more and the percent retention of
rollouts is 5.4% less than the Ritchie design. The value of removing launch features from the
wall can be seen from the comparison to the results including Site 1 Profile 5 (Figure 5.8 and
Table 5.VI). A wall with fewer or no launch features, especially major ones, will not cause rocks
to roll as far from the toe.
The granite quarry, Site 1, will be discussed first within the site specific comparison, and
the cumulative percentage curves and data table can be seen in Figure 5.10 and Table 5.VIII,
respectively.
Figure 5.10: Modified Ritchie Catchment Graph – Site 1, All Profiles
Table 5.VIII: Modified Ritchie Catchment Data – Site 1, All Profiles
Location Impact % Retention Rollout % Retention
Inner Berm Toe 61.3 39.8
Berm Crest 90.5 80.6
Outer Berm Toe 99.0 96.3
Bench Crest 100.0 97.8
At the berm crest, 90.5% of impacts and 80.6% of rollouts will be retained within the Modified
Ritchie design of 15.1 ft from the toe for the berm crest. The design bench width is 23.71 ft.
While the rollout retention is similar to the data using all profiles, 1.6% higher, the impact
retention is 4.3% less. This most likely results from the launch feature profile trials making up a
larger share of the total trials once the Site 2 data are excluded. Compared
to the Ritchie results, the less conservative nature of this criterion is shown again because the
impact and rollout retentions are 4.8% and 5.6% less respectively. Next, Profiles 1-4 from Site 1
will be analyzed without the launch feature profile, Profile 5. Figure 5.11 and Table 5.IX show
the results of the analysis on this profile group.
Figure 5.11: Modified Ritchie Catchment Graph – Site 1, All Profiles except Profile 5
Table 5.IX: Modified Ritchie Catchment Data – Site 1, All Profiles except Profile 5
Location Impact % Retention Rollout % Retention
Inner Berm Toe 86.6 56.7
Berm Crest 100.0 95.2
Outer Berm Toe 100.0 99.0
Bench Crest 100.0 99.1
At the berm crest, 100% of impacts and 95.2% of rollouts will be retained within a berm crest
design distance of 15.2 ft and a bench design width of 23.8 ft. By excluding the launch feature
profile from this analysis, the true impact of the launch feature can be seen. Without the launch feature tests, all test
rocks fell within the berm crest, and the rollout retention increased by 14.6%. Estimating the
impact of a berm on the test data leads the author to believe that all rocks would have been
retained with this design in this case. Furthermore, these values compare closely with Ritchie’s
design criteria.
Site 2, the diabase quarry, will be discussed next. Figure 5.12 shows the cumulative
percentage curve, and Table 5.X shows the retention values from the key bench design locations.
5.4 Ryan and Pryor Criterion
5.4.1 Ryan and Pryor Criterion Introduction
After the Modified Ritchie Criterion was published, subsequent studies found the criterion to be
conservative (Ryan & Pryor, 2000). By starting with different factors, Ryan and Pryor
derived Equation 5-2.
W = 3.5 m + 0.17H (Equation 5-2)
where W is the bench width and H is the bench height, both in meters.
The Ryan and Pryor Criterion is proposed to be less conservative than the Modified Ritchie
Criterion developed by Call. Therefore, this criterion was analyzed against the test data from
this study as a comparison.
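Assuming the commonly cited Ryan and Pryor form, width (m) = 3.5 + 0.17 × bench height (m), which reproduces the 18.7 ft to 19.2 ft design widths reported in the methodology below, the computation mirrors the Modified Ritchie case:

```python
# Ryan and Pryor bench width, applied in meters and converted back to feet.
# The coefficients (3.5 m constant, 0.17 slope) are an assumption of this
# sketch, chosen because they reproduce the design widths quoted in the text.
FT_PER_M = 3.2808399

def ryan_pryor_width_ft(bench_height_ft):
    h_m = bench_height_ft / FT_PER_M            # convert height to meters
    return (3.5 + 0.17 * h_m) * FT_PER_M        # width in meters -> feet

widths = [ryan_pryor_width_ft(h) for h in (42.7, 45.5)]
```

The smaller constant and slope relative to the Modified Ritchie form are what make this criterion the less conservative of the two.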
5.4.2 Ryan and Pryor Criterion Methodology
To begin, the Ryan and Pryor design width had to be found for each of the nine profiles
using Equation 5-2. The design widths range from 18.7 ft to 19.2 ft due to the changing bench
heights. The design width used for the discussed cases is the average of the widths of all profiles
included in the profile group. Because the Ryan and Pryor Criterion is based on the Modified
Ritchie Criterion, the same berm height of five ft with a 37° angle of repose will be used to allow
for equivalent comparison.
Just like the Ritchie and Modified Ritchie designs, the Ryan and Pryor Criterion has been
evaluated using cumulative percentage retained curves created for each profile. The curve
shows the percentage of rocks rolled off the profile with impact and rollout distances less than or
equal to a certain distance from the toe of the wall. These curves were graphed for each profile
along with the bench design for comparison.
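The cumulative percentage retained calculation described above can be sketched in a few lines. This is a minimal Python illustration with hypothetical rollout distances, not the actual test data:

```python
import numpy as np

def percent_retained(distances, x):
    """Percentage of rocks whose distance from the toe is <= x ft."""
    d = np.sort(np.asarray(distances, dtype=float))
    return 100.0 * np.searchsorted(d, x, side="right") / len(d)

# Hypothetical rollout distances (ft) for one profile
rollouts = [4.2, 6.8, 7.5, 9.1, 10.4, 12.0, 14.7, 15.2, 18.9, 21.3]

# Evaluate the curve at a candidate berm-crest distance of 15.2 ft
print(percent_retained(rollouts, 15.2))  # 80.0
```

Evaluating this function over a range of distances traces out the cumulative curve that is then compared against the design width.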
5.4.3 Ryan and Pryor Criterion Results
The results of the comparison between the test data and the Ryan and Pryor Criterion are
presented based on percentage of rocks retained within a certain distance from the toe. One note
to keep in mind is that the test data was collected on flat ground without a berm in place. It is
assumed that a rock encountering the inner portion of the berm will only be slowed. Therefore,
the test data are assumed to have rolled out farther than if a berm had been in place during
testing. On the other hand, rocks that land on the outer portion of a berm will not be slowed by
the berm. With these assumptions, the berm crest is a key location for analysis.
All nine profiles were included in the first profile group that was analyzed. Figure 5.13
and Table 5.XI contain the cumulative percentage retained curves and specific data taken from
the curves, respectively, for all profiles.
Figure 5.13: Ryan and Pryor Catchment Graph - All Profiles
Table 5.XI: Ryan and Pryor Catchment Data - All Profiles
Location Impact % Retention Rollout % Retention
Inner Berm Toe 22.1 5.7
Berm Crest 80.3 49.9
Outer Berm Toe 97.4 84.7
Bench Crest 97.9 88.2
The full bench width using the Ryan and Pryor criterion is 19.04 ft. At the berm crest of 10.4 ft
from the toe, the impact retention is 80.3%, and the rollout retention is 49.9%. These values are
17.0% and 35.2% lower, respectively, than the equivalent values calculated using the Ritchie and
Modified Ritchie Criteria. Only keeping one out of two rocks on the bench is also not acceptable
for a quarry design, but the presence of a berm would serve to increase the rollout retention.
The next profile group to be analyzed includes all profiles except Profile 5 from Site 1.
The curves can be seen in Figure 5.14 and the key data are found in Table 5.XII.
Figure 5.14: Ryan and Pryor Catchment Graph – All Profiles except Site 1 Profile 5
Table 5.XII: Ryan and Pryor Catchment Data – All Profiles except Site 1 Profile 5
Location Impact % Retention Rollout % Retention
Inner Berm Toe 25.9 6.7
Berm Crest 92.7 57.7
Outer Berm Toe 99.7 88.8
Bench Crest 99.9 90.9
Removing Profile 5 changes the bench design width to 19.07 ft, but the berm crest remains at
10.4 ft. The berm crest impact retention increases to 92.7%, and the rollout retention increases to
57.7% compared to the all-profile values. Even without the launch feature profile in the data set,
the retention percentage is still not acceptable for a quarry bench design.
The data has also been analyzed by site. Taking Site 1, Profiles 1 through 5 have been
analyzed first. Figure 5.15 and Table 5.XIII contain the results of this profile group.
Figure 5.17: Ryan and Pryor Catchment Graph – Site 2, All Profiles
Table 5.XV: Ryan and Pryor Catchment Data – Site 2, All Profiles
Location Impact % Retention Rollout % Retention
Inner Berm Toe 20.9 3.9
Berm Crest 91.4 48.3
Outer Berm Toe 99.5 83.6
Bench Crest 99.8 86.2
For this profile group, the bench width is 20.50 ft, and the berm crest is 10.3 ft from the toe.
While the impact retention of 91.4% at the berm crest remains high, the rollout retention of
48.3% is 23.1% lower than the retention percentage from Site 1, Profiles 1-4. This difference is
most likely due to the increased wall contact observed during the testing at Site 2. Again, the
rollout retention percentage near 50% is not acceptable for quarry design.
5.5 Oregon Department of Transportation Design Guide
5.5.1 Oregon Department of Transportation Design Guide Introduction
In 2001, the Oregon Department of Transportation (ODOT) published a report detailing
the results of a research study performed from 1997-2001. This report summarized the data of
extensive rockfall testing into user friendly design charts for practitioners in the area of highway
slope design. ODOT recognized a lack of consistent slope design practices within government
agencies throughout the country (Pierson, Gullixson, & Chassie, 2001). The organization looked
to fill in the research gaps in the area of slope design to improve safety, reduce costs, and aid
practitioners. Therefore, ODOT led a project funded by seven state DOTs and the Federal
Highway Administration to provide the research data needed to meet these goals (Pierson,
Gullixson, & Chassie, 2001).
This project consisted of many simulated rockfalls with the data being compiled into user
friendly design charts. Over the five years, 11,250 rocks were rolled in sets of 250 from slopes
with heights of 40, 60, and 80 ft and angles of 45°, 53.1°, 63.4°, 76.0°, and 90°. Additionally,
three catchment area configurations were used in the trials: flat, 6H:1V, and 4H:1V (Pierson,
Gullixson, & Chassie, 2001). The size of the rocks ranged from 1-3 ft in diameter. For each
rockfall trial, a rock was rolled from the crest of the wall, and the rock size, wall height, wall
angle, catchment slope, impact distance, and rollout distance were recorded (see Figure 3.6 for a
schematic of impact and rollout distances). Once all the slope and catchment configurations had been tested,
ODOT compared the results to Ritchie’s criteria and RocFall computer simulations, and created
design charts for use in designing new slopes and evaluating current ones. Furthermore, the
report included qualitative observations of the rockfalls.
Similarly to Ritchie’s work, the ODOT study is geared toward highway slope design, but the
application of this research to a quarry highwall is a logical step to take. Highway slope designers
and mine slope designers share the goal of keeping people and equipment safe from falling rocks
in the most cost effective manner. Therefore, the testing performed for this project will be
compared with the results included in the ODOT report.
5.5.2 Oregon Department of Transportation Design Guide Methodology
A different approach was taken to analyze the ODOT report data than the criteria based
on Ritchie’s design. The cumulative percent retained curves are still used, but because the
testing performed for this report is similar to the testing performed by ODOT, the data can be
directly compared. The impact and rollout data for each slope/catchment configuration is included in
the ODOT report, but the ODOT data had to be interpolated for comparison to the test data
performed at the two sites in this study. The average wall slope for the nine profiles tested is
5.5.3 Oregon Department of Transportation Design Guide Results
The results of the ODOT study and this research have been analyzed by comparing the
cumulative percentage retained curves for rockfall impacts and rollouts. This data can be
accurately compared because the slope/catchment configurations for the two tests are equal on
average. Unlike the Ritchie based criteria, the ODOT results can be more accurately interpreted
because no berm was used during the trials. One assumption is that the addition of a berm will
only prevent rollouts. The results of the testing comparison can be seen in Figure 5.19.
Figure 5.19: ODOT/Luck Stone Retention Comparison Graph
A cursory look at the graph shows that the ODOT rockfalls impacted closer to the toe than the
Luck Stone testing by approximately 44%. This result makes sense due to the differences in the
blasting techniques used in the creation of the test walls. The ODOT study used presplit blasting
to create a smooth wall, whereas, the Luck Stone walls were blasted without any type of smooth
blasting technique. A normal production wall in a quarry will certainly contain more launch
features than a presplit face. These launch features will cause rocks to land farther from the toe,
as evidenced by the impact retention curves. The rollout retention curves are more closely
aligned, with the ODOT data showing a wider distribution. The curves cross at the value of
approximately 14.6 ft. The wider distribution could be attributed to a number of causes such as
rock shape, toe compaction, and number of launch features. Without more detailed information
about the ODOT testing conditions, pinpointing the exact cause of the discrepancy is not
possible.
In order to better compare the curves, the retention percentages at the aforementioned key
design locations have been calculated from the curves. Table 5.XVI holds this data.
Table 5.XVI: ODOT/Luck Stone Retention Comparison Data
Cumulative % Retained Footage
Parameter 75 80 85 90 95
Test Impact 9.4 10.3 11.8 13.5 15.1
Test Rollout 14.5 15.3 17.3 19.8 23.2
ODOT Impact 5.2 5.9 6.5 7.2 9.1
ODOT Rollout 14.2 17.0 20.1 25.6 34.4
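Reading the distance that corresponds to a given retention percentage off a cumulative curve amounts to linear interpolation along the empirical distribution. A sketch with hypothetical distances, not the published values:

```python
import numpy as np

def distance_at_retention(distances, target_pct):
    """Distance from the toe (ft) at which target_pct of rocks are retained,
    linearly interpolated along the empirical cumulative curve."""
    d = np.sort(np.asarray(distances, dtype=float))
    pct = 100.0 * np.arange(1, len(d) + 1) / len(d)
    return float(np.interp(target_pct, pct, d))

# Hypothetical rollout distances (ft)
rollouts = [3.0, 5.5, 7.2, 9.0, 10.1, 11.8, 13.4, 15.0, 17.6, 22.0]
for p in (75, 80, 85, 90, 95):
    print(p, round(distance_at_retention(rollouts, p), 2))
```

The five percentages used here match the comparison levels in the tables above.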
By simply subtracting the ODOT data from the Luck Stone data, the difference between the two
(See Table 5.XVII), as well as the percentage difference can be calculated (See Table 5.XVIII).
Negative values indicate that the ODOT data is farther from the toe than the Luck Stone data.
Table 5.XVII: ODOT/Luck Stone Retention Data Footage Difference
Cumulative % Retained Footage (ft)
Parameter 75 80 85 90 95
Impact 4.2 4.4 5.3 6.3 6.0
Rollout 0.3 -1.7 -2.8 -5.8 -11.2
Table 5.XVIII: ODOT/Luck Stone Retention Data Percentage Difference
Cumulative % Retained Footage (ft)
Parameter 75 80 85 90 95
Impact 44.8% 42.9% 44.9% 46.5% 39.7%
Rollout 2.4% -10.8% -16.1% -29.0% -48.4%
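The arithmetic behind Tables 5.XVII and 5.XVIII reduces to elementwise differences. The sketch below reproduces the impact-row differences from the values in Table 5.XVI; percentages computed from the rounded table entries may differ slightly from the published figures, which were presumably derived from unrounded data:

```python
# Impact footage at 75/80/85/90/95% retained, from Table 5.XVI
test_impact = [9.4, 10.3, 11.8, 13.5, 15.1]
odot_impact = [5.2, 5.9, 6.5, 7.2, 9.1]

# Footage difference (Table 5.XVII) and percentage difference relative to the test data
diff = [round(t - o, 1) for t, o in zip(test_impact, odot_impact)]
pct_diff = [round(100 * (t - o) / t, 1) for t, o in zip(test_impact, odot_impact)]
print(diff)  # [4.2, 4.4, 5.3, 6.3, 6.0]
print(pct_diff)
```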
The percentage difference shows that the impact distances for the Luck Stone data are 39.7%-
46.5% greater than the ODOT data, resulting from smoother walls. On the rollout side, the Luck
Stone data starts out higher but quickly falls below the ODOT data. Excluding
the launch feature tests does not improve the rollout retention data dramatically either (See Table
5.XIX).
Table 5.XIX: Retention Data Percentage Difference w/o Site 1 Profile 5
Cumulative % Retained Footage (ft)
Parameter 75 80 85 90 95
Impact 34.7% 31.4% 29.1% 25.9% 20.3%
Rollout -6.3% -18.6% -32.8% -40.7% -49.3%
While the discrepancy in the impact retention improves, the rollout retention difference
becomes larger. Analysis of these results shows that a direct comparison between the two data
sets cannot be made. The conditions of the walls are too different. The additional launch
features of a quarry wall work to increase the impact distance by projecting rocks farther from
the toe. Furthermore, the smooth wall appears to cause farther rock rollouts, which could be due
to numerous factors.
5.6 RocFall Computer Simulation
5.6.1 RocFall Computer Simulation Introduction
As with many issues in the mining industry, the application of computer technology has
improved understanding. For the issue of safety bench design, numerous computer programs
have been developed, including Colorado Rockfall Simulation Program (CRSP), RocFall, and
STONE. For this project, RocFall v4.503, produced by Rocscience, Inc. located in Ontario,
Canada, was chosen. This choice came from the program’s widespread use, as evidenced by
previous research, ease of operation, and use by Luck Stone Corp.
RocFall simulates rock trajectories over a wall profile in two dimensions. The lumped-
mass method is used in the calculations, meaning rock shape and volume are not considered and
the rock mass is located at a single point (Alejano, Pons, Bastante, Alonso, & Stockhausen,
2007). Therefore, shape and size must be accounted for by adjustment of the input parameters in
the program (Rocscience Inc., 2003). To begin, the user either draws a slope profile in the
program or imports one from an outside source. A rockfall starting location(s) called a seeder is
then placed at the desired location on the slope. Once the various rock and slope parameters are
chosen, RocFall will simulate rockfalls for the desired repetitions in the style of a Monte Carlo
simulation. The values of rock energy, velocity, and, most importantly, end point are recorded
and can be graphed.
Because RocFall was designed for simulating rockfalls and uses rollout distance as a
main output, this program is an obvious choice for comparison to the data gathered for this
project.
5.6.2 Simulation Methodology
For this project, RocFall is being used to extend the actual rockfall trials performed. The
limits of time and money prevent performing the number of trials on the scale that RocFall can
simulate. Once the parameters of the simulation have been selected to accurately reflect the
properties of the slopes, simulations can be performed for various slope and bench
configurations. Analysis of the simulation’s results will then allow selection of the most
appropriate bench configuration.
To begin, each profile was prepared for simulation. The nine drop point profiles taken
using the laser profiler were brought into RocFall. Three materials were then created using the
program’s Material Editor to represent the three surfaces a falling rock could encounter during
the testing: Rock, Talus, and Floor. Rock represents the intact hard rock that makes up the
highwall. From observations, the wall rock appeared competent and unweathered with some
build up of small size material on the horizontal surfaces. Talus represents material which was
piled up against the toe of the wall, a remnant of the previous blast pile. The talus material
tended to be uncompacted with a large size range; blocks over six inches in size down to fine
dust comprised the talus. Floor represents the material from which the floor was created. This
material was gravel-like and appeared to have a smaller size range from visual inspection.
Additionally, the floor material tended to be less compacted near the toe with compaction
increasing as distance from the toe increased, a trend most likely caused by increased equipment
traffic farther from the toe. Figure 5.20 shows an example profile from RocFall with the Rock in
blue, Talus in red, and Floor in green.
Figure 5.20: Example RocFall Profile
Once the profiles were brought into the program, the material parameters were input for
each material type. RocFall allows the user to specify seven material parameters: the mean and
standard deviation for the coefficient of normal restitution (Rn), the mean and standard deviation
for the coefficient of tangential restitution (Rt), the mean and standard deviation for the friction
angle (Φ), and the standard deviation of slope roughness. Rn and Rt are between zero and one
and represent the normal and tangential components of surface elasticity, essentially how much
energy is absorbed or given back to an object upon collision. A value of one represents a
perfectly elastic collision. Φ is the critical angle which determines if an object will continue to
slide or come to rest on the surface (Rocscience Inc., 2003). Finally, slope roughness in RocFall
is calculated as a normal distribution based on the entered standard deviation. During the
simulation, this distribution is used to alter the initial angle of each surface to model roughness
(Rocscience Inc., 2003). The initial values for each of these parameters were chosen based on
site geology, material type, parameters used in previous Luck Stone Corp. simulations, and
recommendations provided by Rocscience Inc.
In addition to the material parameters, RocFall allows the user to select other simulation
options. The options of considering angular velocity and scaling Rn by velocity were chosen.
Considering angular velocity accurately models the testing because the rocks clearly spun during
the trials, and choosing this option is recommended by Rocscience Inc. (Rocscience Inc., 2003).
Scaling Rn by velocity was chosen based on research showing that the characteristics of impact
change as velocity increases. At low velocity, the impact tends to be much more elastic than a
high velocity impact which is less elastic due to more rock fracturing and cratering of the impact
surface (Pfeiffer & Bowen, 1989). Additionally, this choice was supported by more accurate
simulations. Within the option of scaling Rn by velocity, the K factor, a constant used in the
calculation of the Rn scaling factor, was also selected. Seeder settings were chosen next. The
horizontal velocity was set to 0.1 ft/s to model the slight outward velocity given to the rock by
the excavator. Each seeder mean mass and standard deviation was set to the average
approximate rock weight for the rocks dropped over each profile, calculated from the measured
rock size. For example, the volume of all rocks dropped over Profile 1 at Site 1 was calculated
along with the volumetric standard deviation. The average rock volume was then found and
multiplied by the rock density to yield average rock mass for Profile 1. Table 5.XX shows the
mean mass and standard deviations used for each profile.
Table 5.XX: RocFall Seeder Mass Inputs
Site Profile Mass (lb) Standard Deviation (lb)
1 1 220 18
1 2 461 63
1 3 470 63
1 4 460 26
1 5 298 25
2 1 353 44
2 2 476 48
2 3 338 51
2 4 426 56
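The seeder mass calculation can be sketched as follows. The rock dimensions, the rectangular-prism volume approximation, and the 165 lb/ft³ density are illustrative assumptions, not values from the study:

```python
import statistics

DENSITY = 165.0  # lb/ft^3, a nominal hard-rock density (assumed for illustration)

# Hypothetical measured rock dimensions (length, width, height in ft) for one profile
rocks = [(2.0, 1.5, 1.2), (2.4, 1.8, 1.5), (1.8, 1.4, 1.1)]

# Rectangular-prism approximation of each rock's volume, then mass
volumes = [l * w * h for l, w, h in rocks]
masses = [v * DENSITY for v in volumes]

mean_mass = statistics.mean(masses)    # seeder mean mass input
stdev_mass = statistics.stdev(masses)  # seeder standard deviation input
print(round(mean_mass), round(stdev_mass))
```

Repeating this per profile yields the mean and standard deviation inputs tabulated above.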
Once all of these inputs were chosen, a sensitivity analysis was performed to help
understand the impact of each parameter on the results of the simulations. The parameters
included in the sensitivity analysis are Rn, standard deviation of Rn, Rt, standard deviation of Rt,
Φ, standard deviation of Φ, standard deviation of slope roughness, seeder rock mass, standard
deviation of seeder rock mass, and the K factor. Each parameter was individually adjusted up to
30% more and less than the original value. An example graph displaying the results can be seen
in Figure 5.21 for Site 2, Profile 1, at the distance from the toe where 85% of the rocks were
retained on the bench.
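The one-at-a-time sweep described above can be sketched generically. Here `run_simulation` is a hypothetical stand-in for a RocFall run returning the 85% retention distance; its coefficients are invented to mimic the observed correlations and carry no physical meaning:

```python
def run_simulation(params):
    # Hypothetical stand-in for a single RocFall run: returns the distance
    # from the toe (ft) at which 85% of simulated rocks are retained.
    # Invented coefficients: positive for the restitution coefficients,
    # negative for the friction angle, mirroring the sensitivity results.
    return (10.0
            + 8.0 * params["rt_mean"]
            + 5.0 * params["rn_mean"]
            - 0.05 * params["phi_mean"])

base = {"rn_mean": 0.45, "rt_mean": 0.80, "phi_mean": 30.0}

# One-at-a-time sweep: adjust each parameter from -30% to +30% of its base value
for name in base:
    for factor in (0.7, 0.85, 1.0, 1.15, 1.3):
        trial = dict(base)
        trial[name] = base[name] * factor
        print(f"{name} x{factor:.2f}: {run_simulation(trial):.2f} ft")
```

Plotting the sweep output against the percentage change reproduces a graph of the kind shown in Figure 5.21.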
Figure 5.21: Example Sensitivity Analysis Graph
This graph shows that Rt has the greatest impact on the results of the simulation, with Rn and the
K factor close to tied for second most influential. Additionally, the restitution coefficients and K
factor show a positive correlation with distance from the toe, whereas Φ, the fourth most
influential, displays a negative correlation. The dip in Rn shown at the 30% increase may be due
to scaling Rn by velocity because this trend was found upon repeated trials run to check for error.
The other parameters show minimal impact on the results, even with large changes.
The next step in this analysis was to calibrate the simulations to the actual test results.
This step ensures that the simulations will accurately model the actual testing. For this project,
calibration was performed using cumulative percent retained curves for the real test data. For
each of the nine profiles, a curve was created using the measured parameter, rollout distance,
which matches the predominant data output from RocFall. The curve, an example of which can
be seen in Figure 5.22, shows the percentage of rocks rolled off the profile that had rollout
distances less than or equal to a certain distance from the toe of the wall.
The RocFall curve is smoother than the real test data curve because 1000 trials were used in the
simulations. To calibrate the parameters, the two curves were compared at five points: 75%,
80%, 85%, 90%, and 95% retention using distance from the toe in feet as the variable. This
method compares the curves at key design locations and mimics a procedure found in similar
research by Alejano et al. (2007). This process was repeated for each profile, and the differences
between the curves, or error, were found. The overall goal of the calibration is to match the real
test data curve and the RocFall data curve as closely as possible. Keeping the results of the
sensitivity analysis in mind, the input parameters were repeatedly adjusted to minimize the total
error for each site as well as the individual error for each comparison location. The parameters
which yielded the minimum error are shown in Table 5.XXI.
Table 5.XXI: RocFall Input Parameters
Site 1 Site 2
Parameter
Rock Talus Floor Rock Talus Floor
Rn: Mean 0.45 0.15 0.35 0.52 0.29 0.34
Rn: St Dev 0.10 0.05 0.05 0.05 0.07 0.07
Rt: Mean 0.80 0.40 0.70 0.97 0.69 0.73
Rt: St Dev 0.10 0.05 0.05 0.05 0.07 0.07
Φ: Mean (°) 30 50 35 30 50 35
Φ: St Dev (°) 2 2 2 2 2 2
Roughness: St Dev (°) 4 10 5 2 10 5
K Factor 30 30
Sites 1 and 2 are granite and diabase, respectively, and the geologic differences between these
two rock types warrant using two sets of parameters.
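The calibration comparison can be sketched as follows, with hypothetical rollout distances standing in for the field data and one RocFall parameter set:

```python
import numpy as np

LEVELS = (75, 80, 85, 90, 95)  # comparison locations (% retained)

def dist_at(distances, pct_levels):
    """Distances from the toe at the given retention levels, by interpolation."""
    d = np.sort(np.asarray(distances, dtype=float))
    pct = 100.0 * np.arange(1, len(d) + 1) / len(d)
    return np.interp(pct_levels, pct, d)

# Hypothetical rollout distances: field test vs. one RocFall parameter set
field = [3.1, 5.0, 6.8, 8.3, 9.9, 11.4, 13.0, 14.8, 17.1, 21.0]
rocfall = [2.8, 4.9, 7.0, 8.6, 10.2, 11.9, 13.6, 15.5, 18.0, 22.4]

# Error at each comparison location; the field data is the baseline, and a
# negative value means the RocFall curve lies farther from the toe
errors = dist_at(field, LEVELS) - dist_at(rocfall, LEVELS)
total_error = float(np.sum(np.abs(errors)))
print(errors, round(total_error, 2))
```

In the calibration, the input parameters would be adjusted and this comparison repeated until the total error is minimized.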
In the calculation of error, the testing results were considered the baseline. Equation 5-3
shows the basic calculation of the difference.
Error = (Test Distance from Toe) – (RocFall Distance from Toe) Equation 5-3
RocFall data farther from the toe than the test data yield negative values, and data closer to the
toe yield positive values. Table 5.XXII and Table 5.XXIII contain the distance from the toe of
each comparison location for each profile, divided by site, found during testing.
than the actual test data over a total of 356.40 ft for a percentage error of -1.63%. Again, the
negative value reflects that the RocFall simulations project the rocks to be farther from the toe
than the actual test results, which reflects a more conservative approach to the simulations.
The distributions of the RocFall curves were also compared to the distribution curves from
the actual test data. These curves were only compared using rollout distance because RocFall
does not output data on impact distance. Figure 5.24 contains the cumulative lognormal
distribution curves for the actual test data and the RocFall simulation data.
Figure 5.24: Cumulative Distribution Comparison
As the graph shows, the distribution curves are very closely aligned. The average difference
between the curves is 4.56%. This small difference again supports the choice of the input
parameters and the accuracy of the RocFall simulations.
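The distribution check can be sketched by fitting a lognormal to each data set on the log scale and averaging the absolute CDF difference over a grid. The data below is hypothetical, and the moment-based fitting approach is an assumption, as the fitting method is not restated in this section:

```python
import math
import numpy as np

def lognormal_cdf(x, data):
    """CDF of a lognormal fitted to `data` via mean/std of the log-values."""
    logs = np.log(np.asarray(data, dtype=float))
    mu, sigma = logs.mean(), logs.std(ddof=1)
    z = (np.log(np.asarray(x, dtype=float)) - mu) / (sigma * math.sqrt(2))
    return 0.5 * (1.0 + np.vectorize(math.erf)(z))

# Hypothetical rollout distances (ft)
field = [3.1, 5.0, 6.8, 8.3, 9.9, 11.4, 13.0, 14.8, 17.1, 21.0]
rocfall = [2.8, 4.9, 7.0, 8.6, 10.2, 11.9, 13.6, 15.5, 18.0, 22.4]

# Average absolute difference between the two fitted CDFs, in percent
grid = np.linspace(2.0, 25.0, 100)
avg_diff = 100.0 * np.mean(np.abs(lognormal_cdf(grid, field) - lognormal_cdf(grid, rocfall)))
print(f"average CDF difference: {avg_diff:.2f}%")
```

A small average difference, as reported above, supports the chosen input parameters.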
The error cannot be brought to zero for a number of reasons. First, RocFall is only a 2-D
program, and each rock rolls down the linear profile. Observations from the testing show that
the rocks did move laterally as they fell in many trials, which RocFall cannot model. The profile
might leave out a launch feature that rocks did hit or include one that they did not. Second,
RocFall cannot accurately model rock size and shape, which leads to inherent error. Third, the
size range of the Talus material piled at the toe of the wall was very large. Any impact on a large
rock in the Talus would lead to a drastically different result than a rock hitting minus 1 in.
material. Modeling this wide range of material characteristics in RocFall is difficult. The
Figure 5.28: Site 2 Design Charts
The charts show the difference from Site 1. At smaller widths, the rockfall percentage retained is
much more consistent than at Site 1, but the opposite is true as bench width increases. This
discrepancy may stem from the smaller volume of talus at the toe for the Site 2 profiles.
Additionally, the larger berm was found to stop more rocks at Site 2 than Site 1. This result is
attributed to the 2 ft greater impact distance for Site 1. The 3 ft berm’s crest is farther from the
toe (Figure 5.26) than the 5 ft berm’s, which enables it to retain rocks with larger impact distances.
In addition to the design charts, general observations about the impact of the bench width,
berm height, and toe condition can be made. With regard to bench width, the trend is clear:
larger benches retain a higher percentage of rockfalls. For example, 100% of all simulated
rockfalls were retained on the 35 ft bench. The most significant increase in percentage retained
shown in all the charts comes between 15 ft and 20 ft. The height of the berm yielded
contradictory results between Sites 1 & 2. For Site 1, the 3 ft berm retained a higher percentage
of rockfalls than the 4 ft and 5 ft berms for the same bench widths and toe conditions, but the
opposite is true for the Site 2 simulations. This discrepancy is due to the different crest locations
of the 3 and 5 ft berms and the higher (2 ft) impact distance at Site 1. At both sites though, the 4
ft berm appears to be the best compromise between rockfall stopping power and berm size.
Finally, two toe conditions were tested: with talus and without talus. As previously described,
the talus material is rock remaining from the previous blast which has been
left in a pile against the toe. One simulation was performed with the talus in place, just like the
actual testing, and another simulation was run after removing the talus from the profile, as shown
in Figure 5.29.
Figure 5.29: Talus vs. No Talus
The most significant impact of the talus material on the rockfall retention percentage appears
only at low retention percentages. Figure 5.30 shows an example graph from Site 2 showing the
difference between the talus and no talus simulations.
6 Conclusions and Recommendations
Studying the results of the previously described analyses allows a number of conclusions
to be drawn. Several key insights about rockfalls can be seen in the data. From this new
understanding, recommendations can be given regarding rockfall analysis and safety bench
design.
6.1 Spatial Data Analysis
After performing the Principal Components Analysis (PCA) and Cluster Analysis, a
number of conclusions can be drawn supported by the techniques and field observations. First,
both PCA and Cluster Analysis are valid techniques for evaluating rockfall data. The similarity
of the PCA and Cluster Analysis results to field observations supports this statement. In future
studies, PCA and Cluster Analysis can be used to better understand the underlying factors and
structure influencing rockfalls.
From PCA, wall configuration, rock dimensions, and rock energy were found to be the
underlying factors which control the majority of the variance in rockfall impact and rollout.
Therefore, these factors need to be accounted for in any wall evaluation method that is used in
order to accurately predict the rockfall potential and risk of a wall. The author recommends that
the following criteria be included in any wall evaluation method, at minimum:
1) Slope Angle (Wall Configuration)
2) Slope Height (Wall Configuration, Rock Energy)
3) Launch Features (Wall Configuration, Rock Energy)
4) Block Size (Rock Dimensions)
Cluster Analysis was performed in order to better understand the structure of the rockfall
data, and this goal has been met. Launch features are found to have a dramatic effect on a
rockfall. The tightly-grouped clusters from Site 1 and the launch feature test prove this
statement. Therefore, launch features must be carefully studied and/or remediated before safety
bench construction. These launch features result in larger impact distances which may cause
rocks to fly over the berm, if the berm is placed too close. Secondly, the distinction between
Clusters 1 and 2 indicates that larger rocks (2-3.5 ft) tended to impact and roll out farther by 1.1 ft
and 3.1 ft, respectively. This increase is most likely due to the increased energy of the larger
rock because of the additional mass. To prevent these larger rocks from rolling off the bench, a
berm of at least 4 ft is recommended.
On a procedural note, Cluster Analysis using the uncorrelated PCA scores was found to
yield more significant results than simply using the measured variables. Therefore, cluster
analysis using the uncorrelated PCA scores is recommended for future analyses.
6.2 Alternative Design Methodology
The results of the analysis of the Ritchie Criteria, Modified Ritchie Criterion, Ryan and
Pryor Criterion, ODOT design charts, and RocFall simulations have led the author to a number
of conclusions and recommendations.
RocFall simulation with associated site-specific rockfall testing is the
recommended method for the design of safety benches in quarries. The wall-specific nature of this
method yields a design well suited for the target wall. The ability to simulate thousands of trials
using material data tailored for each wall is important for accurate design without wasting time
or money actually conducting those rockfall tests. The one drawback is that some site-specific
rockfall testing (1-3 days) such as the testing performed for this project must be performed at
each site in order to yield the most realistic results, but this cost is worth the reward. Once this
testing has been performed, the slope designer can accurately model any wall to achieve the
optimum design. RocFall simulations also allow the designer to design for multiple benches,
which is important in a quarry. Therefore, the procedure described in section 5.6 RocFall
Computer Simulation is recommended as the best design practice.
Specifically for walls of similar angle and height to the ones tested for this project, the
author recommends a safety bench width of 20 ft from the toe and a 4 ft berm on the bench crest
with 2-3 feet between the toe of the berm and the crest. From the design charts (Figure 5.25 and
Figure 5.28), this bench design will retain greater than 97% of all rockfalls.
With regard to the lognormal distribution data, the equations can be used as a quick
design reference and an accuracy check for RocFall simulations. The actual test data distribution
should be compared to the RocFall distribution to validate the selection of the input parameters
and ensure accurate simulations. Furthermore, the actual distribution data can be used to justify
a design to regulatory agencies.
Recommendations can also be made for the other methods evaluated. Overall, the
Ritchie design criteria for highway slope design can be applied to quarry bench design. In the
studied cases, the bench width and berm design would retain a high percentage of impacts
(>95%) while preventing a high percentage of rocks (>85%) from escaping the berm, especially for
walls without prominent launch features. Depending on conditions, however, Ritchie’s criteria may be
too conservative, as shown here and suggested by other research. The Modified Ritchie design
criterion is also applicable to quarry bench design. In the studied cases, the bench width and
berm design would retain a high percentage of impacts (>90%) while preventing a large
percentage of rocks (>77%) from escaping the berm. This criterion is also less conservative than the
Ritchie design criteria by approximately 5%-8% in rollout retention, which may be beneficial
in certain situations. The Ryan and Prior Criterion is much less conservative than the Ritchie or
Modified Ritchie Criteria. By examining the critical location of the berm crest, the author
suggests that this criterion is far too aggressive for quarry bench design; in some cases, for
example, the design would retain only one of every two rockfall impacts.
The only caveat for these conclusions comes from the impact that a berm would have on
the rollout retention. Since no berm was used during the testing, these criteria cannot be
definitively evaluated until more research is performed. The inclusion of a berm during testing
would only serve to increase the rollout retention. For example, the inclusion of a berm in the
RocFall simulations increased the retention by at least 5%.
With regard to the ODOT design charts, the author proposes that the differences in the
blasting practices during the creation of the wall prevent direct use of the charts for a quarry
wall. For impact distance, the Luck Stone test data appears to be approximately 44% greater
than the ODOT data for each location examined. For rollout distance, the ODOT data gradually
increase relative to the Luck Stone data as the retention percentage increases. Therefore, the
relationship between the ODOT and Luck Stone data is difficult to characterize. Additionally,
extrapolating this relationship to walls of different angles and heights would lead to inherent
error. Thus, the author recommends testing quarry walls with slope angles and heights
equivalent to the ODOT tests before attempting to apply the design charts to quarry bench
design.
6.3 General Design Practices
Through the testing and analyses, the author has observed trends which correlate with
best design practices for a bench of any width. These practices will help to prevent rockfalls
from occurring, impacting and rolling as far, and becoming hazardous.
With regard to quarry bench design, the author recommends the following best practices.
First, a berm on the end of the bench is a must. This berm should be approximately 4 ft high.
This height will prevent rocks from rolling over the berm, provide more catchment volume, and keep
the berm crest from being too close to the wall. The berm should be located 2-3 ft away
from the bench crest, which will prevent rocks from rolling over the berm and falling over the wall.
Furthermore, covering the crest side of the berm with fine material will reduce the chance of a
berm rock falling and hurting someone. This berm location will also build in a cushion
in case back break from blasting is more than expected. Second, the talus material at the toe
should be cleaned up as much as possible. Although the RocFall simulations show that talus
does not significantly impact rockfall results at high retention percentages, removing the talus
removes a main mechanism for converting fall energy into rolling energy. Additionally, the talus
material will naturally compact over time, which will tend to increase the rollouts of rocks hitting
the talus material. In place of the talus, evaluation of the lognormal distributions of the impact
and rollout distances suggest that placing loosely compacted, well draining material in a flat,
level layer on the bench 0 to 10 ft from the toe will reduce rollouts.
As discussed, blasting has a significant impact on the condition of a wall. Therefore, the
use of some form of controlled blasting is recommended for final highwalls. The most effective
type of controlled blasting must be determined on a wall-to-wall basis, but reducing the blast
damage on the wall will greatly reduce the potential for rockfall.
The results of the analyses show that prominent launch features increase the impact and
rollout distances of falling rocks by approximately 8 to 9 ft. For final walls, profiling the wall
with a laser profiler to look for launch features is suggested. If found, the feature should be
removed or the bench should be designed to account for the increased impact and rollout
distances.
CHAPTER 1 – INTRODUCTION
Each year, dozens of accidents and several fatalities occur in large equipment operations
when smaller equipment is run over by larger equipment. These accidents occur when a
smaller vehicle is in the blind spot or path of a larger vehicle and is crushed beneath its
tires. Lack of communication between equipment operators and poor visibility are
common causes of these accidents.
These accident problems will persist as large mobile equipment continues to grow in
size to meet production needs. Haul trucks have grown to capacities of up to 400
tons and can easily overshadow smaller equipment. The increase in size has also come
with larger blind spots and reduced visibility, both of which magnify the risk of run over
incidents. Haul trucks used in mining and construction operations have blind spots so
large that standard-size vehicles, such as a supply van, cannot be seen by the operator in
certain conditions. Some blind spot areas can be decreased through the use of mirrors and
camera systems on larger equipment. Despite the success of mirrors and
camera systems in reducing run over incidents, these incidents still occur.
Imagine this situation in which a run over incident could occur. The operator of a 200
ton haul truck has just returned to his truck after taking a lunch break. The mine
supervisor had driven the haul truck to keep the production cycle at its maximum while
the operator ate his lunch. Upon completion of his lunch, the operator drives the mine
supervisor’s pickup truck and parks it next to the haul truck. The mine supervisor and the
haul truck driver switch vehicles and the haul truck operator climbs back into the 200 ton
haul truck. The haul truck operator attempts to put the haul truck’s transmission into gear,
but the gear select lever will not move from neutral. Approximately five seconds later,
the mine supervisor’s pickup appears from the blind spot directly in front of the haul
truck. The mine supervisor had driven his pickup in front of the haul truck’s path, which
was unknown to the haul truck operator at the time. Ten seconds later, the haul truck
operator is able to put the transmission into gear and proceed with his work. Had the
haul truck operator moved the haul truck forward when he initially attempted, the mine
supervisor and his pickup would have been the victims of a run over incident.
The potential accident described above was prevented through the use of a Proximity
Warning System and a transmission locking mechanism. In this case, both the pickup
truck and haul truck were equipped with GPS units and onboard computer systems. The
haul truck was also equipped with a locking mechanism on its transmission. When the
onboard computer system of the haul truck detected that the pickup truck was in the
proximity zone of the haul truck, it sent a signal to the locking mechanism to prevent the
transmission from being placed into gear until the pickup was out of the proximity zone.
This prevented the haul truck from moving forward and crushing the pickup.
The potential run over event was logged and plotted on the surface operation map using
GIS software, along with all other potential run over events that have occurred. GIS software
allows areas with a large number of potential run over events to be analyzed to determine
whether changes to the operation site will reduce the potential for a run over event.
The haul truck in the above scenario is in what I consider the In-rest state. In the In-rest
state, the vehicle is running, the transmission is not in gear, and the vehicle is not
moving. The truck has the potential to move with little or no warning. A parked vehicle
will be in the In-rest state when the engine is started.
This report describes the possibility of integrating a transmission lock with a proximity
warning system to prevent run over incidents that occur from the In-rest state and
utilizing GIS to analyze data for safety and performance in large mobile equipment
operations. The system can be developed using basic computer systems, a long range
wireless network, GPS receivers, Proximity Warning System software, and GIS software.
The system can also be installed on all equipment in the operation, as well as any
equipment that may visit the operation.
CHAPTER 2 – LITERATURE REVIEW
2.1 MSHA DATABASE AND FATALGRAM REVIEW AND ANALYSIS
Since 1987, 58 miners have died in accidents involving large haul trucks where reduced
visibility contributed to the accident cause (McAteer, 2000). The increase in the size of
haulage trucks and other equipment in construction and mining operations over the years
has led to reduced visibility for drivers and an increased risk of run over
incidents. These incidents can result in severe injuries and, in some cases, fatalities
(MSHA 2004). The operator of a haul truck cannot see a vehicle or person close behind
or beside the vehicle. A pickup crushed by one of these trucks often resembles a flattened
aluminum can. The blind spots where operators cannot see vehicles or people around
them have increased in size as the equipment has become larger over time.
Between 1990 and 1996 there were forty-six accidents that occurred when the haulage
truck drivers’ vision was obstructed due to the configuration or location of the cab
(MSHA 2004). Of these accidents, fifteen involved large capacity haulage vehicles
running over smaller vehicles and crushing them, resulting in fatalities in all cases.
Eighty-seven accidents occurred when the haulage trucks ran into stationary objects,
equipment, or another haulage truck, in which eight fatalities occurred due to the
collisions. Some of these collisions may have occurred due to driver error; however,
accident investigations indicate that several accidents occurred due to poor
communication between drivers or an obstruction of visibility between the
vehicles involved.
2.1.1 VEHICLE BLIND SPOTS
During the past six and one-half years, blind spot hazards have contributed to
fatal accidents (Fesak, 1996). A majority of the large haul trucks used today have a
skewed blind spot, meaning the blind spot on the left side of the vehicle is smaller than
the blind spot on the right side. This skew is caused by the position
of the operator’s cab (Figure 2.1).
Figure 2.1: Truck Blind Spots (Boldt, 2005)
In most cases, the drivers cannot see the ground, other vehicles, or pedestrians for
distances that can be greater than 100 feet from the driver’s seat. While the blind spot
extends only 16 feet from the operator’s cab on the operator’s side, the opposite side
has a blind spot distance of 105 feet. This is large enough for large objects to be hidden
from the view of the operator. On larger equipment, the blind spots can increase in size.
Large 190-ton and 240-ton haul trucks have blind spots large enough to hide pickup
trucks and supply vans from the view of the haul truck operator.
2.1.2 CURRENT PROXIMITY DETECTION SYSTEMS
Over the past several years, research projects by Dalagden, Nieto and Ruff have been
conducted to aid in the detection of vehicles and personnel. The first approach to help
increase operator visibility was to install mirrors. Mirrors are a significant safety aid;
however, they do not achieve adequate coverage of the blind areas, and operator
perception of size and distance between objects can be affected by the size and shape of
the mirror (Fesak, 1996). Recent advances in cab designs and locations, as well as the
installation of discriminating warning devices, video cameras, and other state-of-the-art
“blind area surveillance systems,” have been implemented to greatly reduce “blind-spot”
hazards. The fatalities that have occurred over the years where the driver’s view was
obstructed may have been avoided with effective proximity warning devices, cameras,
mirrors or improved cab designs (Fesak, 1996). MSHA studies have shown that video
cameras on the rear and side of the vehicle can improve safety around large vehicles
(McAteer, 2000). A monitor in the operator’s cab allows the operator to see blind spots
around the vehicle. However, the visibility of the cameras can be affected by fog, mud,
or glare on the video screens at night. A camera system can cost up to $7,000 depending
on the number of cameras and the special features installed. Typically, a single 320 ton
haul truck costs well over $2 million. The cost for camera technology is minimal
compared to the potential safety benefit that would occur from the usage of camera
systems. The addition of a camera system to large equipment does not prevent all run
over incidents. Research is underway to develop numerous proximity warning
systems based on LIDAR, radar, stereovision camera systems, and GPS (MSHA 2004).
The use of these detection systems has been limited due to the high cost associated with
the technology.
2.1.3 RUN-OVER CASE STUDY
On September 17, 2003, an incident involving a run over was reported at a coal surface
mine (MSHA, 2005). This particular run over incident involved a 190-ton haulage truck
and a Ford F-350 van (Figure 2.2). The van approached the haulage truck’s right side
and stopped to drop off supplies. The haulage truck moved forward, knocking the van
over and crushing it under the right front tire. Both the driver of the van and a passenger
of the van were fatally injured.
A GPS-based proximity warning system works by reading GPS positions in NMEA format
from a GPS unit using an onboard computer, extracting position data (latitude, longitude,
and altitude) from the NMEA sentences, then converting the data into the local
coordinates. The system increases the GPS accuracy by using signal correction received
from a differential base station using a radio signal (also called DGPS) or by using the
more sophisticated RTK GPS technology, which can be more expensive. Once the
system solves for the vehicle location, the system calculates the position of the truck with
respect to other trucks by broadcasting its GPS location to those other vehicles using a
wireless network. The computer keeps track of vehicle proximity based on the vector
distance resulting from each of the GPS vehicle locations and the predefined proximity
distance, activating a warning signal when a vehicle is within the predefined distance.
The proximity zone can be dynamically adjusted depending on predetermined safety
factors and vehicle characteristics (truck weight, speed, visibility, terrain, and weather).
If visibility were low due to fog or bad weather, the zone can be increased to compensate
for poor visibility (Seymour, 2004). The proximity zone is a predetermined area or
“bubble” surrounding the equipment (Figure 2.4).
Figure 2.4: Proximity Zone (Miller and Nieto)
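The vector-distance check described above can be sketched as follows. This is an illustrative Python sketch rather than the system's actual code; the function name and the `visibility_factor` parameter used to enlarge the zone are assumptions.

```python
import math

def proximity_warning(own_xyz, other_xyz, base_radius_m, visibility_factor=1.0):
    """Return (distance, warn) for two vehicle positions in local xyz meters.
    The proximity "bubble" radius grows with visibility_factor >= 1.0
    (e.g. enlarged in fog), mirroring the dynamic zone described above."""
    d = math.dist(own_xyz, other_xyz)  # straight-line vector distance
    return d, d <= base_radius_m * visibility_factor

dist, warn = proximity_warning((0, 0, 0), (30, 40, 0), base_radius_m=50)
print(dist, warn)  # prints 50.0 True (the vehicle sits on the zone boundary)
```

In the actual system each vehicle would run this check against every other broadcast position it receives over the wireless network.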
The latest development in the science of proximity warning is the application of GPS
hardware and software to display vehicle locations and calculate distances between
vehicles (O’Connor et al. 1996, Suh et al. 2003). The distances are easily calculated
once the NMEA sentences are collected from the GPS receiver and transmitted to other
vehicles in the proximity warning system.
2.2.1 GPS
The Global Positioning System (GPS) is a constellation of navigation satellites called
Navigation Satellite Timing And Ranging (NAVSTAR), which is maintained by the U.S.
Department of Defense (USGS, 1999). Many handheld GPS devices are used by outdoor
enthusiasts as an accurate tool for determining their location on the terrain. The GPS
receiver determines its current location on the Earth's surface by collecting signals from
three or more satellites and, through numerous signal calculations, computing the
location through a process called triangulation.
The GPS receiver units that can be used with a proximity warning system can be any
standard GPS receiver that connects to a computer using either a serial port or a USB
port, and outputs the standard NMEA sentences. There is a wide variety of GPS
receivers available on the market with many different features. This variety
allows the selection of a GPS unit that meets all the requirements of the intended
application and provides the desired level of accuracy in the calculated GPS location.
There are numerous NMEA sentences, but the most common ones used are GGA, GSA,
GSV, RMC, and VTG (Gpsinformation.org, 2005). The most important NMEA sentences
include the GGA sentence which provides the current Fix data, the RMC sentence which
provides the minimum GPS location information, and the GSA sentence which provides
the Satellite status data. The GSV sentence provides the satellites in view data and the
VTG sentence provides the velocity information. Most standard off-the-shelf GPS
receivers will output a combination of NMEA sentences that can be used to provide
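The position-extraction step described above can be sketched for the GGA sentence. The field layout follows the standard NMEA 0183 GGA format; the function itself is an illustration, not part of the system's code.

```python
def parse_gga(sentence):
    """Extract (latitude, longitude, altitude_m) from an NMEA GGA sentence.
    Latitude is ddmm.mmmm and longitude dddmm.mmmm; both are converted
    to signed decimal degrees (S and W negative)."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    alt = float(fields[9])  # antenna altitude above mean sea level, meters
    return lat, lon, alt

gga = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gga(gga))
```

A production parser would also validate the trailing checksum (the `*47` field) before trusting the sentence.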
2.2.2 WIRELESS NETWORKS
Currently there are several options available for a Wireless Network, such as IP radios,
cell phone modems, or CDMA wireless cards. With recent advances in wireless
network technology, it is possible to use standard 802.11 wireless (Wi-Fi): an
inexpensive but less rugged option that uses standard radio PC cards based on the 802.11
specification, working with one-watt amplifiers and omnidirectional antennas (Molta,
1999). The range expected with this approach is on the order of several hundred meters,
and the range can be increased with the use of routers and repeaters.
In addition to the 802.11 standard, MANET wireless, which was originally developed for
military applications, can be used to form a network with individual nodes (Sung-Ju-Lee
et al., 2001). MANET is based on more rugged but more expensive radios; however,
a MANET signal can reach several miles without using repeaters or routers and is
capable of hopping through its network nodes to increase the total range of an individual
node. Recent advances in the algorithms used in node hopping have increased the
reliability of peer-to-peer networks.
Figure 2.7: MANET (Buckner and Batsell, 2001)
2.3 TRANSMISSION DESIGN
With the numerous types of mobile equipment available today, it is difficult to analyze
the individual transmissions used. However, it is possible to analyze the basic types of
transmissions used. Transmissions for large mobile equipment are available in automatic
and manual versions (ZF Friedrichshafen AG 2004). Figure 2.10 shows a general
example of an automatic and a manual transmission for Large Mobile Equipment.
Figure 2.10: Automatic (left) and Manual (right) transmissions for Large Mobile
Equipment (ZF Friedrichshafen AG 2004)
All transmissions serve the same basic purpose for mobile equipment. They change the
gear ratio between the engine and the drive train to allow for different levels of torque
and speed (Howstuffworks 2004). With the larger equipment that is used in construction
and mining, most transmissions are geared toward providing more torque and power to
the wheels than to providing the equipment with a greater speed. Manual transmissions
contain a gear shift lever that is used to place the transmission into gear. Manual
transmissions usually have several forward gears and a reverse gear. Automatic
2.4 GIS
Information Technology has entered an era where the dissemination of
information from one location to another is almost real-time for many
applications (Baijal et al., 2004). GIS (Geographical Information Systems) based
Information Technology has numerous applications in many different working fields,
from civil engineering to military use. Today’s GIS applications can be adapted to
almost any data type and any industry (ESRI, 2005). Most datasets can be easily
imported into a GIS program and utilized to produce maps of specific areas, as well as
provide analysis of many problems. The primary benefit of GIS is that it provides an
analysis tool, a storage solution, and a display of spatial and non-spatial data all in one
system. It combines the power of relational database software with that of a CAD-based
package to display spatial data.
2.4.1 GIS APPLICATIONS FOR DECISION MAKING
The ESRI GIS software package provides numerous tools for performing calculations for
decision making. Baijal et al. (2004) have shown that optimal positions for equipment
can be determined. In their paper, it is shown that using terrain data,
stream data, land use data, and probable enemy location data can provide the optimal
points for placing military bridges, deploying troops, or dropping off supplies. These
points are determined by using the GIS data and placing a weighted factor on each part of
the decision. Using this system, the area in question can be then mapped with an overall
optimization based upon the factors used. Satyanarayana and Yogendran (2004) also
show how data can be analyzed and utilized in making decisions and optimizing
operations. With GIS, data can be collected, stored, analyzed, manipulated, and
presented for making many important decisions.
CHAPTER 3 – SOFTWARE DEVELOPMENT
Currently, there are few software packages available that use GPS for proximity warning
systems. Most available software is still in development, is built for specific
configurations, and may not contain the components needed by this project.
Therefore, it was necessary to develop both the server software and the client software to
meet the needs of this project. This approach allows the use of the In-Rest locking
mechanism without having to modify any existing code.
The software created for this project was coded in Visual Basic.NET using Visual
Studio.NET 2003. The two programs developed during the research are the Server
software and the Client software. Within this project, the Server Software will be
referred to as the GPS Tracking Server, and the Client software will be referred to as the
GPS Tracking Client. The GPS Tracking Server Software will be run on only one
computer in a central location, while the GPS Tracking Client software will run on
multiple computers onboard any mobile equipment to be used in the GPS Proximity
Warning System. All the GPS Tracking Clients will connect to the GPS Tracking Server
to send and receive location data between mobile equipment used in the GPS Proximity
Warning System. Both the client software and the server software were tested during this
research using simulated data as it was not possible to test the functionality of the entire
system.
Figure 3.1 shows the interaction between the GPS Tracking Server and the GPS Tracking
Client software.
The Form1.vb code consists of the majority of the calculations involved, the network
connection, and the program display. It also handles the data from the server to be used
in the calculations. The CR232.vb module controls the collection of the GPS NMEA
data from the GPS receiver. The ECEF.vb module converts coordinates from latitude,
longitude, and altitude to Earth-Centered, Earth-Fixed (ECEF) xyz coordinates to be used
in the distance calculation. The DataCollect.vb module processes the GPS NMEA
sentences that are received from the GPS receiver and reduces them to only the location
data. The ProcessData.vb module calculates the distances between the local vehicle and
other vehicles in the system. The ActivateSystem.vb controls the locking mechanism by
checking to see if the conditions are met to lock the transmission.
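The conversion performed by the ECEF.vb module can be sketched as follows. The project code is written in Visual Basic.NET; this Python version is only an illustration, assuming the standard WGS84 ellipsoid constants.

```python
import math

# WGS84 ellipsoid constants
WGS84_A = 6378137.0                    # semi-major axis, meters
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def lla_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert latitude/longitude/altitude to Earth-Centered, Earth-Fixed
    xyz coordinates (meters), as the ECEF.vb module is described as doing."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z

def distance_m(p, q):
    """Straight-line distance between two ECEF points, as used for the
    vehicle-to-vehicle proximity calculation."""
    return math.dist(p, q)

print(lla_to_ecef(0.0, 0.0, 0.0))  # prints (6378137.0, 0.0, 0.0) on the equator
```

Working in ECEF makes the proximity check a plain Euclidean distance, avoiding spherical-trigonometry formulas entirely.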
The client application does all the local data processing and position display locally. It
takes incoming locations and processes the GPS location into ECEF xyz coordinates,
which are then used for the distance calculations between vehicles. Each local vehicle
calculates the distance between itself and every other vehicle using their onboard
computer system. To display the local truck location, speed, and direction, the
information received from the GPS receiver is processed and then displayed to the
operator locally (Figure 3.5)
Figure 3.5: GPS Client Software Display
This display shows the operator the direction the equipment is heading and the
location of other nearby vehicles within the proximity zone. In this case, the red zone is
within a 50 meter radius, the yellow zone is between a 50 meter and a 100 meter radius,
and the green zone is from a 100 meter radius to a 150 meter radius. Equipment located
beyond 150 meters will show up on the outer edge of the green zone.
The client software receives vehicle location data from the server software and uses the
processed GPS NMEA data collected by the client software to continuously update the
display for the operator. The local GPS NMEA sentence information is read into arrays
to be processed into a string that can be sent to the server software so that the server can
send the data to other clients. Table 3.3 shows the GPS NMEA sentences before they are
processed to be used by the client software and sent to the server.
The Direction and Speed data will be used for calculations by the GPS Tracking Client
software and do not need to be sent to the GPS Tracking Server software.
To engage the locking mechanism for the transmission, IF/THEN statements are used to
check for the proper conditions. The speed is checked to see if it is less than 1 mph. The
speed check is not set to zero due to GPS receiver drift, which results in a non-zero but
low velocity; the GPS receiver will register a velocity of less than 1 mph even when it is
stationary. If this condition is met, the software then checks whether the distance between
the local equipment and another vehicle is less than a specified amount; for testing
purposes, a threshold of 25 meters is used. The last condition is a check of vehicle type.
Larger equipment is assigned a different class than smaller equipment. This prevents two
larger pieces of equipment from locking each other and producing a stalemate. If the
three conditions are met, then the ActivateSystem.vb module will initiate the locking
mechanism. The ActivateSystem.vb module will monitor the distances until the other
vehicle has moved a safe distance away. The module is also monitoring the other vehicle
distances and will keep the lock engaged if another vehicle approaches the danger zone of
the equipment with the locked transmission.
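The IF/THEN logic described above can be sketched as follows. This Python version is illustrative (the project software is written in Visual Basic.NET), and the numeric vehicle classes are an assumed encoding.

```python
def should_lock(speed_mph, distances_m, own_class, other_classes,
                speed_limit=1.0, danger_radius=25.0):
    """Decide whether to engage the transmission lock, mirroring the three
    conditions described above: near-zero speed (GPS drift keeps the check
    below 1 mph rather than at exactly zero), another vehicle inside the
    25 m test threshold, and that vehicle being of a smaller class."""
    if speed_mph >= speed_limit:
        return False  # vehicle is moving; never lock a moving transmission
    for dist, other in zip(distances_m, other_classes):
        # Only a smaller-class vehicle triggers the lock, which prevents
        # two large units from locking each other in a stalemate.
        if dist < danger_radius and other < own_class:
            return True
    return False

# Haul truck (class 2) at rest with a pickup (class 1) 10 m away: lock.
print(should_lock(0.4, [10.0], own_class=2, other_classes=[1]))  # prints True
```

The monitoring loop described in the text would re-evaluate this function continuously, keeping the lock engaged until every smaller vehicle has left the danger radius.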
The data cycle of sending information to the GPS Tracking Server, receiving data
from the GPS Tracking Server software, and processing the data continues until the
user stops either the client software or the server software. This software allows for
continuous monitoring of vehicles that are part of the proximity warning system.
For the GPS Tracking Client Software to function properly, it must be connected to the
GPS Tracking Server, and a GPS receiver must be connected to the computer. If the GPS
is not receiving a valid signal, the program will operate, but will not be able to calculate
the correct location of the GPS receiver until the receiver receives valid data from the
GPS satellites.
CHAPTER 4 – GIS ANALYSIS FOR SAFETY AND OPTIMIZATION
GIS software is used to analyze the data from the vehicles that have GPS installed on
them to optimize the operation site to improve production and safety. The data collected
may be analyzed to determine high risk run over areas by finding where vehicles were
within the danger zones of other equipment. To determine these high risk areas, the data must be
loaded into ESRI ArcMap and the locations where the distances between equipment were
within the proximity zone must be found. These tasks are all done within ESRI’s
ArcMap software and result in the locations being shown on a map of the operation area.
4.1 IMPORTING CSV DATA INTO GIS
To analyze the data, the CSV file of the vehicle data must first be opened in ArcMap. The
CSV File, shown in Figure 4.1, contains the values for the ID of the vehicle, the Date, the
GPS time, the latitude, the longitude, and the altitude in meters of the equipment
monitored.
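As an illustration of the kind of reduction ArcMap performs on this CSV layout, the following Python sketch loads sample records and tests whether a pair of points falls within the 25 meter query threshold. The sample values and column names are made up for illustration.

```python
import csv
import io
import math

# Two sample rows in the CSV layout described above: ID, Date, GPS time,
# latitude, longitude, altitude (m). All values are invented.
SAMPLE = """id,date,gps_time,lat,lon,alt_m
truck01,2005-06-01,120001,37.22900,-80.41400,610.0
pickup7,2005-06-01,120001,37.22910,-80.41390,610.0
"""

def load_records(text):
    """Parse the CSV into a list of dicts with numeric position fields."""
    rows = list(csv.DictReader(io.StringIO(text)))
    for r in rows:
        r["lat"], r["lon"], r["alt_m"] = (
            float(r["lat"]), float(r["lon"]), float(r["alt_m"]))
    return rows

def ground_distance_m(a, b):
    """Equirectangular approximation, adequate over tens of meters."""
    mean_lat = math.radians((a["lat"] + b["lat"]) / 2.0)
    dx = math.radians(b["lon"] - a["lon"]) * math.cos(mean_lat) * 6371000.0
    dy = math.radians(b["lat"] - a["lat"]) * 6371000.0
    return math.hypot(dx, dy)

rows = load_records(SAMPLE)
d = ground_distance_m(rows[0], rows[1])
print(d < 25.0)  # prints True: the pair is inside the 25 m query threshold
```

Running this filter over every same-timestamp vehicle pair reproduces, in miniature, the distance query described for ArcMap in the next section.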
Figure 4.24: Display of Query Data in ArcMap
This data is exported on its own to separate it from the other data if desired. Queries are
a quick method of data reduction within GIS. This example contains only two minutes of
simulated operation time for two pieces of equipment. Had this been actual logged data,
each vehicle would have thousands of data points. Reducing the data is necessary to
properly view only the information that is required. In this case, only the data points
where the distances between two points are less than 25 meters are of interest. These
points in Figure 4.24 represent the areas where there is a high risk for a potential run over
incident. The areas where there is a high risk can be evaluated to determine if there is a
possibility to reduce the potential for a run over. Using the distance calculations, the
areas that have frequent occurrences are analyzed to see if any changes need to be made
CHAPTER 5 – TRANSMISSION LOCKING MECHANISM
5.1 TRANSMISSION LOCKING MECHANISM OPERATION
The purpose of the locking mechanism is to prevent larger vehicles from moving when
they are in the In-rest condition. This mechanism takes the decision to move the
equipment out of the operator’s hands when conditions are not safe for the vehicle to
move. The locking mechanism will only activate under the proper conditions, preventing
run over incidents when necessary while keeping interference with the daily operation of
the equipment to a minimum.
The first condition requires the vehicle that will have its transmission locked to have a
velocity near zero. This safeguard will prevent the system from activating on manual
transmissions when the operator is shifting gears. If the locking mechanism were
activated while the equipment was moving, the results could be disastrous. The second
condition requires the vehicle that will have its transmission locked to have its
transmission in neutral. Applying the locking mechanism while the transmission is in
gear will lock the transmission in gear, which could still allow it to move. The
transmission will only lock if the transmission is in the neutral or park position. The third
condition requires the other vehicle that triggers the locking mechanism to be within the
predefined proximity zone of the larger equipment. This means that another vehicle is at
risk for a run over incident.
A vehicle that has an active locking mechanism will not activate the locking mechanism
of other vehicles when they enter each others’ predetermined proximity zones. This rule
will prevent two vehicles that both have locking mechanisms from parking next to each
other and locking each other’s transmissions, causing a stalemate of the two vehicles.
The three conditions that must be met to lock the transmission will be controlled by two
sources, the software and the locking mechanism. The software, which is discussed in
the next chapter, will monitor for the velocity and the distance conditions. The status of
the transmission will be checked by the locking mechanism. The signal to lock the
transmission will be sent to the locking mechanism when the first two locking conditions
are met. The locking mechanism will then detect whether the transmission is in neutral
or park. If it is, the locking mechanism will lock the gear select lever; if it is not,
the gear select lever will remain unlocked.
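The three-condition decision described above can be sketched as a simple predicate. The function names, thresholds, and gear labels below are illustrative assumptions rather than the actual software developed for this project, and positions are assumed to be projected (x, y) coordinates in meters:

```python
import math

# Illustrative values only -- real thresholds would be tuned per equipment type.
VELOCITY_THRESHOLD = 0.1   # m/s; "velocity near zero" condition
PROXIMITY_RADIUS = 30.0    # m; predefined proximity zone of the larger equipment

def within_proximity(own_pos, other_pos, radius=PROXIMITY_RADIUS):
    """Planar distance check; assumes projected (x, y) positions in meters."""
    dx = own_pos[0] - other_pos[0]
    dy = own_pos[1] - other_pos[1]
    return math.hypot(dx, dy) <= radius

def should_lock(velocity, gear, own_pos, other_pos, other_lock_active=False):
    """All three conditions must hold.  A vehicle whose own lock is already
    active does not trigger the lock of another vehicle (stalemate rule)."""
    if other_lock_active:
        return False
    return (abs(velocity) < VELOCITY_THRESHOLD          # condition 1: at rest
            and gear in ("neutral", "park")             # condition 2: not in gear
            and within_proximity(own_pos, other_pos))   # condition 3: in zone
```

For example, a parked truck in neutral with a pickup 11 m away would lock, while the same truck rolling at 5 m/s, or one already in gear, would not.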
For maintenance purposes, an override option will be available to disable the system
when a situation arises. This override may be as simple as shutting down the onboard
computer system or using a switch located on the outside of the equipment that controls
the locking mechanism. This override will be deliberately difficult for the operator to
activate, so that the system is bypassed only when necessary and its reliability is
preserved.
The locking conditions were selected based upon certain factors that are involved in most
run over incidents that occur from the In-Rest condition. The In-Rest condition is
generally when the equipment has been parked and is no longer in gear. Because the
majority of run over incidents begin from the In-Rest condition, stopping the equipment
before it begins to move is the most effective way to prevent a run over.
The approach taken in designing the locking mechanism is to keep large equipment from
moving when it is at rest and out of gear and the proximity warning system detects a
vehicle or person in the proximity zone of the larger equipment. While the locking
mechanism is locked, the system initiates a series of audio and visual warning signals to
inform all the operators involved. If the locking mechanism cannot be engaged because
one of the conditions is not met, the operators will still be warned that there is a
potential for a run over.
5.2 TRANSMISSION LOCKING MECHANISM DESIGN
The locking mechanism is a fairly simple design depending on the equipment type and
the transmission. There are numerous methods that could be used to lock a transmission
depending on the mobile equipment used. The locking mechanism could be an electronic
or mechanical block on the transmission mechanism that is only activated if the
equipment is at rest and in neutral.
In order to correctly design a locking mechanism for large mobile equipment, the
equipment transmission must be analyzed to determine the best locking approach and
design. There are two possible locking solutions based on the characteristics of the
transmission under consideration. For example, for a manual transmission, a mechanical
lock is the only reasonable solution. For an automatic transmission, an electronic lock
may be considered if the transmission is controlled electronically; otherwise, a
mechanical lock should be used.
The locking mechanism design for manual transmissions is a basic fork-like brace that is
placed around the shift lever to prevent it from moving into gear. The brace is put into
place by a pneumatic or electronic mechanism. The locking
mechanism is installed in the transmission gear lever enclosed by the transmission
housing. For the locking mechanism to be successfully engaged, the gear select lever
must not be in a gear so that the fork can prevent it from moving into a gear. Figure 5.1
shows both an unlocked and a locked transmission for a manual gear transmission.
Figure 5.2: An unlocked automatic transmission (left) and a locked automatic
transmission (right). (Howstuffworks.com 2004)
The mounting and the location of installation will be based on the transmission and
equipment design. It may be possible to use the same mechanical device on both
automatic and manual transmissions. The majority of larger equipment will likely have
an automatic transmission; therefore it may not be necessary to develop a locking
mechanism for a manual transmission.
Many of the new automatic transmissions designed and built today for large mobile
equipment are electronically controlled. Depending on the transmission's design,
electronic or mechanical, it is feasible to use an electronic locking device to prevent the
equipment transmission from being put into gear. An electronically controlled switch,
activated under the same conditions described earlier, could be added to the control
system of the transmission. It would block the signal from the gear selection device so
that the transmission cannot be shifted into gear.
With the variation in equipment design, it will be difficult to design a locking mechanism
that will fit with all equipment easily. The need for a locking mechanism on certain large
equipment will dictate which equipment designs to apply a locking mechanism to.
CHAPTER 6 – CONCLUSIONS AND FUTURE WORK
6.1 CONCLUSIONS
The majority of run over accidents occur from the In-Rest state and are mainly caused by
a lack of communication between operators. Because operators are unaware of the
intentions and actions of other operators, the operator of the larger equipment may move the
equipment before the smaller equipment is clear. An In-rest GPS proximity warning
system with transmission lock will prevent the larger equipment from moving before the
area is clear. It will also provide operators of the large equipment with the location of
smaller equipment that may not be visible to them.
To optimize a system that utilizes a transmission lock, several studies will need to be
conducted. Tests must be performed to determine the optimal proximity zone or bubble
for the equipment involved in the system. The proximity zone will be based on the
equipment size and type. It will also be possible to dynamically adjust the proximity
zone for certain conditions, including the current weather, the location of the vehicles,
and the time of the day. For example, when visibility is lower, the proximity zone should
be increased to compensate based on historical information analyzed by GIS software.
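A dynamically adjusted proximity zone could be sketched as a base radius scaled by condition factors. The scaling rules and numeric values below are hypothetical illustrations; actual factors would come from the field tests and GIS analysis described above:

```python
def proximity_radius(base_radius, visibility=1.0, night=False):
    """Scale the proximity zone for current conditions.

    visibility: fraction from 0 (none) to 1 (clear); lower visibility
    enlarges the zone.  night: night-time operation enlarges it further.
    All scaling factors here are illustrative assumptions.
    """
    radius = base_radius
    if visibility < 1.0:
        # e.g., at 50% visibility the zone doubles; clamp to avoid
        # an unbounded radius at near-zero visibility
        radius /= max(visibility, 0.25)
    if night:
        radius *= 1.5
    return radius
```

For a 30 m base zone, 50% visibility yields a 60 m zone, and night operation alone yields 45 m.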
The software developed for this project has shown the potential to become a full working
system that incorporates real-time changes in conditions to adjust the proximity
zone. The client software design allows for easy modification to add features or to obtain
other data to return to the server software. The server software will need to be altered to
accept a larger number of vehicles; for the server simulation, the clients were limited
to two in order to reduce the amount of simulated data needed.
While the software and hardware for this effort are currently stable, the possibility of
system failure must be analyzed. A failure of the lockout system does not necessarily
mean that the truck operator could run over a pickup or a person. The chances that
system failure and the potential for a run over incident would occur at the same time are
small. There are three main scenario failures associated with this system. The first
scenario occurs when the system activates and locks the transmission when it should not,
i.e. a false positive proximity signal. The result of this failure is a loss in production
time. In an industry where production time is money, a failure of this nature could cost a
company thousands of dollars with each failure. The second scenario occurs when the
system does not activate when it should to prevent a run over incident, i.e. a vehicle is
within the proximity zone of the haul truck and no positive signal for proximity is
registered resulting in a run over incident. The results of a failure such as this are
potentially tragic and may cost a company up to one million dollars. The third scenario
of failure would result if the nearby vehicle is not equipped with the system, meaning that
the equipment is not able to detect the nearby vehicle. To prevent this failure, all
equipment will need to be integrated into the system, including visiting vehicles.
The accuracy of the system is based upon the GPS accuracy. If DGPS or RTK GPS units
are used, sub meter accuracy is possible. The consistency of obtaining usable GPS
signals will affect the accuracy of the system. If a GPS position is not obtainable due to
obstructions, then the system will not function properly. The ability to obtain a GPS
signal will depend on the GPS satellite configuration and obstructions, such as buildings
and high walls, which will change with each site where this system is installed. When
advancements are made in GPS accuracy and signal usability, the advancements may be
implemented into this system with little change to the overall system by simply
upgrading the GPS units used.
Depending on the needs of the operation, a GIS based approach can increase the data
obtained while increasing the ability to manipulate more data. GIS is a time and cost
effective method when utilizing large data sets, especially spatial data. GIS software can
provide the tools necessary to optimize equipment production while increasing worker
safety.
The primary focus of this work was to determine the possibility of utilizing a GPS based
Proximity Warning System with a transmission locking mechanism for large mobile
equipment and the benefits from using a system such as this. The success of a GPS
proximity warning system with a transmission locking mechanism for in-rest vehicles
depends on several factors. First, both the initial cost of the system and its operating
costs must be low. Without a low cost solution, there is little incentive to implement a
system such as this. The main costs incurred are the initial startup and installation fees,
the capital cost of the equipment and software, and the operating costs of the system,
with the capital costs being the highest. Second, the system must not interfere with the
equipment operation except in cases where there is risk for a potential run over that could
occur at the in-rest condition. Any unneeded interference in the daily operation of
equipment can result in slowed operation or lost production. The third factor is the
information obtained from using a GPS based system. In addition to providing a proximity
warning system for the mobile equipment operation, production and equipment tracking is
possible from a central site. This provides fast and reliable data for production reports on
every piece of mobile equipment in use at an operation.
With the ever-increasing costs associated with run-over incidents, fatal and non-fatal
alike, the overall savings and benefits from using a GPS proximity system and
transmission locking mechanism could easily make this system a worthwhile investment
for all large mobile equipment operations. The information provided by using a system
such as this is also valuable and can aid in optimizing operations or processes at a site
where large mobile equipment is used. When both of these are considered, the benefits
for using a GPS based proximity warning system will easily outweigh the associated
costs.
6.2 FUTURE WORK
In addition to utilizing a transmission locking mechanism, it is also possible to
implement additional safety systems. A device that could retard the throttle of the larger
equipment by interrupting the flow of fuel to the engine could be used to prevent the
equipment from moving. Another device that could be used is an ignition kill switch,
which would shut the engine down and keep the equipment stationary. These types of
mechanisms may not be desirable as most large equipment is left running continuously;
however, they may be the only device that will work on certain equipment types and
sizes.
Visiting vehicles can be integrated into the system by utilizing a device that contains a
GPS receiver, a wireless networking device, and a small onboard computer that will
handle the data that will be sent to the server. In this case, the visiting operator’s device
will not receive the data and the positions of the equipment from the server. This setup is
only for small vehicles that have the potential for being run over, as they have very little
potential for running over other equipment. Visiting vehicles will be easy to
accommodate into the system as they only require placing a GPS receiver and wireless
network device on the roof of the vehicle using a magnet or suction device and providing
power to the device through a standard cigarette power adapter, found in most vehicles
today.
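The send-only visiting-vehicle device described above could be approximated by a minimal client that forwards GPS fixes to the server without listening for replies. The server address, port, and JSON wire format here are assumptions for illustration, not the protocol actually used by the server software:

```python
import json
import socket

SERVER = ("192.0.2.10", 5005)  # hypothetical server address and port

def make_report(vehicle_id, lat, lon):
    """Encode one GPS fix as a JSON datagram (assumed wire format)."""
    return json.dumps({"id": vehicle_id, "lat": lat, "lon": lon}).encode()

def send_fix(vehicle_id, lat, lon, server=SERVER):
    """One-way report: the visiting vehicle transmits its position but
    never receives the positions of other equipment from the server."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(make_report(vehicle_id, lat, lon), server)
    finally:
        sock.close()
```

Because the device only transmits, it needs no display or map data, which keeps the hardware for visiting vehicles small and inexpensive.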
ESRI has recently released a software package that allows for real-time tracking of GPS
based equipment, called Tracking Server. Tracking Server can be customized to perform
the same operations as the Server Software created for this project. The benefits to
utilizing Tracking Server would be that the data would be stored in a GIS spatial format
and would not have to be converted from a CSV file. Information can be easily accessed
and manipulated in a GIS format. The vehicles could also be tracked in real-time from
the server, allowing for an overview of the entire operation from a central site. The
ability to optimize an operation would take less time with an integrated GIS based data
server and storage program.
STEPHEN J. MILLER
Education:
Virginia Tech - Blacksburg, VA
M.S. Mining and Minerals Engineering (December 2005)
B.S. Mining and Minerals Engineering (May 2003)
B.S. Geophysics (May 2003)
Previous Employment:
Graduate Research Assistant, Blacksburg, VA, August 2003 – May 2005
• Conducted research involving GPS and GIS projects.
Teaching Assistant, Blacksburg, VA, January 2002 - May 2003
• Aided in the design and testing of a multi-channel borehole geophone array
for the Department of Energy.
• Aided in developing software and equipment for Computer Aided
Tomography of objects.
Engineering Assistant, Marshall Miller & Associates, Bluefield, VA, May 2001 –
August 2001
• Assisted senior engineers and geologists with:
Design of surface and underground mines, surface water runoff calculations,
permitting process for the expansion of a surface quarry near a residential
area, calculations for numerous mine reclamations.
Geological Surveyor, Massey Performance Coal, Naoma, WV, December 2000 –
January 2001
• Assisted in the underground surveying of a coal seam to assist in the computer
visualization to optimize the longwall mining operation.
Activities:
President of Burkhart Mining Society, Virginia Tech Student Chapter of SME
2002-2003
Member of the Geology Club at Virginia Tech 2002-2003
Treasurer of Burkhart Mining Society 2001-2002
Student Member of SME 1999 – Present
Member of Burkhart Mining Society 2000-2005
Member of Mining Competition Team 2001 and 2002
IN-PLANT TESTING OF THE HYDROFLOAT
SEPARATOR FOR COARSE PHOSPHATE RECOVERY
by
Christopher J. Barbee
Committee Chairman: Gerald H. Luttrell
Department of Mining and Minerals Engineering
ABSTRACT
The HydroFloat technology was specifically developed to upgrade phosphate
sands that are too coarse to be efficiently recovered by conventional flotation methods. In
this novel process, classified feed is suspended in a fluidized-bed and then aerated. The
reagentized phosphate particles become buoyant and report to the product launder after
encountering and attaching to the rising air bubbles. Simultaneously, the hydrophilic
particles are rejected as a high solids content (65-70%) underflow. The fluidized bed acts
as a “resistant” layer through which no bubble/particle aggregates can penetrate. As a
result, the HydroFloat also acts as a density separator that is capable of treating much
coarser particles as compared to traditional flotation processes. In addition, the high
solids content of the teeter bed promotes bubble-particle attachment and reduces the cell
volume required to achieve a given capacity. To fully evaluate the potential advantages of
the HydroFloat technology, a 5-tph test circuit was installed and evaluated in an industrial
phosphate beneficiation plant. Feed to the test circuit was continuously classified,
conditioned and upgraded using the HydroFloat technology. The test results indicated
that the HydroFloat could produce a high-grade phosphate product in a single stage of
separation. Product quality ranged between 70-72% BPL (bone phosphate of lime =
2.185 x %P₂O₅) and 5-10% insols (acid insoluble solids). BPL recoveries exceeded 98%
at feed rates greater than 2.0 tph per ft² of separator cross-sectional area. These results
were superior to traditional column flotation, which recovered less than 90% of the
valuable product at a capacity of less than 1 tph per ft².
ACKNOWLEDGEMENTS
I would like to sincerely thank everyone who I have been associated with here at
Virginia Tech, especially those who have assisted in my graduate study the past several
years. First, I want to express the utmost thanks to Dr. G. H. Luttrell for his service as my
committee chairman and advisor. His guidance and instruction have been invaluable in my
work towards my thesis and will be of use to me for many years to come. I would also
like to recognize E. C. Westman and Dr. G. T. Adel for their involvement and advice, for
which I will always be thankful. The final faculty member I would like to thank is Dr. R.
H. Yoon, for his assistance in completing my work.
Secondly, I would like to express my sincere appreciation to Wayne Slusser and
Billy Slusser for all their assistance in fabrication. Their skills and input into the
creative process were something I could not have done without. Also, I would like to thank Shane
Bomar, Kerem Eyradin and Ian Sherrel for their much appreciated help in all my work. I
would also like to thank Matt Eisenmann for his initial work on the project and his
continued support.
Finally, I would like to sincerely thank Mike Mankosa and Jaisen Kohmuench of
Eriez Magnetics for their support and guidance in all my work. My fieldwork could
not have been possible without the wonderful hospitality expressed to me by Joe
Shoniker of the PCS Phosphate Company and I want to thank him for his assistance. In
conclusion, I would like to express my appreciation to the Florida Institute of Phosphate
Research, whose funding made all this possible.
EXECUTIVE SUMMARY
The Eriez HydroFloat technology was specifically developed to upgrade
phosphate sands that are too coarse to be efficiently recovered by existing flotation
methods. In this novel process, classified feed is suspended in a fluidized-bed and aerated
using an external sparging system. Air bubbles selectively attach to particles that have
been made hydrophobic through the addition of a flotation collector. The teetering effect
of the fluidized-bed forces the low-density bubble-particle aggregates into the overflow,
while hydrophilic particles are rejected as a high solids content underflow. Since the
HydroFloat is essentially a density separator, the process can treat much coarser particles
than traditional flotation systems. In addition, the high solids content of the teeter-bed
promotes bubble-particle attachment and reduces the cell volume required to achieve a
given capacity.
Initial laboratory- and pilot-scale test data indicated that the HydroFloat cell is
capable of achieving superior recoveries of BPL (bone phosphate of lime) as compared to
traditional mechanical and column flotation cells. This was particularly evident with
particle sizes greater than 35 mesh. Recovery of the coarse, high-grade particles led to
greatly improved product quality. These coarse phosphate particles are often lost when
using traditional flotation processes due to detachment and buoyancy limitations.
In any flotation process, recovery is improved when particle retention time is
lengthened, mixing is reduced, and the probability of bubble-particle collision is
increased. The HydroFloat cell has the advantage of simultaneously improving each of
these factors. The counter-current flow of particles settling in a hindered state against an
upward rising current of water increases particle retention time. The presence of the teeter
bed reduces turbulence (i.e., mixing) and increases the plug-flow characteristics of the
separator. The high solids content of the teeter-bed also greatly increases the probability
of bubble-particle contacting. In addition, the HydroFloat utilizes less energy per ton of
feed since no mechanical agitator is required. The increase in unit capacity also results in
reduced capital and installation costs.
To demonstrate the benefits of the HydroFloat separator, a pilot-scale HydroFloat
circuit was installed and evaluated at an industrial phosphate plant. The primary objective
of the pilot-scale test program was to quantify the effects of key design and operating
parameters on the performance of the HydroFloat separator. Tests were also conducted to
evaluate the effectiveness of an alternative rotary drum system for conditioning the
coarse feed stream.
The pilot-scale test circuit was installed at PCS Phosphate (Swift Creek Plant,
White Springs, Florida). The circuit was designed to handle a dry solids feed rate of 4-6
tph and included all unit operations for classification, conditioning, and flotation.
Classification was carried out using an Eriez CrossFlow hindered-bed separator. Feed
preparation was accomplished using either a four-cell bank of stirred-tanks or a rotating
drum conditioner.
The test data obtained during the course of this project showed that the rotary
conditioner performed significantly better than the stirred tank conditioner. In fact, the
overall BPL recovery increased more than 20% when using the rotary conditioner. The
poorer separation results obtained with the stirred-tank conditioner are attributed to the
creation of excess fines. The high-energy input per unit volume that was required to
maintain the coarse sand in suspension resulted in unwanted attrition of the phosphate
INTRODUCTION
BACKGROUND
Hindered-bed separators are commonly used in the minerals industry as gravity
concentration devices. These units can be used for mineral concentration if the particle
size range and density difference between mineral types are within acceptable limits.
However, these separators often suffer from misplacement of low-density, coarse
particles to the high-density underflow. This shortcoming is due to the accumulation of
coarse, low-density particles at the top of the teeter-bed. These particles are too light to
penetrate the teeter-bed, but are too heavy to be carried by the rising water into the
overflow launder. Ultimately, these particles are forced to the underflow by mass action
as more particles accumulate at the top of the teeter-bed. This inherent inefficiency can
be partially corrected by increasing the teeter-water velocity to convey the coarse, low-
density solids to the overflow. Unfortunately, the higher water rates will cause fine, high-
density solids to be misplaced to the overflow, thereby reducing the separation efficiency.
To overcome the shortcomings of traditional hindered-bed separators, a novel
device known as the HydroFloat was developed. As shown in Figure 1, the HydroFloat
consists of a tank subdivided into an upper separation chamber and a lower dewatering
cone. The device operates much like a traditional hindered-bed separator with the feed
settling against an upward current of fluidization water. The fluidization (teeter) water is
supplied through a network of pipes that extend across the bottom of the separator.
However, in the case of the HydroFloat separator, the teeter-bed is continuously aerated
by injecting compressed air and a small amount of frothing agent into the fluidization
water. The air bubbles become attached to the hydrophobic particles within the teeter-
separator. The valve is actuated in response to a control signal provided by a pressure
transducer mounted on the side of the separation chamber. This configuration allows a
constant effective density to be maintained within the teeter-bed.
The HydroFloat can be theoretically applied to any system where differences in
apparent density can be created by the selective attachment of air bubbles. Although not a
requirement, the preferred mode of operation would be to make the low-density
component hydrophobic so that the greatest difference in specific gravity is achieved.
Compared to traditional flotation processes, the HydroFloat offers important advantages
for treating coarser material including enhanced bubble-particle contacting, increased
residence time, lower axial mixing/cell turbulence, and reduced air consumption.
LITERATURE REVIEW
The improved recovery of coarse particles has long been a goal in the minerals
processing industry. Several studies have been conducted in an effort to overcome the
inefficiencies associated with existing processes and equipment. The studies range in
scope from fundamental investigations of bubble-particle interactions to the development
of novel equipment. Advancements in chemistry and conditioning practices have also
been employed at a number of industrial installations.
Froth Flotation Technology
Research on the relationship between particle size and floatability began as early
as 1931 with work conducted by Gaudin, et al. (1931) showing that coarse and extremely
fine particles are more difficult to recover as compared to intermediate size particles.
Twenty years after this original work, Morris (1952) arrived at the same conclusion that
particle size is one of the most important factors in the recovery of ores by flotation.
Generally, recovery is low for the finest particles (dₚ < 10 μm) and is at a maximum for
intermediate size particles. A sharp decrease in recovery occurs as the particle diameter
continues to increase. This reduction in recovery on the fine and coarse ends is indicative
of a reduction in the flotation rate of the particles (Jameson et al., 1977). It can be seen
that the efficiency of the froth flotation process deteriorates rapidly when operating in the
extremely fine or coarse particle size ranges, i.e., below 10 μm and above 250 μm. These
findings suggest that current conventional flotation practices are optimal only for the
recovery of particles in the size range of about 65 to 100 mesh.
According to Soto and Barbery (1991), conventional flotation cells operate with
two contradictory goals. A conventional cell has to provide enough agitation to maintain
particles in suspension, shear and disperse air bubbles, and promote bubble-particle
collision. However, for optimal recovery, a quiescent system is required to reduce
detachment and minimize entrainment. As a result, coarse particle flotation is more
difficult since increased agitation is required to maintain particles in suspension.
Furthermore, coarse particles are more likely to detach under turbulent conditions. To
compensate for the lack of recovery, some installations are using relatively small
flotation devices operated at low feed rates (Lawver et al., 1984).
The stability of bubble-particle aggregates was also examined in theoretical and
experimental studies conducted by Schulze (1977). This work showed that the upper
particle size limit for flotation is dictated by the resultant of forces acting on a bubble and
particle aggregate. These forces include gravity, buoyancy, hydrostatic pressure, capillary
compression, tension, and shear forces induced by the system. According to Schulze,
particles with a diameter of several millimeters should float (in the absence of turbulence)
provided the contact angle is greater than 50°. Later work by Schulze (1984) shows that
turbulent conditions, similar to those found in mechanical flotation cells, drastically
reduce the upper size limit of floatable material. Several other investigations support
these findings (Bensley and Nicol, 1985; Soto, 1988). In fact, it has been demonstrated
that turbulent conditions can reduce the maximum floatable size to one tenth of that
found in non-turbulent conditions (Ives, 1984; Ahmed and Jameson, 1989).
Another theory is that small particles have a higher rate of flotation and, therefore,
crowd out coarse particles from the surfaces of the air bubbles. Soto and Barbery (1991)
disagree with this assessment, speculating that the poor recovery of coarse material is
strictly a result of detachment. They further advocate the use of separate circuits for fine
and coarse processing in an effort to optimize the conditions necessary for increased
recovery.
Several new devices have been produced and tested for the sole purpose of
improving the recovery of coarse particles. For example, Harris, et al., (1992) tested a
hybrid mechanical flotation column, which is essentially a cross between a conventional
cell and a column flotation cell. In this device, a column is mounted above an impeller
agitator. The column component offers the advantage of an upper quiescent section
optimal for coarse particle flotation, while the mechanical impeller offers the opportunity
for reattachment and increased collection of any non-attached coarse material in the
lower zone. However, when compared to a release analysis curve, this hybrid mechanical
column out-performed a conventional flotation cell, but was equivalent to a traditional
flotation column.
Improvements in coarse particle recovery have also been seen with the advent of
non-mechanical flotation cells. For example, success in floating coarser particles has
been reported when using column flotation cells, Lang launders, skin flotation systems,
and the negative-bias flotation columns. Column flotation offers several advantages that
can be useful in any application. Barbery (1984) advocates that columns have no
mechanical parts, are easy to automate and control, and provide a high capacity. In
addition, columns are low turbulence machines that have well-defined hydrodynamic
conditions. These advantages translate to ease of maintenance, scale-up, modeling, and a
reduction of short-circuiting usually observed in conventional flotation machines.
Phosphate Flotation Technology
The United States is the world’s largest producer of phosphate rock. In 1999, this
industry accounted for approximately 45 million tons of marketable product valued at
more than $1.1 billion annually (United States Geological Survey, Mineral Commodity
Summaries, January 1999). Approximately 83% of this production can be attributed to
mines located in Florida and North Carolina. In subsequent reports it is stated that “U.S.
phosphate rock production and use dropped to 40 year lows in 2006.” This contracting
market requires ever more efficient operations to remain competitive.
Prior to marketing, the run-of-mine phosphate matrix must be upgraded to
separate the valuable phosphate grains from other impurities. The first stage of
processing involves screening to recover a coarse (plus 14 mesh) high-grade pebble
product. The screen underflow is subsequently deslimed at 150 mesh to remove fine
clays. Although 20-30% of the phosphate contained in the matrix is present in the fine
fraction, technologies currently do not exist that permit this material to be recovered in a
cost-effective manner. The remaining 14 x 150 mesh fraction is classified into coarse
(e.g., 14 x 35 mesh) and fine (e.g., 35 x 150 mesh) fractions that are upgraded using
conventional flotation machines, column flotation cells, or other novel techniques such as
belt flotation (Moudgil and Gupta, 1989). The fine fraction (35 x 150 mesh) generally
responds well to froth flotation. In most cases, conventional (mechanical) flotation cells
can be used to produce acceptable concentrate grades with recoveries in excess of 90%.
On the other hand, high recoveries are often difficult to maintain for the coarser (14 x 35
mesh) fraction.
Prior work has shown that the recovery of coarse particles (e.g., larger than 30
mesh) can be less than 50% in many industrial operations (Davis and Hood, 1992). For
example, Figure 2 illustrates the sharp reduction in recovery as particle size increases
from 0.1 mm (150 mesh) to 1 mm (16 mesh) for a Florida phosphate operation. Attempts
by plant operators to improve coarse particle recovery often produce an undesirable
side effect: diminished flotation selectivity.
The findings presented in Figure 2 are consistent with historical data from other
flotation applications, which show coarse particles are more difficult to recover using
traditional flotation machines. Current research indicates that coarser material is lost due
to unfavorable hydrodynamic conditions and/or competition with the fines for the
available bubble surface area. For this reason, split-feed circuit arrangements are often
recommended when treating a wide feed particle size distribution. In addition, new and/or
improved technologies need to be developed that are more efficient in treating coarser
feeds.
One well-known method of improving flotation performance is to classify the
feed into narrow size fractions and to float each size class separately. This technique,
which is commonly referred to as split-feed flotation, has several potential advantages.
These advantages include higher throughput capacity, lower reagent requirements, and
improved separation efficiency. Split-feed flotation has been successfully applied to a
wide variety of flotation systems including coal, phosphate, potash, and industrial
minerals (Soto and Barbery, 1991).
The United States Bureau of Mines (USBM) conducted one of the most
comprehensive studies of the coarse particle recovery problem in the phosphate industry
(Davis and Hood, 1993). This investigation involved the sampling of seven Florida
phosphate operations to identify sources of phosphate losses that occur during
beneficiation. According to this field survey, approximately 50 million tons of flotation
tailings are discarded each year in the phosphate industry. Although the tailings contain
only 4% of the matrix phosphate, more than half of the potentially recoverable phosphate
in the tailings is concentrated in the plus 28 mesh fraction. In all seven plants, the coarse
fraction was higher in grade than the overall feed to the flotation circuits. In some cases, the
grade of the plus 28 mesh fraction in the tailings approached 57% BPL. The USBM study
indicated that the flotation recovery of the plus 35 mesh fraction averaged only 60% for
the seven sites included in the survey. Furthermore, the study concluded that none of the
seven phosphate operations had been successful in efficiently recovering the coarse
phosphate particles.
There have been several attempts to improve the poor recovery of coarse (16 x 35
mesh) phosphate grains using improved flotation reagents. The University of Florida,
under the sponsorship of the Florida Institute of Phosphate Research (FIPR Project 02-067-099),
completed one such investigation in early 1992. This study showed that the
flotation of coarse phosphate is very difficult and recoveries of only 60% or less are
normally achievable. The goal of the FIPR study was to determine whether the recovery
of coarse phosphate particles could be enhanced via collector emulsification and froth
modification achieved by frothers and fines addition. Plant tests conducted as part of this
project showed that the appropriate selection of reagents could improve the recovery of
coarse phosphate (16 x 35 mesh) by up to 6 percentage points. Furthermore, plant tests
conducted with emulsified collector provided recovery gains as large as 10 percent in
select cases. Unfortunately, reports of follow-up work by industry that support these
findings are not available.
In 1988, FIPR also provided financial support (FIPR Project 02-070-098) to Laval
University to determine the mechanisms involved in coarse particle flotation and to
explain the low recoveries of such particles when treated by conventional froth flotation.
In light of this study, these investigators proposed the development of a modified low
turbulence device for the flotation of coarse phosphate particles. Laboratory tests
indicated that this approach was capable of achieving recoveries of greater than 99% for
coarse phosphate feeds. In addition, the investigators noted that this approach did not
suffer from high reagent costs associated with other strategies designed to overcome the
coarse particle recovery problem. Although the preliminary data was extremely
promising, this work was never carried through to industrial plant trials due to problems
with the sparging and tailings discharge systems.
Building on these early findings, Soto and Barbery (1991) developed a negative
bias flotation column that improved coarse particle recovery. It was surmised that the
only factors preventing conventional columns from being ideally suited for coarse
particle recovery were wash water flow and a thick froth layer. Wash water is used in
column flotation to “wash” fine gangue (i.e., clays) from the product froth. However,
wash water also forced some of the coarser particles back into the pulp resulting in a
reduction in recovery. Soto and Barbery removed the wash water, which resulted in a net
upward flow through the column (i.e., negative bias flow). In addition, they added an
upward flow of elutriation water to assist in the transport of coarse particle-bubble
aggregates into the overflow launder. As a result of these modifications, Barbery (1989)
was able to achieve a four-fold improvement in coarse particle recovery when utilizing
this negative bias column. Essentially, this device is operated in a flooded manner and in
the absence of a froth zone. Several similar devices have also been introduced that make
use of this same principle to improve coarse particle flotation (e.g., Laskowski, 1995).
Several other alternative processes have been used by industry in an attempt to
improve the recovery of the coarser particles. These techniques include gravity-based
devices such as heavy media cyclones, tables, and spirals, as well as belt conveyors that
have been modified to perform skin-flotation (Moudgil and Barnett, 1979). Although
some of these units have been successfully used in industry, they normally must be
supplemented with scavenging flotation cells to maintain acceptable levels of
performance (Moudgil and Barnett, 1979; Lawver et al., 1984). Furthermore, these units
typically require excessive maintenance, have low throughput capacities, and suffer from
high operating costs.
PROJECT OBJECTIVES
One of the most obvious advantages of improved coarse particle recovery is the
increased production of phosphate rock from reserves currently being mined. For
example, a survey of one Florida plant indicated that 7-15% of the plant feed was present
in the plus 35 mesh fraction. At a 2,000 tph feed rate, this fraction represents 140-300 tph
of flotation feed. An improvement in coarse particle recovery from 60% to 90% would
represent an additional 50-100 tph of phosphate concentrate. This tonnage corresponds to
an additional $7.5-15 million of revenues. This incremental tonnage and income could be
produced without additional mining or reserve depletion. Past attempts to improve the
recovery of coarse phosphate particles have been unsuccessful for technical or cost
reasons. In addition, many of the proposed solutions could not be transferred to a plant
scale operation. As a result, it is apparent that a new low-cost technology is needed to
improve the recovery of coarse phosphate particles (>35 mesh).
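The arithmetic behind these estimates can be checked with a short script. The operating hours and selling price below are assumptions chosen for illustration, since the text quotes only the resulting tonnage and revenue ranges.

```python
# Sketch of the coarse-recovery economics described above.
# Assumed values (not stated in the report): ~6,000 operating hours/year
# and a $24/ton concentrate selling price.
PLANT_FEED_TPH = 2000.0          # total plant feed rate
COARSE_FRACTION = (0.07, 0.15)   # plus 35 mesh share of plant feed
RECOVERY_GAIN = 0.90 - 0.60      # improvement in coarse recovery
HOURS_PER_YEAR = 6000.0          # assumed operating schedule
PRICE_PER_TON = 24.0             # assumed selling price, $/ton

def extra_concentrate_tph(frac):
    """Additional concentrate from recovering more of the coarse fraction."""
    coarse_feed_tph = PLANT_FEED_TPH * frac   # 140-300 tph of coarse feed
    return coarse_feed_tph * RECOVERY_GAIN    # incremental tph recovered

low_tph = extra_concentrate_tph(COARSE_FRACTION[0])
high_tph = extra_concentrate_tph(COARSE_FRACTION[1])

low_revenue = low_tph * HOURS_PER_YEAR * PRICE_PER_TON    # $/year
high_revenue = high_tph * HOURS_PER_YEAR * PRICE_PER_TON  # $/year
```

Under these assumptions the incremental tonnage works out to roughly 42-90 tph and $6-13 million per year, consistent in magnitude with the 50-100 tph and $7.5-15 million ranges quoted above.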
The objective of this study is to conduct an in-plant pilot-scale evaluation of a
new separator known as the HydroFloat concentrator. This technology is specifically
designed to improve the recovery of coarse phosphate particles that are currently lost in
industrial processing plants. The study includes (i) a technical evaluation that examines
the capabilities of the new technology in terms of product recovery, quality and
throughput capacity and (ii) an economic analysis that examines the financial feasibility
of implementing the system in the Florida phosphate industry.
EXPERIMENTAL
WORK PLAN PREPARATION
A project work plan was prepared and submitted to FIPR and PCS Phosphate for
approval. This work plan provided plant personnel the opportunity to modify the
proposed work and to incorporate any ideas or new information that may have become
available between the project award date and the initiation of activities. The work plan
provided a description of the on-site testing strategy as well as experimental procedures,
analytical methods, and reporting guidelines for the proposed work. The original
schedule for the proposed work is presented in Figure 3. According to this chart, the work
was scheduled for completion in 12 months.

Figure 3. Project Tasks and Schedule:
Task 1 - Work Plan Preparation
Task 2 - HydroFloat Testing (Subtasks 2.1 Equipment Setup, 2.2 Shakedown Testing, 2.3 Detailed Testing, 2.4 Comparison Testing)
Task 3 - Conditioner Testing (Subtasks 3.1 Equipment Setup, 3.2 Shakedown Testing, 3.3 Detailed Testing)
Task 4 - Long-Duration Testing
Task 5 - Process Evaluation (Subtasks 5.1 Technical Evaluation, 5.2 Modeling/Simulation)
Task 6 - Sample Analysis
Task 7 - Final Report Preparation

However, a downturn of the phosphate industry resulted in on-site manpower
reduction. As a result, the industrial participants
extended the length of the project to 18 months to accommodate changes in staffing
levels and production schedules. This extension was also used to accommodate additional
pilot-scale testing of a novel flotation reagent in conjunction with the University of Utah.
HYDROFLOAT TESTING
Equipment Setup
A schematic of the pilot-scale test circuit used to evaluate the performance of the
HydroFloat separator is shown in Figure 4. The test circuit consisted of three primary unit
operations, i.e., pilot-scale classifier, slurry conditioner, and HydroFloat separator. In this
circuit, the coarse underflow from an existing bank of classifying cyclones was fed to a 5
ft x 5 ft Eriez CrossFlow classifier (see Figure 5). The preliminary tests showed that the
classifier was capable of handling solid flows in excess of 150 ton/hr (6 ton/hr/ft2) despite
Figure 4. Pilot-Scale Test Circuit Used to Evaluate the HydroFloat Separator.
(Stages: classification, conditioning, separation. The circuit feed reports to the
CrossFlow Classifier, which splits it into a fine overflow (minus 0.6 mm) and a
coarse underflow (plus 0.6 mm). The coarse underflow passes to a stirred-tank
conditioner, or alternately a rotary drum conditioner, and then to the HydroFloat
Separator, which produces the phosphate concentrate and a waste product.)

conditioning could be performed using
either a stirred-tank (four stage) or a single-
stage rotary drum (30-inch diameter)
conditioner. The conditioner circuit was
able to operate reliably at approximately 40-
75% solids at a maximum mass flow rate of
4-6 ton/hr (dry solids). This corresponds to
a range in retention time from 1-3 minutes.
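The retention-time figure follows from the working volume and the volumetric slurry flow. The sketch below assumes a conditioner working volume of roughly 0.17 m³ and a solids specific gravity of 2.65; neither value is stated in the report.

```python
# Sketch of the conditioner retention-time estimate. The conditioner
# volume and solids specific gravity are assumptions for illustration;
# the report quotes only the 1-3 minute retention range.
SOLIDS_SG = 2.65     # assumed specific gravity of phosphate matrix
WATER_SG = 1.0
VOLUME_M3 = 0.17     # assumed working volume of the conditioner train

def retention_minutes(dry_tph, pct_solids):
    """Retention time = working volume / volumetric slurry flow."""
    water_tph = dry_tph * (100.0 - pct_solids) / pct_solids
    slurry_m3_per_h = dry_tph / SOLIDS_SG + water_tph / WATER_SG
    return VOLUME_M3 / (slurry_m3_per_h / 60.0)

tau = retention_minutes(dry_tph=5.0, pct_solids=60.0)  # mid-range conditions
```

With the mid-range feed conditions, this gives a retention time near 2 minutes, inside the 1-3 minute range quoted above.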
The conditioned slurry flowed by
gravity to the feed inlet for either the
HydroFloat separator (see Figure 7) or a 20-inch diameter flotation column (not shown).

Figure 7. Photograph of the Pilot-Scale HydroFloat Separator.

This arrangement made it possible to
directly compare the effectiveness of the HydroFloat separator with existing column
technology. The test circuit was installed with all necessary components (i.e., feeder,
conditioner, reagent pumps, etc.) required to operate the separator in continuous mode at
a maximum capacity of 4-6 tph.
ROTARY CONDITIONER TESTING
Equipment Setup
Laboratory test data indicate that a significant increase in BPL recovery can be
achieved by improving the conditioning of the coarse phosphate matrix. In particular, a
rotary drum conditioner has been shown to be capable of improving the separation
DATA RECONCILIATION
To ensure that the test data are reliable and self-consistent, all test data were
analyzed and adjusted using a mass balance program. For the testing of the HydroFloat,
samples of the feed, concentrate and tailings streams were collected for each test. A head
sample was taken from each stream and the remainder was screened into four different
size fractions (+16, 16x28, 28x35 and –35) and weight percentages were determined.
Chemical analysis (%BPL and %insoluble) of each of the five fractions was then
performed. The results from the chemical analysis of those streams were used to
determine performance characteristics such as BPL recovery, insoluble rejection, etc.
The mass balance was conducted based on the conservation of total mass and
phosphate throughout the circuit. This balance provides three independent linear
equations for steady-state operation:
\sum_{i=1}^{n} (\%\,\text{in class})_i = 100 \qquad [1]

\sum_{i=1}^{n} (\text{PercentMass})_i \times (\text{ComponentContent})_i - \text{HeadComponent} = 0 \qquad [2]

\text{Feed} = \text{Concentrate} + \text{Tailings} \qquad [3]
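As a minimal sketch of these closure checks, the snippet below applies Equations [1]-[3] to a hypothetical set of stream data; all numbers are invented for illustration.

```python
# Hypothetical stream data: size-class weight percents and component
# assays (%BPL). All values are illustrative, not measured.
size_pct = {"+16": 10.0, "16x28": 30.0, "28x35": 40.0, "-35": 20.0}
assay = {"+16": 60.0, "16x28": 55.0, "28x35": 40.0, "-35": 20.0}
head_assay = 42.5  # chosen to equal the weighted sum of class assays

# Equation [1]: size-class percentages must sum to 100.
closure_1 = abs(sum(size_pct.values()) - 100.0) < 1e-9

# Equation [2]: weighted component content must reproduce the head assay.
weighted = sum(size_pct[k] / 100.0 * assay[k] for k in size_pct)
closure_2 = abs(weighted - head_assay) < 1e-9

# Equation [3]: total mass balance, Feed = Concentrate + Tailings.
feed, conc, tails = 100.0, 22.0, 78.0
closure_3 = abs(feed - (conc + tails)) < 1e-9
```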
In many cases, the experimental data from the test circuits were over-defined.
This occurs when redundant streams are sampled or when multiple independent assays
(e.g., % BPL and %Insoluble) are available for each stream. Assays for different
components in each stream may result in different (but equally valid) estimates of the
concentrate mass yield (Y). The yield may be calculated using the well-known two-
product formula given by:
Y = \frac{f - t}{c - t} \qquad [4]

where f, c and t are experimental assays for the feed, concentrate and tailing streams,
respectively. For example, Table 1 summarizes the mass yields calculated for the unit.
The yields calculated from the two assays are very close in some cases, regardless of
whether they are based on % BPL or % insols. The yields determined for the +16 and
16x28 mesh material are in this group. On the other hand, the yields calculated using the
two different assays varied in some cases. The yields determined for the head sample,
28x35 and –35 mesh material fall into this category for this particular example. These
discrepancies are due to experimental errors associated with process fluctuations,
sampling techniques and laboratory analysis procedures.
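Equation [4], applied to hypothetical feed, concentrate, and tailings assays (invented for illustration), shows how the two assay types can yield slightly different estimates:

```python
def two_product_yield(f, c, t):
    """Concentrate mass yield from feed, concentrate and tailing assays (Eq. [4])."""
    return (f - t) / (c - t)

# Hypothetical assays for one size class (%BPL and %insol), chosen so the
# two estimates disagree slightly, as observed for some entries in Table 1.
y_bpl = two_product_yield(f=20.0, c=65.0, t=8.0)     # BPL-based yield
y_insol = two_product_yield(f=75.0, c=30.0, t=88.0)  # insol-based yield
```

Both estimates land near 21-22% yield, but they are not identical, which is exactly the kind of discrepancy the mass balance program must resolve.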
One method of resolving this dilemma is to construct a “self-consistent” data set
which satisfies the mass balance criteria given by Equations [1] - [3]. This procedure
must be performed such that the minimum total adjustment is made to the measured data.
This can be achieved by minimizing the weighted sum-of-squares (WSSQ) given by:
\text{WSSQ} = \sum_{k=1}^{c} \sum_{i=1}^{m} \frac{(A_i^{k*} - A_i^k)^2}{(S_i^k)^2} + \sum_{i=1}^{m} \frac{(M_i^* - M_i)^2}{(S_i)^2} \qquad [5]

where S_i^k and S_i are the standard deviations of the measured assay values and measured
flow rates, respectively. The superscript * is used to distinguish estimated values from
Table 1. Comparison of Yield Calculations.
Size Class BPL Yield Insol Yield
Head 22% 26%
+16 22% 22%
16x28 18% 18%
28x35 27% 28%
-35 68% 70%
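A minimal instance of the reconciliation idea behind Equation [5] is estimating one self-consistent yield from several assays at once. Assuming the residual for each component is f - [Y·c + (1 - Y)·t], the weighted least-squares yield has a closed form; the snippet below sketches that idea and is not the full multi-stream balance program used in the study. All assay values are hypothetical.

```python
# Weighted least-squares estimate of the concentrate yield Y from multiple
# component assays (e.g., %BPL and %insol). Residual model for each
# component: (f - t) - Y*(c - t), weighted by the measurement variance.
assays = [
    # (feed, concentrate, tailings, std. dev. of measurement) - hypothetical
    (20.0, 65.0, 8.0, 0.5),   # %BPL
    (75.0, 30.0, 88.0, 1.0),  # %insol
]

# Closed-form minimizer of sum_k [(f-t) - Y*(c-t)]^2 / s^2 over Y.
num = sum((f - t) * (c - t) / s**2 for f, c, t, s in assays)
den = sum((c - t) ** 2 / s**2 for f, c, t, s in assays)
y_star = num / den  # reconciled yield minimizing the weighted sum of squares

wssq = sum(((f - t) - y_star * (c - t)) ** 2 / s**2 for f, c, t, s in assays)
```

The reconciled yield falls between the two single-assay estimates, with the lower-variance assay pulling it more strongly.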
RESULTS AND DISCUSSION
HYDROFLOAT RESULTS
Shakedown Testing
Shakedown was completed without significant difficulty. The shakedown
tests confirmed that the 5 ft x 5 ft CrossFlow could supply sufficient feed to the
conditioner and the 2 ft x 2 ft HydroFloat. Several minor operational problems were
resolved on site. These included replacement of the original pneumatically powered,
stirred-tank conditioner with electric agitators since the plant air system could not deliver
the required air flow and pressure. The electric mixers easily maintained the coarse
phosphate matrix in suspension up to approximately 65% solids. In addition, rectangular
inserts were placed into the conditioner cells to produce an octagonal shape. This
configuration increased efficiency by minimizing the “sanding” in the corners.
The HydroFloat aeration system also required minor alterations to the piping
manifold to ensure consistent distribution of air throughout the teeter-bed. Poor
distribution resulted in channeling through the teeter-bed in localized areas. The air/water
distribution manifold was redesigned (with fewer holes) to resolve this problem.
Detailed Testing
Tests were conducted to evaluate the effect of key operating and design
parameters on the performance of the HydroFloat separator. Variables investigated
included feed injection depth, teeter-water injection spacing, mass feed rate, feed solids
content, water rate, bed depth, aeration rate, and reagent dosage. All tests were conducted
on a classified feed that was nominally 10 x 35 mesh.
rate from 1 scfm to 5 scfm resulted in an increase in BPL recovery and product insols
content. (Note that all air flow values were converted to standard conditions for
reporting purposes.) The increase in recovery can be attributed to an increase in the
flotation rate. The increase in flotation rate with gas flow rate is well documented in the
technical literature. An increase in gas flow rate (at the same bubble size) results in a
greater gas flux through the column and, consequently, a greater probability of floatable
solids encountering an air bubble (Schulze, 1984).
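As a rough illustration of this effect, a standard first-order flotation model can be used, with the rate constant taken as proportional to superficial gas velocity. The constants below are assumptions for illustration, not values from this test work.

```python
import math

K_PER_JG = 1.5  # assumed rate constant per unit gas velocity, min^-1 per (cm/s)
TAU_MIN = 2.0   # assumed particle residence time, minutes

def recovery(jg_cm_s):
    """First-order recovery R = 1 - exp(-k*tau), with k proportional to Jg."""
    k = K_PER_JG * jg_cm_s
    return 1.0 - math.exp(-k * TAU_MIN)

r_low = recovery(0.3)   # low aeration rate
r_high = recovery(0.9)  # high aeration rate
```

Under this simplification, raising the gas velocity raises recovery, but with diminishing returns as recovery approaches 100%, consistent with the existence of an optimum aeration rate once gangue entrainment is considered.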
The increase in product insols content is attributed to several factors. The first is
simple hydraulic entrainment. The increased gas flow rate results in greater turbulence
within the cell that carries hydrophilic gangue particles into the overflow concentrate. In
addition, some of the phosphate particles at the test site were locked with silica (insols).
Therefore, an increase in phosphate recovery will naturally produce a higher insols
content in the concentrate product. The optimum aeration rate is between 3 and 4 SCFM,
which maximizes recovery while avoiding a large increase in silica contamination.
Frother Dosage. A glycol-type frother (F-507) was used during the HydroFloat
evaluation. According to the data presented in Figure 16, the BPL recovery dropped as
the frother addition rate increased. At 0.35 lbs/ton of frother, the BPL recovery ranged
from 75% to 80%. At 0.80 lbs/ton, however, the BPL recovery was only 67%. The
reduction in recovery is attributed to a decrease in bubble size as frother concentration
increased. Smaller bubbles (<0.5 mm) create bubble/particle aggregates with less
buoyancy when compared to larger bubbles (~1 mm). In contrast to conventional flotation
processes, it is believed that the bubble-particle aggregates formed with larger bubbles
As illustrated in Figure 19, the HydroFloat was able to maintain a BPL recovery
averaging 98% at a feed rate exceeding 2.0 tph/ft2. It should be noted that at a feed rate of
2.5 tph/ft2, the capacity of the conditioner (not the HydroFloat) was exceeded. At this
capacity, the poor conditioning caused a decrease in the downstream performance of the
HydroFloat separator. Thus, the maximum capacity of the HydroFloat cell could not be
fully established in the current test program. Nevertheless, the data clearly demonstrate
that the capacity of this new technology is far in excess of that achieved using the
flotation column cells currently used by the phosphate industry.
ROTARY CONDITIONER RESULTS
Shakedown Testing
Figure 20 compares the initial separation results obtained using the rotary and
stirred-tank conditioners for a 10 x 35 mesh feed. The data show that an acceptable
product grade (i.e., <10% insols content) can be obtained using either conditioning
system. The overall recovery, however, was nearly 20% higher for the tests conducted
using the rotary conditioner. The difference in recovery can be attributed to the presence
of slimes generated by the stirred-tank conditioner.
It is important to note that in current plant practice, the conditioner feed size
distribution typically ranges from 10 mesh to 150 mesh. The presence of the fines
fraction (35 x 150 mesh) contributes to an increase in viscosity that helps maintain
coarser solids in suspension. After classification to remove the 35 x 150 mesh material,
however, the 10 x 35 mesh fraction is highly prone to “sanding.” As such, high mixing
speeds are required to maintain the plus 35 mesh solids in suspension when using the
investment of $21 million to convert from sized feed flotation (Medium) to sized feed
flotation (High) would have a 20% internal rate of return for 10 years of operation.
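The 20% IRR claim can be cross-checked with a short discounted cash flow sketch. The uniform annual cash flow below is back-calculated for illustration and is not a figure reported in the study.

```python
# Back-calculate the uniform annual cash flow that gives a 20% IRR on a
# $21 million investment over 10 years, then confirm NPV ~ 0 at that rate.
INVESTMENT = 21e6
RATE = 0.20
YEARS = 10

# Present-worth annuity factor: (1 - (1+r)^-n) / r
annuity = (1.0 - (1.0 + RATE) ** -YEARS) / RATE
annual_cf = INVESTMENT / annuity  # cash flow required each year

npv = -INVESTMENT + sum(annual_cf / (1.0 + RATE) ** y
                        for y in range(1, YEARS + 1))
```

Under these assumptions, an annual cash flow of roughly $5 million over 10 years is what makes a $21 million investment return 20%.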
Ore 2. The cost model simulations for Ore 2 were based on the following annual
production statistics presented in Table 7. The same phosphate mine average statistics as
described for the Ore 1 simulations were used in the Ore 2 investigations. The results of
the three flotation scenarios and corresponding production cost estimates are summarized
in Table 8.
The margins, assuming $24/t selling price, for the scenarios are tabulated below in
Table 9. Also shown are the net margins and net present values. The net margins are the
Table 7. Annual Production Statistics for Ore 2.
Operating schedule: 7 days/week
No. of draglines: 3
Acres mined: 528
Overburden stripped: 21,100,000 bcy*
Ore recovered: 15,100,000 bcy*
Ore density: 1.188 dry t/bcy
Pebble: 1,812,000 t/y
Flotation feed: 11,105,000 t/y
Feed %BPL: 15.9
*Based on 2000 TFI Report
Table 8. Summary of Results for Ore 2.
Recovery Scenario
Low Medium High
%BPL Recoveries
Coarse Flotation na 68 92
Fine Flotation na 86 86
Combined Flotation 70.3 77.6 87.6
Concentrate t/y 1,793,000 1,979,000 2,234,000
Production cost/ton $16.95 $16.29 $15.47
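As a check on these figures, the gross margin for each scenario follows directly from Table 8 and the $24/t selling price assumed in the text. Note that the net margins and NPVs of Table 9 additionally reflect capital and other costs not reproduced here.

```python
PRICE = 24.0  # $/t selling price assumed in the text

# (concentrate t/y, production cost $/t) from Table 8
scenarios = {
    "low": (1_793_000, 16.95),
    "medium": (1_979_000, 16.29),
    "high": (2_234_000, 15.47),
}

# Gross margin = tonnage * (price - production cost), $/year
margin = {name: tons * (PRICE - cost)
          for name, (tons, cost) in scenarios.items()}
```

The High recovery scenario earns the largest gross margin both because more concentrate is sold and because the unit production cost is lower.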
SUMMARY
A detailed test program to evaluate the Eriez HydroFloat separator for coarse
phosphate flotation was completed at PCS Phosphate in White Springs, Florida. The
primary objectives of this program were:
• to evaluate the principal operating parameters of the HydroFloat,
• to conduct comparison tests with an open-column flotation cell, and
• to compare a rotary, drum-type conditioner to conventional stirred-tanks for
coarse phosphate (plus 35 mesh) conditioning.
To meet these objectives, nine different controllable variables were examined in the pilot-
scale test program. The following generic observations can be made based on this test
work:
• Increased recovery and product insols were obtained at shallower feed injection
depths.
• Distribution of air/water was improved with increased spacing of water injector
holes and distribution pipes.
• Increased recovery and insols were obtained with increases in the fluidization
(teeter bed) water rate.
• Increased product insols were observed with increasing bed level, while bed level
had little impact on recovery.
• Improved recovery was observed with higher conditioning percent solids, while
no influence on product grade was noted (up to the conditioner capacity limit).
• Increased recovery and product insols were observed with increasing aeration
rate.
• Decreased recovery was observed with an increase in frother addition rate.
• Increased recovery was observed with collector dosage up to an optimum plateau
at 0.7 lbs/ton.
In each case, theoretical explanations can be provided to account for the observed trends
in grade and/or recovery.
Comparison tests were also conducted with a standard open-column cell. The
results indicate that the HydroFloat achieved a higher product recovery at a similar
quality as compared to the open column. Furthermore, the HydroFloat was able to
maintain performance at feed rates in excess of twice that of the standard column. A
summary of results from the comparison testing is provided in Table 10. The most
notable findings are the very high recovery (>98%) and high capacity (>2.5 tph/ft2) of the
HydroFloat cell.
The final objective of the test program was to evaluate a rotary drum-type
conditioner as compared to conventional stirred-tanks for coarse particle conditioning.
These tests were conducted using a 30-inch diameter drum designed by Jacobs
Engineering. Comparison tests were conducted using the HydroFloat separator in
conjunction with the two conditioners. The results from these tests, which are
summarized in Table 11, showed that the rotary drum design dramatically outperformed
the standard stirred-tank conditioner. The drum-type conditioner provided a substantially
higher BPL recovery at an identical product quality. This improvement is attributed to
minimal slimes production in the drum conditioner. Conversely, the stirred-tank style
tends to generate phosphate slimes (minus 325 mesh) that result in a lower recovery and
increased reagent consumption. The increase in phosphate slimes results from the
excessive energy required to maintain the “ultracoarse” feed in suspension without the
fines fraction (35 x 150 mesh).
2 REAGENTS IN COAL PREPARATION: WHERE DO THEY GO?
Josh Morris, Emily Sarver, Gerald Luttrell, John Novak
Paper peer-reviewed and originally published in proceedings of the 51st Annual Conference of
Metallurgists (Canadian Institute of Mining, Metallurgy and Petroleum), October 1-3, 2012.
Niagara Falls, Ontario, paper 7391. Reproduced with permission of the Canadian Institute of
Mining, Metallurgy, and Petroleum. www.cim.org
1. Abstract
A variety of reagents are utilized in coal preparation, but aside from performing their
desired function relatively little is known about the behavior of these reagents within the
processing circuits. Where exactly do reagents go once dosed? In this paper, we present
preliminary results of partitioning studies on frother (i.e., MIBC) and collector (i.e., petro-diesel)
chemicals commonly used in coal flotation, and examine implications for water management
(e.g., in closed-loop systems). Additionally, we discuss the usefulness of such data in predicting
environmental transport and fate of chemicals – which is currently a top priority for industry.
2. Introduction
The purpose of coal preparation is to upgrade mined coal into more valuable products.
Since coal is primarily used as a fuel source for electricity generation, product specifications are
typically contracted to minimize unwanted constituents that detract from the overall heat value
(e.g., ash and moisture) or that add to environmental pollution or other problems like corrosion at
a power plant (e.g., sulfur) (Pitt and Millward 1979). Failure to meet specifications results in a
financial penalty for the coal producer (Szwilski 1986), and thus preparation processes have
evolved to simultaneously optimize recovery of valuable “clean” coal with rejection of mineral
matter and moisture. In addition to advancements in equipment and circuitry, development and
application of various chemical reagents has dramatically improved the performance of coal
preparation processes.
Contemporary preparation plants typically include multiple circuits that can be
categorized by the size of particles they process: coarse, intermediate, and fine/ultra-fine (Figure
2.1). Coarse and intermediate circuits generally rely on size classification and gravity separations
(e.g., dense-media cyclones), and do not require significant chemical reagents. However, fine
and ultra-fine circuits often use froth flotation to separate coal from impurities, which requires
chemical additives (Table 2.1). The primary additives include collectors, which coat the surface
of the coal particles to render them (more) hydrophobic and thus more likely to attach to air
bubbles and float; and frothers, which aid in the formation and stability of the froth that will
accumulate the floated coal particles. Modifiers are also commonly added to flotation circuits to
regulate pH in instances where coal or impurity characteristics may change water chemistries
(Laskowski 2001). Following flotation, coagulants and flocculants are often utilized in solid-
liquid separations (i.e., dewatering or clarification) for coal products, and for tailings slurries
prior to their disposal in impoundments. Coagulants function via double-layer compression1 to
bring colloidal particles together, while flocculants promote bridging between the grouped
colloids – and the combined result is enhanced sedimentation (Wills 2006). Defoaming or anti-
foaming agents may also be required to avoid fouling of dewatering operations.
1 Double-layer compression refers to the action of added ionic species on the electrical double layer surrounding a
colloid or fine particle. In the case of negatively charged coal, the addition of a cationic coagulant effectively
reduces the (repulsive) electrostatic forces between particles such that van der Waals forces may attract the
particles together (Scott, J. H. (1976). Coagulation Study of a Bound Water Bulked Sludge. Master of Science
thesis, Virginia Polytechnic Institute and State University).
The goal of this paper is to begin answering these questions. The following sections
review the potential fates and impacts of coal preparation reagents, and present preliminary data
regarding the partitioning of frothers and collectors between coal and process water.
Table 2.1: Common reagents in coal preparation (McIntyre 1974; Knapp 1990; Pugh 1996;
Laskowski 2001)
Collectors / Hydrocarbons: Fuel Oil No. 1 - Kerosene, Fuel Oil No. 2 - Diesel, Fuel Oil No. 6
Frothers / Aliphatic Alcohols: Methyl Isobutyl Carbinol (MIBC)
Frothers / Polyglycols: DF 250, Dowfroth M150
Frothers / Hydroxylated Polyethers: Nalco 8836, Polyoxyl Sorbitan Monolaurate (PSM)
Modifiers / Promoters: NaCl, CaCl2, Na2SO4
Modifiers / pH Regulators: H2SO4, CaO
Dewatering/Clarification Reagents / Coagulants (cationic): Organic Starches, Inorganic Salts, Polyamines
Dewatering/Clarification Reagents / Flocculants (non-ionic): Organic Starches, Polyacrylamide
Dewatering/Clarification Reagents / Flocculants (anionic): Organic Starches, Acrylamide/Acrylate Copolymers, Polyacrylates
Defoaming Reagents / Defoamers: Tributyl Phosphate (TBP), Polydimethylsiloxane (PDMS)
3. Reagent Fates and Implications
Determining the fate of coal processing reagents necessitates tracking those reagents
from their addition points in a preparation plant (e.g., Table 2.1) to some ultimate destination.
Based on a simple materials balance approach, only a fixed number of possibilities exist for
reagents leaving the plant: they may end up with the clean coal products, with the tailings by-
products, or with recycled water, or they may be lost (e.g., via volatilization or spills).
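This materials-balance framing can be sketched as a set of partition fractions that must sum to one. The fractions below are placeholders for illustration, not measured values.

```python
# Possible destinations for a reagent leaving the plant, expressed as
# mass fractions of the dosed amount. Values are placeholders only.
partition = {
    "clean_coal": 0.70,
    "tailings": 0.15,
    "recycled_water": 0.10,
    "losses": 0.05,  # e.g., volatilization or spills
}

def dosed_mass_to_streams(dosed_kg, fractions):
    """Split a dosed reagent mass among its possible destinations."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {k: dosed_kg * v for k, v in fractions.items()}

streams = dosed_mass_to_streams(100.0, partition)
```

Measuring any three of the four destination streams fixes the fourth by difference, which is the basis of the partitioning experiments described below.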
3.1 Environmental Fate and Transport
The environmental fate and transport of processing reagents has been scarcely examined.
It is generally expected that collectors (e.g., petro-diesel) substantially partition to coal products
because their chemistry promotes sorption to the coal particles (Watts 1998). Any collector that
does not sorb may remain with water, either floating on the water surface, as an emulsion, or as a
dissolved species – although water solubility is likely low. Frothers, on the other hand, are not
expected to significantly sorb to coal (or other solids), and thus should follow water streams.
Alcohol-based frothers like MIBC tend to have relatively low water solubility and low to
moderate volatility (Howard 1993), which indicate that they may remain at the water-air
interface; whereas glycol-based frothers like Dowfroth M150 are much more soluble in water
and are relatively non-volatile. Coagulant and flocculant reagents are of course expected to
partition to fine coal or tailings particulates, at least in the short-term. These chemicals may well
remain with dewatered coal products; but in the case of reagents associated with tailings solids, it
is difficult to predict how they might react or mobilize under disposal facility conditions.
Reagents that partition to coal products are likely to be combusted with the coal – unless
they volatilize during handling and transport. The combustion by-products of the reagents may
enter the atmosphere as either gaseous or particulate emissions, which may then be returned to
the earth via either wet or dry deposition. In the case of petro-diesel collector (termed “diesel” in
this paper), for example, it is expected that much of the alkane fraction2 will be completely
combusted and converted to carbon dioxide and water; however, PAHs that occur naturally in
the diesel or that form as a result of incomplete combustion might also be released.3 In addition
to atmospheric emissions, reagents or combustion by-products of reagents might become part of
the solid fly ash (i.e., waste from coal combustion) and eventually be disposed (e.g., in landfills),
either because the reagents were associated with the mineral fraction (i.e., noncombustible) of
the coal or because their aerosols were scrubbed from flue gases. In the example of diesel
2 Diesel is not a specific compound, but rather a range of compounds collected from fractional distillation of
petroleum (i.e., between 200-400 °C). Its general composition includes primarily moderate weight alkanes (i.e.,
C15-C25), and also cycloalkenes and polyaromatic hydrocarbons (PAHs) (Watts 1998).
3 PAHs are an environmental concern because they pose human and ecological health risks (ATSDR 2009).
However, the bioavailability of PAHs derived from diesel combustion is not well understood (Scheepers and Bos
1992).
collector that partitions to coal products, this is another likely scenario for some PAHs (Liu et al.
2008). Following atmospheric deposition or disposal of fly ash, coal processing reagents or their
by-products could move through terrestrial and aquatic ecosystems via hydraulic or biologic
transport processes.
For reagents that partition to either the water or solid fractions of coal tailings,
environmental fate and transport is heavily dependent on the tailings disposal conditions. If
tailings are disposed via underground injection, reagent fate will be governed by chemical
conditions of the storage cavity (i.e., atmosphere, water chemistry, and wall rock mineralogy);
and reagent transport will depend on the degree to which groundwater interacts with the cavity.
More often, tailings are disposed above ground in impoundments or ponds, where the water
fraction is expected to clarify as the solid particles slowly settle. Some of the water is generally
recycled back to the preparation plant and used as make-up water, but a portion of it is released
to the environment via evaporation, engineered discharges (i.e., through decant structures or
spillways) (MSHA 2009), or percolation to the subsurface since impoundments for coal refuse
are rarely lined (USEPA 1999). If reagents or reagent by-products are present in impoundments,
water releases could possibly mobilize them. Other possibilities include photo- or bio-
degradation within the impoundment (e.g., MIBC), or sorption to soils beneath the impoundment
(e.g., diesel).
In the context of environmental fate and transport, it is also important to note that coal
processing reagents are seldom pure products with constant composition. For instance, diesel can
vary with the properties of the petroleum feedstock used to produce it, and some frother reagents
are actually acquired as by-products from the manufacture of other products (e.g., brake fluids).
While variability in reagent quality will not be discussed in detail here, it is a topic that deserves
further attention.
3.2 Residuals in Operations
In addition to tracking processing reagents to better understand environmental
implications, it is becoming increasingly important to understand implications for preparation
plants that utilize large volumes of recycled water. Use of closed water systems (i.e., zero
discharge from site) is growing in response to calls for both water efficiency and water resource
protection. For coal preparation facilities, such systems generally combine the plant and tailings
water circuits, such that “clear” water from an impoundment is recycled back to the plant as
makeup water. Water may also be recycled within the plant (e.g., from the coal product thickener
back to cyclone or flotation circuits).
To the extent that processing reagents (or their by-products) remain in the recycled water,
chemical concentration may have significant impacts on plant operation. While residual
chemicals could potentially reduce the rate of new chemical addition in some cases, it is also
possible that reduced efficiency or fouling of some unit processes may occur. For example,
residual frothers may impact processes that cause significant agitation (e.g., dense media cyclone
separations) (Lahey and Clarkson 1999), or where water chemistry promotes foaming (e.g.,
where recycling has caused increased salt concentrations). Even at sites where only a portion of
water is recycled throughout the plant, it is already well established that such problems lead to
preventative under-dosing of frother in flotation circuits, which in turn sacrifices recovery of fine
coal (Coffey and Lambert 2008). For closed water systems, the implications may be far more
significant, and additional water treatment efforts might be required to maintain efficient
operations.
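As a rough illustration of why residual concentration matters under recycle, consider a per-pass balance in which a fraction r of the water is recycled and a fraction f of the reagent is removed each pass (by sorption, degradation, or discharge). The residual concentration then approaches dose / (1 - r(1 - f)). This model and its parameters are assumptions for illustration, not results from this work:

```python
# Simple recycle accumulation sketch (illustrative assumption, not data):
# each pass, a fresh reagent dose D (mg/L) is added, a fraction f of the
# reagent is removed per pass, and a fraction r of the water (carrying
# its residual reagent) is recycled back to the plant.

def residual_after_passes(dose, recycle_frac, removal_frac, n_passes):
    """Iterate the per-pass balance C_next = D + r*(1-f)*C."""
    c = 0.0
    for _ in range(n_passes):
        c = dose + recycle_frac * (1.0 - removal_frac) * c
    return c

def residual_steady_state(dose, recycle_frac, removal_frac):
    """Closed-form limit of the iteration above."""
    return dose / (1.0 - recycle_frac * (1.0 - removal_frac))

# 10 mg/L fresh dose per pass, 90% water recycle, 50% removal per pass:
print(round(residual_after_passes(10, 0.9, 0.5, 50), 2))  # 18.18
print(round(residual_steady_state(10, 0.9, 0.5), 2))      # 18.18
```

Even with half the reagent removed on every pass, a 90% recycle rate nearly doubles the residual concentration relative to the fresh dose, which is consistent with the under-dosing concerns noted above.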
In light of the environmental and operational implications of processing reagent fates, it
is important to understand how they partition between solid and liquid fractions in preparation
plants.
4. Experimental Methods
Partitioning studies were carried out to obtain preliminary data on the potential fates of
common frother and collector reagents for fine coal flotation4. The frothers were MIBC,
polyoxyl sorbitan monolaurate (PSM), Dowfroth M150, and Nalco 8836, and the collector was
diesel. Raw coal samples were ground using a laboratory hammer mill, and sized by wet
screening for the desired test conditions (Tables 2.2 to 2.4). Full proximate analysis was not
conducted on any of the raw coal samples; however, approximate ash contents were determined
(see below). For each test, a slurry sample was prepared by adding the required weight of sized
raw coal to distilled water, followed by the required volume of reagent. Slurries were mixed for a
4 The frother partitioning tests were partially reported in an MS thesis (Knapp, 1991), but have not been published
elsewhere.
specified contact time, and then the coal particles were separated from the water by either
centrifuging or filtration. Finally, the water was analyzed for residual reagent.
It should be noted that the range of test conditions (i.e., frother and collector dosages, and
coal slurry solid to liquid ratios) included in this work is much wider than that which may be
encountered in practice. This is because a major objective here was to determine the conditions
under which the processing reagents would sorb to coal versus remain in water. For
the purpose of making relative comparisons, a froth flotation circuit in a typical coal preparation
plant might operate with coal slurries of 1-10% solids (by weight), which require 4-20 μL/L
frother (usually specified in mg/L; ~5-25 mg/L) and 1.5-150 μL/L collector (usually specified in
lb/ton of coal; ~0.5-5 lb/ton).
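The volumetric and mass-based dosage conventions above are related through reagent density and slurry solids content. A quick conversion sketch (the MIBC density used is an assumed typical value, not a measurement from this work):

```python
# Convert dosage conventions for flotation reagents.
# The MIBC density (~0.80 g/mL) is an assumed typical value.

def ul_to_mg_per_l(dose_ul_per_l, density_g_per_ml):
    """uL of reagent per L of water -> mg/L (1 uL weighs `density` mg)."""
    return dose_ul_per_l * density_g_per_ml

def mg_per_l_to_lb_per_ton(dose_mg_per_l, pct_solids):
    """mg reagent per L of slurry -> lb reagent per short ton of coal.
    Assumes 1 L of dilute slurry weighs ~1000 g, so a slurry at
    `pct_solids` wt.% carries 10*pct_solids g of coal per litre."""
    coal_g_per_l = 1000.0 * pct_solids / 100.0
    mass_ratio = (dose_mg_per_l / 1000.0) / coal_g_per_l  # g reagent / g coal
    return mass_ratio * 2000.0                            # 1 short ton = 2000 lb

print(ul_to_mg_per_l(10, 0.80))                   # 10 uL/L MIBC -> 8.0 mg/L
print(round(mg_per_l_to_lb_per_ton(25, 5.0), 6))  # 1.0 lb/ton
```

As a consistency check, 25 mg/L diesel in a 5% solids slurry works out to 1 lb/ton of coal, matching the conditions listed for test 8 in Table 2.3.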
4.1 Frother Partitioning
For the frother partitioning tests, coal samples were obtained from the Elkhorn #3 and the
Cedar Grove seams (both <5% ash), and were sized to -100 mesh prior to testing. Slurries were
mixed for five minutes by rapid stirring in open beakers, and then centrifuged for three minutes.
To analyze the relative amount of frother left in the clear water fraction of the slurry, surface
tension measurements were conducted using a Fisher surface tensiometer. The tensiometer
utilizes a platinum-iridium ring, and measures the force required to detach this ring from the
liquid surface. The ring was thoroughly cleaned between tests by immersing it in benzene, then
acetone, and finally passing it through a flame to remove any surface contaminants. Glassware
was also thoroughly cleaned between tests by washing with chromic acid solution and distilled
water.
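In its idealized form, the ring method converts the measured detachment force to surface tension via γ = F / (4πR); real instruments also apply a Harkins-Jordan correction factor, which is omitted in this sketch. The ring radius and detachment force below are hypothetical values chosen for illustration:

```python
import math

def ring_surface_tension(force_mN, ring_radius_cm):
    """Idealized du Nouy ring relation: gamma = F / (4*pi*R).
    Ignores the Harkins-Jordan correction factor applied by real
    tensiometers. Returns surface tension in dyne/cm."""
    force_dyn = force_mN * 100.0          # 1 mN = 100 dyn
    return force_dyn / (4.0 * math.pi * ring_radius_cm)

# Hypothetical ring of 0.95 cm radius detaching at ~8.69 mN gives a
# value near that of pure water (72.8 dyne/cm).
gamma = ring_surface_tension(8.69, 0.95)
print(round(gamma, 1))  # 72.8
```

Frother sorbed to coal leaves less surfactant at the air-water interface, so a higher detachment force (i.e., a surface tension closer to pure water) indicates less residual frother in the water fraction.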
4.2 Collector Partitioning
For the collector partitioning tests, two separate raw coal samples were obtained: one
from the Hagy Seam (~35% ash), and one from the Pocahontas Seam (~16% ash). The former
sample was sized to -100 mesh for the first set of tests, and then a subsample of that material was
screened to 100 x 150 mesh for the second set of tests. The latter sample was only used in the
second set of tests, and was also screened to obtain 100 x 150 mesh particles. For the first set of
tests, slurries were mixed in a kitchen blender for four minutes and then centrifuged until the
water was clear; however, it should be noted that a large amount of colloidal matter in these
samples prevented removal of all color from the water. In the second set of tests, the slurries
were mixed in open flasks on a shaking table for four minutes, and then filtered (through 25 μm
paper) using a vacuum pump. The residual diesel in the clear water fraction from each test was
analyzed using an Agilent 5890 gas chromatograph equipped with a flame ionization detector
(GC-FID), by following EPA Method 3150 for quantifying diesel range organics (DRO) in water
samples.
Table 2.2: Experimental conditions for frother tests

Test    Coal Seam     Coal Dosage (wt. % solids)   Frother Type   Frother Dosage (μL/L)
1-18    Elkhorn #3    0, 0.1, 0.5, 0.7             M150           0.4a, 4, 40, 400, 4000
19-34   Elkhorn #3    0, 0.1, 0.5, 0.7             PSM            4, 40, 400, 4000
35-48   Elkhorn #3    0, 0.1, 0.5, 0.7             Nalco 8836     4, 40b, 400, 4000
49-60   Elkhorn #3    0, 0.1, 0.5, 0.7             MIBC           10, 100, 1000
61-64   Cedar Grove   0.5                          M150           4, 40, 400, 4000
65-68   Cedar Grove   0.5                          PSM            4, 40, 400, 4000
69-72   Cedar Grove   0.5                          Nalco 8836     4, 40, 400, 4000
73-75   Cedar Grove   0.5                          MIBC           10, 100, 1000
a Only for 0 and 0.1% solids
b Only for 0 and 0.5% solids
Table 2.3: Experimental conditions for first set of collector tests

Test   Coal Seam   Coal Dosage     Diesel dosage   Diesel dosage   Solid/Liquid                  Residual DRO
                   (wt. % solids)  (lb/ton coal)   (mg/L)          Separation                    (mg/L)
1      Hagy        0               N/A             500             Centrifuge                    425.1
2      Hagy        1               0               0               Centrifuge                    <0.05
3      Hagy        1               1               4.9             Centrifuge                    0.39
4      Hagy        1               1               4.9             Centrifuge, then filtration   0.42
5      Hagy        1               1               4.9             Filtration                    0.46
6      Hagy        1               10              50              Centrifuge                    0.68
7      Hagy        5               0.25            6.3             Centrifuge                    0.50
8      Hagy        5               1               25              Centrifuge                    0.53
9      Hagy        5               10              250             Centrifuge                    0.95
Table 2.4: Experimental conditions for second set of collector tests

Test   Coal Seam    Coal Dosage     Diesel dosage   Diesel dosage   Solid/Liquid   Residual DRO
                    (wt. % solids)  (lb/ton coal)   (mg/L)          Separation     (mg/L)
10     Pocahontas   0               N/A             0.85            N/A            1.35
11     Pocahontas   0               N/A             0.425           N/A            0.63
12     Pocahontas   1               0.17            0.85            Filtration     0.42
13     Pocahontas   10              0.017           0.85            Filtration     0.31
14     Pocahontas   5               10              250             Filtration     0.47
15     Pocahontas   5               10              250             Filtration     0.40
16     Pocahontas   5               10              250             Filtration     0.50
17     Pocahontas   5               10              250             Filtration     0.51
18     Pocahontas   5               10              250             Filtration     0.47
19     Pocahontas   5               10              250             Filtration     0.42
20     Pocahontas   1               50              250             Filtration     0.79
21     Pocahontas   5               50              1250            Filtration     1.02
22     Pocahontas   10              50              2500            Filtration     1.92
23     Pocahontas   5               1               25              Filtration     0.49
24     Hagy         5               10              250             Filtration     0.88
25     Hagy         5               50              1250            Filtration     2.67
5. Results and Discussion
Results of the partitioning tests confirmed that, in general, frother and collector reagents
do not partition completely to either the solid or liquid fraction of a coal slurry – and therefore it
is possible that, to some extent, these reagents may end up in coal products, tailings
impoundments, and recycled water.
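Where the reagent dosage and the residual water concentration are both known, as in the collector tests of Tables 2.3 and 2.4, the fraction partitioning to coal follows from a simple difference. The sketch below uses test 8 of Table 2.3 and assumes complete analytical recovery of DRO; the no-coal control (test 1, where only ~85% of the dosed diesel was recovered) suggests that assumption is only approximately true:

```python
# Fraction of collector partitioning to coal, estimated from the residual
# diesel-range organics (DRO) left in the water fraction. Values are from
# Table 2.3; assumes complete DRO recovery by the GC-FID method, which
# the no-coal control (test 1) suggests is only approximately true.

def fraction_sorbed(dose_mg_per_l, residual_mg_per_l):
    """Fraction of the dosed reagent no longer present in the water."""
    return 1.0 - residual_mg_per_l / dose_mg_per_l

# Test 8: 5% solids, 25 mg/L diesel dosed, 0.53 mg/L residual DRO
print(round(fraction_sorbed(25.0, 0.53), 3))  # 0.979
```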
5.1 Frother Adsorption
The surface tension results for varying frother dosages and varying coal slurries are
shown in Figure 2.2. The dashed horizontal line at 72.8 dyne/cm represents the theoretical
surface tension of pure water (Nave); the bold line shows the measured surface tension for
frother only (no coal added). For all frothers, it appears that the reagent tends to sorb somewhat
to the coal surface. This can be seen most clearly at moderate test dosages (i.e., 40-400 μL/L),
where a significant difference was observed in surface tension between tests with frother only
and tests with frother and coal. As expected, more frother generally tended to sorb to coal when
more coal was present (i.e., 0.7% vs. 0.1% solids).
At very high dosages (i.e., 1000-4000 μL/L), the effect of the coal becomes less
significant for MIBC and Dowfroth M150, and nearly insignificant for PSM and Nalco 8836.
This indicates that sorption sites on the coal surface may be completely filled, and thus most of
the frother remains in the water. At very low test dosages (i.e., 4 μL/L), the PSM exhibits
seemingly complete sorption to the coal particles, as the surface tension when coal is present is
effectively that of pure water, compared to the substantially depressed value with frother only.
The Dowfroth
M150 also exhibits significant sorption to the coal at very low dosages, although the surface
tension is slightly less than that of pure water (for the 0.5 and 0.7% coal tests), which suggests
that some frother did not sorb. At very low dosages of MIBC and Nalco 8836 (i.e., 10 and 4
μL/L, respectively), it is uncertain to what extent the coal particles were able to sorb frother
because the frother did not depress the surface tension of the water. This highlights a major
shortcoming of the use of surface tension measurements to study frother reagents, which has
been previously noted by other researchers (Sweet et al. 1997).
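The saturation behavior described above, with sorption sites filling as dosage rises, is consistent with a Langmuir-type isotherm, q = q_max·KC/(1 + KC). The sketch below uses entirely hypothetical parameters; the source reports no fitted isotherm:

```python
# Langmuir-type saturation sketch: at low aqueous concentration C, the
# sorbed amount q rises nearly linearly; at high C it plateaus at q_max,
# leaving most additional frother in the water. Parameters hypothetical.

def langmuir_q(c, q_max, k):
    """Sorbed amount per unit coal mass: q = q_max * K*C / (1 + K*C)."""
    return q_max * k * c / (1.0 + k * c)

q_max, k = 5.0, 0.01            # hypothetical: mg/g and L/mg
for c in (4, 40, 400, 4000):    # roughly spans the tested dosage range
    occupancy = langmuir_q(c, q_max, k) / q_max
    print(f"C = {c:5d}: site occupancy = {occupancy:.2f}")
```

With these parameters, site occupancy climbs from a few percent at the lowest dosage to near saturation at the highest, mirroring the observation that added coal has little effect on surface tension once dosages reach 1000-4000 μL/L.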
Coal properties were found to play a role in the sorption behavior of PSM and MIBC. As
evident in Figure 2.2, at equal levels of slurry solids (i.e., 0.5% coal), the Cedar Grove coal did
not appear to significantly sorb these frothers, whereas the Elkhorn #3 coal did. However, the
sorption behavior of the Dowfroth M150 and Nalco 8836 was observed to be quite similar
between the two coals. Since proximate analysis was not performed on the coal samples, it is
difficult to speculate on specific explanations for these results; but coal properties (other than
particle size) do seem to be important in terms of frother sorption capacities.
In the context of a coal preparation plant, the results from these tests indicate that a significant
degree of frother sorption to coal surfaces can be anticipated. While practical conditions include
only the low to very low ranges of frother dosages tested here, they typically have higher slurry
solids contents, and thus higher coal surface areas – which suggests that perhaps a relatively
large fraction of frother reagents may associate with the coal. Given that frothers are well known
to cause problems via entrainment in recycled water, there may be several plausible explanations
for the findings presented here: 1) frother sorption to coal may only be temporary, and desorption
may occur downstream of flotation processes (e.g., during dewatering); 2) the presence of other