University
stringclasses 19
values | Text
stringlengths 458
20.7k
|
---|---|
UWA
|
4.3. METHODS
Figure 4.3: CaffeNet CNN model training (double-cascade). Input marker trajectories flat-
tened to color images, output KJM 6-way deinterlaced PCA reduced.
(a) The input marker trajectories for each trial total 3,000 features (columns of data).
(b) Input data is centered, scaled, and reshaped, then (c) flattened (warped) into 227 × 227 pixel images.
(d) The corresponding modeled KJM output has 540 features.
(e)Thenumberofoutputfeaturesisreducedfirstbydeinterlacing(splitting)theKJMintotheirsixcomponentwaveforms,
each 90 features, then (f) reduced again using PCA (t=0.999). Subsequent processes are executed for each of the six
waveforms (x 6).
(g) The HDF5 file format (hdfgroup.org) is used to pass the large data structures (input ‘data’, output ‘label’) into the
CNN.
(h) In this example, weights from an earlier CaffeNet GRF/M model are used to help improve the accuracy of the new
model (double-cascade).
(i) The new model weights are used for KJM prediction from the test-set.
(internal moments LKJM , LKJM , LKJM , RKJM , RKJM , and RKJM ), each of which was
x y z x y z
thenfurtherreducedusingPrincipalComponentAnalysis(PCA).PCAwithatuningthresholdt=0.999
was selected for its compression accuracy with this data type [Pearson, 1901], which for example for
the sidestep (right stance limb) resulted in RKJM (internal/external rotation moment) reducing
z
from 90 to 59 features (output: response samples × 59). The PCA compression was repeated for each
of the six deinterlaced waveforms. Rather than the native CaffeNet object classification, the fine-tuned
model was required to perform multivariate regression, i.e. produce waveform outputs. The fine-tuning
process by definition requires the CNN architecture to exactly match that used during original training,
with the exception of allowing the final loss branches to be sliced off and replaced with another, exactly
for such purposes as required by this study. The CaffeNet architecture was modified by replacing
the final 1,000-dimensional SoftMax layer with a Euclidean loss layer (with dimensions as defined
dynamically by the output downsampling) thereby converting it from a classification network into a
multivariate regression network.
66
|
UWA
|
4.3. METHODS
Figure 4.4: Training-set eight marker trajectories, sidestep left movement type (off right
stance limb). Combined 1,222 predictor samples (80 % of 1,527), viewed as a conventional 3D volume
space and with one trial highlighted.
Fine-tuning deep learning networks allows smaller
downstream sample sizes to leverage weighting rela-
tionships built on earlier training at scale [Zamir et al.,
2018]. In the current study, new model training was
carried out either via a single fine-tune or a double-
cascade from pre-trained CNN models. The single fine-
tune models were created from the CaffeNet source,
itself pre-trained on the ImageNet database, and then
trained on the KJM training-set. The double-cascade
models were created from variants of the CaffeNet
model, pre-trained using an earlier GRF/M training-
set, and subsequently trained on the KJM training-
set (i.e. fine-tuned twice). The donor weights for the Figure 4.5: The spatio-temporal marker
double-cascadewereprovidedbythestrongestGRF/M trajectoriesinFigure4.4werepresented
to the CNN as 1,222 individual color im-
model (single fold, 33 % stance, sidestepping, right
ages.
stance limb) selected from earlier prototypes.
The study training and test-sets were formed from
a random shuffle and split of the complete data-set of marker-based motion capture input (predictor)
to KJM output (response). As per data science convention, the majority of results were derived from
a single 80:20 fold, however early criticism suggested that the model was overfitting to this one fold
67
|
UWA
|
4.4. RESULTS
[Domingos, 2012]. Therefore, within time and resource constraints, regression for one movement type
was tested over 5-folds, and because of its relevance to knee injury risk but also being the movement
type with the largest number of samples, the sidestep movement (right stance limb) was selected for
this additional investigation. Testing over 5-folds was achieved with 80:20 splits whereby each member
of the data-set was guaranteed to be a participant of each of the five test-sets only once. The mean
of the 5-folds experiments was then compared with the single fold results. The prediction occurred
over the initial 33 % of a time normalized stance phase, the period selected for its relevance to injury
risk, and was reported for the three categorized sports-related movement types: walking, running,
and sidestepping. The precision of the outcome of all the experiments was assured by comparing the
six vector KJM predicted by the CaffeNet regression model (fine-tune or double-cascade) with those
calculated by inverse dynamics using both correlation coefficient r and rRMSE [Ren et al., 2008].
4.4 Results
Of the single fine-tune investigations, the strongest mean KJM correlation was found for the left stance
limb during sidestepping r(LKJM ) 0.9179 (shown bolded, Table 4.2), and the weakest for the
mean
right stance limb also during sidestepping r(RKJM ) 0.8168.
mean
Applying the double-cascade technique caused all but one correlation to improve compared with
the single fine-tune, and also the league table of results to reorder. The strongest mean correlation
remained the left stance limb in sidestepping r(LKJM ) 0.9277 (bolded, Table 4.3), a rise of
mean
+1.1 %, and with individual contributing components r(LKJM ) extension/flexion 0.9829, r(LKJM )
x y
abduction/adduction 0.9050, r(LKJM ) internal/external rotation 0.8953. However, the greatest
z
improvement from the double-cascade was observed for the right stance limb in sidestepping, for which
the earlier r(RKJM ) 0.8168 was improved to r(RKJM ) 0.8512, a significant increase of
mean mean
+4.2 % (p < 0.01 [Mann and Whitney, 1947]) over the single fine-tune, from components r(RKJM )
x
extension/flexion 0.9865, r(RKJM ) abduction/adduction 0.8368, r(RKJM ) internal/external rota-
y z
tion 0.7304. The double-cascade technique resulted in a mean increase of +1.8 %, and the sidestepping
pair (average of both stance limbs) provided the strongest overall mean correlation r(KJM ) 0.8895.
mean
For having the largest number of samples, sidestepping (right stance limb) was investigated
further. First, by cross-validation over five k-folds, for which the similarity of the average correlation
r(RKJM ) 0.8472 compared with the single fold analysis 0.8512 indicated overfitting had been
mean
avoided. Second, by illustration, the output KJM training-set, the test-set predicted response min/max
rangeandmean,andthecomparisonforthecorrespondingtestsamplewiththestrongestr(RKJM )
mean
(Figure 4.6).
4.5 Discussion
Although the uptake of deep learning continues to increase across all disciplines, examples of modeling
biomechanics with CNN fine-tuning are rare with researchers preferring to explore linear models, or
attempting to train deep learning models from scratch. This study provides an end-to-end example
which illustrates the process to repackage sports biomechanics data (flattening spatio-temporal input,
68
|
UWA
|
4.6. CONCLUSIONS
moment r(KJM ). One reason the GRF/M models demonstrate excellent accuracy from only 2,196
z
data-set samples (not millions) is because of the big data inherent to the ImageNet pre-training of
the CNN model, but this also provides an indication of the minimum number of samples required for
this method to improve. Increasing the number of KJM samples available with each movement type
to match or exceed the GRF/M model is expected to result in a concomitant improvement in KJM
model accuracy and consistency. Regardless, this is currently the largest biomechanical study in terms
of the number of samples, and the time period over which the data was collected.
4.6 Conclusions
The accurate estimate of KJM directly from motion capture, without the use of embedded force
plates and inverse dynamics modeling procedures, is a novel approach for the biomechanics community.
Through a unique combination of deep learning data engineering techniques, this study was able to
extract value from legacy biomechanics data otherwise trapped on legacy external hard-drives. Refining
the movement type classification, and using a larger number of trial samples, will improve the relevance
and robustness of the model, and serve as a precursor to real-time on-field use. Building on earlier
GRF/M modeling using deep learning driven purely by motion data, the current study predicts KJM,
and with significantly improved correlation performance using the double-cascade technique. These are
important and necessary incremental steps for the goal of in-game biomechanical analysis, and which
lay the foundation for future work to drive the multivariate regression not from markers, but from
a small number of similarly-located accelerometers. When this is accomplished, relevant, real-time,
and on-field loading information will be available. For the community player, this approach has the
potential to disrupt the wearable sensor market by enabling next-generation classification of movement
types and workload exposure. For professional sport, an understanding of on-field KJM could be used
to alert coaches, players and medical staff about the real-time effectiveness of training interventions
and changes to injury risk.
4.7 Acknowledgements
This project was partially supported by the ARC Discovery Grant DP160101458 and an Australian
Government Research Training Program Scholarship. NVIDIA Corporation is gratefully acknowledged
for the GPU provision through its Hardware Grant Program. Portions of data included in this study
have been funded by NHMRC grant 400937.
4.8 References
M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean,
and M. Devin. TensorFlow: Large-scale machine learning on heterogeneous distributed systems.
arXiv:1603.04467, 2016.
A. J. Aljaaf, A. J. Hussain, P. Fergus, A. Przybyla, and G. J. Barton. Evaluation of machine learning
71
|
UWA
|
Chapter 5
Multidimensional ground reaction
forces and moments from wearable
sensor accelerations via deep learning
Manuscript introduction
This manuscript was submitted on March 28, 2019, for initial review by IEEE Transactions on
Biomedical Engineering as a sequel to study two published in the same journal, and the preprint
released via arXiv on March 19, 2019.
Johnson, William R., Mian, Ajmal, Robinson, Mark A., Verheul, Jasper, Lloyd, David G.,
& Alderson, Jacqueline A. (2019). Multidimensional ground reaction forces and moments
from wearable sensor accelerations via deep learning. IEEE Transactions on Biomedical
Engineering, (in review), 1–11.
Johnson, William R., Mian, Ajmal, Robinson, Mark A., Verheul, Jasper, Lloyd, David G.,
& Alderson, Jacqueline A. (2019). Multidimensional ground reaction forces and moments
from wearable sensor accelerations via deep learning. arXiv:1903.07221 [cs.CV], 1–17.
Supplementary material is available on GitHub.
Linking statement
Previous studies demonstrated success using pseudo-linear (Partial Least Squares, study one) and
non-linear models (deep learning convolutional neural networks, study two) to predict multidimensional
biomechanical outputs (ground reaction forces and moments, and knee joint moments) conventionally
acquired using captive analog laboratory equipment from marker-based motion capture. This final
translation study seeks to complete the goal of non-invasive estimation of kinetics, by using sensor
accelerations as the input to drive the model predictions.
77
|
UWA
|
Multidimensional ground reaction
forces and moments from wearable
sensor accelerations via deep learning
5.1 Abstract
Objective: Monitoring athlete internal workload exposure, including prevention of catastrophic non-
contact knee injuries, relies on the existence of a custom early-warning detection system. This
system must be able to estimate accurate, reliable, and valid musculoskeletal joint loads, for sporting
maneuvers in near real-time and during match play. However, current methods are constrained
to laboratory instrumentation, are labor and cost intensive, and require highly trained specialist
knowledge, thereby limiting their ecological validity and volume deployment. Methods: Here we
show that kinematic data obtained from wearable sensor accelerometers, in lieu of embedded force
platforms, can leverage recent supervised learning techniques to predict in-competition near real-time
multidimensional ground reaction forces and moments (GRF/M). Competing convolutional neural
network (CNN) deep learning models were trained using laboratory-derived stance phase GRF/M data
and simulated sensor accelerations for running and sidestepping maneuvers derived from nearly half
a million legacy motion trials. Then, predictions were made from each model driven by five sensor
accelerations recorded during independent inter-laboratory data capture sessions. Results: Despite
adversarial conditions, the proposed deep learning workbench achieved correlations to ground truth, by
GRF component, of vertical 0.9663, anterior 0.9579 (both running), and lateral 0.8737 (sidestepping).
Conclusion: Thelessonslearnedfromthisstudywillfacilitatetheuseofwearablesensorsinconjunction
with deep learning to accurately estimate near real-time on-field GRF/M. Significance: Coaching,
medical, and allied health staff can use this technology to monitor a range of joint loading indicators
during game play, with the ultimate aim to minimize the occurrence of non-contact injuries in elite
and community-level sports.
Keywords: Biomechanics · Wearable sensors · Simulated accelerations · Workload expo-
sure · Sports analytics · Deep learning.
78
|
UWA
|
5.2. INTRODUCTION
5.2 Introduction
One of the perpetual problems facing sports biomechanists is the difficulty translating the accuracy and
multidimensional fidelity of laboratory-based measurements and downstream analysis into the sporting
arena [Chiari et al., 2005; Elliott and Alderson, 2007]. In pursuit of the monitoring of the multiple
contributors to player welfare, of acute and chronic injury risk plus external and internal workload
exposure, coaches today are forced to make local interpretations of surrogate measures [Boudreaux
et al., 2018; Bradley and Ade, 2018; Rossi et al., 2018]. Traditional outputs of biomechanical analyses,
ground reaction forces and moments from embedded force plates, and for example knee joint moments
(KJM) from calculations of inverse dynamics, which could be included as key moderators to the
monitoring ensemble have so far been captive to the laboratory [Calvert et al., 1976; Coutts et al., 2007;
Gabbett, 2018; Matijevich et al., 2019; Vanrenterghem et al., 2017]. Using catastrophic non-contact
knee injuries as an example, there is a gap between the understanding of the mechanisms of anterior
cruciate ligament injury, and the ability to monitor the collection of associated risk parameters during
a game [Besier et al., 2001; Chinnasee et al., 2018; Dempsey et al., 2007; Johnson et al., 2018a].
The traditional approach to biomechanical analysis begins with laboratory retro-reflective optical
motion capture recorded in synchronization with analog force plate output [Chiari et al., 2005; Lloyd
et al., 2000]. The University of Western Australia holds a legacy archive of movement data, and
this was considered an advantage and enabler for the current data science investigation. The major
advantage of inertial measurement units (IMU) over optical motion capture is the relative ease of
on-field application away from the laboratory, however, there are several limitations to the currently
accepted linear processing of their telemetry output. An IMU typically contains three discrete devices:
an accelerometer (linear acceleration); gyrometer (angular velocity); and magnetometer (orientation)
[Camomilla et al., 2018]. These IMU sensors are often used alongside global positioning system (GPS)
trackers in a combined unit which allows positional information (facilitating game strategy and tactical
analysis) to be included in workload exposure estimations [Buchheit et al., 2014; Gabbett, 2018;
Hennessy and Jeffreys, 2018; Vanrenterghem et al., 2017]. In processing IMU outputs, linear statistics
tend to be based on gross assumptions, which for example can mistake overfitting for personalization
[Bradley and Ade, 2018; Callaghan et al., 2018; Gabbett, 2016; Rossi et al., 2018; Wundersitz et al.,
2013]. Scientific investigation to employ IMU for movement classification and load estimation has
so far shown more success with basic movements and/or unidimensional GRF analysis [Clermont
et al., 2018; Pham et al., 2018; Thiel et al., 2018; Verheul et al., 2018]. The IMU hardware also
has inherent physical characteristics and design features which need to be carefully controlled. The
three sensors have relative or independent coordinate systems, and vendors use proprietary algorithms
based on Kalman filters [Camomilla et al., 2018; Karatsidis et al., 2016; Luinge and Veltink, 2005] and
custom orientation calibration [Lebel et al., 2016; Lipton, 1967; Picerno et al., 2011] to determine the
device position with respect to the laboratory global origin. Both the accelerometer and gyrometer
are susceptible to linear (or quadratic) drift depending on the application of integration calculations
[Camomilla et al., 2018]. The magnetometer is affected particularly by the proximity of ferromagnetic
materials which can be a problem with laboratory and field equipment [Ancillao et al., 2018; Camomilla
et al., 2018]. One common error is the misinterpretation of IMU results during treadmill activities
where anteroposterior acceleration is naturally minimized [Clermont et al., 2018; Jie-Han et al., 2018;
79
|
UWA
|
5.2. INTRODUCTION
Koska et al., 2018; Wouda et al., 2018]. Wearable devices are also prone to task-dependent fixation
and skin artefacts, in other words powerful movement types necessitate a more stable attachment to
the body, for example throwing or explosive change of direction activities, or any movement where the
IMU is at the distal end of the moment arm [Camomilla et al., 2018; Karatsidis et al., 2016]. All these
issues are compounded when multiple devices are deployed per participant, each of which must be
synchronized, and where bandwidth to a Bluetooth or Wi-Fi bridge is shared. In a team situation, one
of the most challenging problems is the logistics of managing the consistency of device hardware and
software versions [Buchheit et al., 2014; Nicolella et al., 2018]. In short, there are many reasons to
prefer IMU devices over optical motion capture, however, their use comes with a set of constraints and
limitations, some of which have remained difficult to solve.
An emerging alternative method of processing IMU data
output is deep learning (or deep neural network, DNN),
(a) 3D trajectories flattened to 2D image over
which is a type of an artificial intelligence system based on a
stance-normalizedgaitcycle.
learning model rather than a task-specific algorithm [LeCun
(b) Multivariate regression CNN via Euclidean
et al., 2015]. The successful deployment of DNN machine final-layersurgery.
learning for practical biomechanical applications benefits (c) Double-cascade CNN from marker-GRF/M
modelweights.
from a multidisciplinary sport science and computer science
(d) Simulated 3D accelerations from motion cap-
approach and early researchers have applied this technology
turearchive.
with IMU to classify gait, predict vertical GRF (F ); or
z
(e) Automaticre-orientationofaccelerations.
segment orientation [Ancillao et al., 2018; Hu et al., 2018;
Jacobs and Ferris, 2015; Jie-Han et al., 2018; Wouda et al.,
Figure 5.1: Deep learning work-
2018; Zimmermann et al., 2018]. Recent CNN models, e.g. bench for biomechanics.
AlexNet and ResNet, are highly successful at classifying
image contents [He et al., 2015; Krizhevsky et al., 2012],
and it is possible using fine-tuning (transfer learning) to leverage these existing CNNs for related
applications and from fewer training samples (i.e. thousands instead of millions) with concomitant
reductions in CPU and GPU processing cost.
A major step towards model deployment and acceptance in the field is proving its accuracy and
validity in sub-optimal or adversarial conditions. Previous work has tested CNN models using a
conventional 80:20 split of homogeneous archive movement data to predict three dimensional (3D)
GRF/M and KJM from marker trajectories. This was achieved by building a “deep learning workbench”
which (a) flattened 3D marker trajectories to 2D images in order to allow fine-tuning of image
classification deep models; (b) transplanted Euclidean loss into the final CNN layer to facilitate
multivariate regression; and (c) exploited improvements in downstream KJM model accuracy by
leveraging earlier GRF/M success [Johnson et al., 2018a, 2019] (Figure 5.1). The current investigation
began by investigating model performance using a training-set of simulated accelerations, against a test-
set of recorded sensor accelerations, both with corresponding GRF/M. This required the workbench to
be extended to (d) synthesize accelerations from marker trajectories, and (e) to automatically re-orient
independent acceleration coordinate systems so that they are aligned with the global coordinates.
The contribution of this study is to investigate the resilience of the workbench when faced with a
test-set of sensor accelerations recorded independently of the primary researcher (and inter-laboratory),
80
|
UWA
|
5.3. METHODS
Figure 5.2: Study overall design.
thus providing a real-world scenario and reducing the possibility of home-game advantage or bias.
Because the telemetry was provided by another laboratory, calibration parameters such as coordinate
system, direction of travel, number, type and location of sensor accelerometers, even make and model
of motion capture and force plate systems, required subsequent data preparation and representation to
be more generalized. Prediction analysis was carried out using the Caffe deep learning framework [Jia
et al., 2014] on two different CNN models, CaffeNet (a derivative of AlexNet) and ResNet, both via
double-cascade learning, using weights from earlier marker trajectories to GRF/M models, themselves
fine-tuned from ImageNet source big data [He et al., 2015; Krizhevsky et al., 2012]. The CNN models
were trained using accelerations simulated from an archive of marker trajectory data captured at The
University of Western Australia (UWA, Perth, Western Australia), and tested with sensor accelerations
recordedatLiverpoolJohnMooresUniversity(LJMU,Liverpool, UK).Theaccuracyandvalidityofthe
approach was tested by reporting correlations between CNN predicted and ground truth GRF/M over
100 % of time-normalized stance for two sports-related movement patterns, running and sidestepping.
The hypothesis was that CNN models can establish the location of sensor accelerometers via the
signature pattern of 3D accelerations, and that this would be demonstrated by mean GRF and GRM
correlations > 0.80 across all movement types and stance limb combinations. It was anticipated that
the results of this study would add to the understanding of the performance of CNN models driven
by 3D accelerations, and contribute to future practitioners’ placement of sensor accelerometers for
optimum results.
5.3 Methods
5.3.1 Design & setup
The overall design of the study is shown in Figure 5.2. For the current investigation, the training
and test data sources were quite different. A UWA archive of marker trajectories and GRF/M data
from a 17–year period from 2001–2017 was used to train the CNN models. Gathered from multiple
biomechanics laboratories, the training data files selected from the total of 458,372 shared common
optical motion capture setup (12–20 Vicon camera models MCam2, MX13 and T40S; Oxford Metrics,
Oxford, UK), analog force plate configuration (Advanced Mechanical Technology Inc, Watertown, MA),
data capture software (Vicon Workstation v4.6 to Nexus v2.5), and young adult athletic participant
81
|
UWA
|
5.3. METHODS
cohort (male 59.9 %, female 40.1 %, height 1.770 ± 0.101 m, and mass 74.9 ± 34.1 kg). The UWA
optical marker set has varied over this period from 24–67 passive retro-reflective markers. However, for
this investigation a subset of five markers were used (sacrum SACR; bilateral thigh xTH2, and tibia
xTB2, UWA naming convention), selected for their systematic correspondence to the sensor locations
in the test-set (Figure 5.3).
The test-set was derived from multi data
capture sessions conducted between Novem-
ber 2017 to February 2018 at LJMU using
Visual3Dv6.01(C-MotionInc, Germantown,
MD). Motion capture was recorded via ten
Qualisys Oqus 300+ cameras (Qualisys Inc,
Gothenburg, Sweden), and GRF/M with a
Kistler 9287B force platform (Kistler Hold-
ing AG, Winterthur, Switzerland). Five No-
raxon DTS-3D 518 accelerometers (Noraxon
Inc, Scottsdale, AZ) were attached to each
of five team-sport athletes (male 80.0 %, fe-
male 20.0 %, height 1.829 ± 0.080 m, and
mass 75.6 ± 11.1 kg) at locations selected
Figure 5.3: Location of five sensor accelerome-
for their relevance to an independent study
ters. Each sensor is shown artificially colored and la-
on body segment accelerations (pelvis Pelv;
beled (LJMU naming convention). Inset, for the thigh
bilateral thigh x Th, and shank x Sh, LJMU and shank locations, the accelerometer was attached
naming convention) [Verheul et al., 2018] inside the four marker cluster.
(Figure 5.3).
5.3.2 Data preparation
Use of the existing data archive was permitted under UWA ethics exemption RA/4/1/8415 (training),
and the new data capture was carried out under LJMU ethics approval 17/SPS/043 (test). Data
processing was conducted with MATLAB R2017b (MathWorks, Natick, MA) and Python 2.7 (Python
Software Foundation, Beaverton, OR), both selected for availability of function libraries. In the case of
MATLAB, for access to the Biomechanical ToolKit 0.3 (Barre and Armand, 2014), and for Python, to
conduct low-level image processing using the OpenCV environment, and native HDF5 file handling
(opencv.org, hdfgroup.org). The operating system was Ubuntu v16.04 (Canonical, London, UK),
running on a desktop PC, Core i7 4GHz CPU, with 32GB RAM and NVIDIA multi-GPU configuration
(TITAN X & TITAN Xp; NVIDIA Corporation, Santa Clara, CA).
Thedatapreparationphasewasdesignedtomaximizetheintegrityofthesourcemarkertrajectories,
sensor accelerations, and force plate data ahead of model training and prediction. The intention was
to minimize capture errors (original and new), duplicate files, and select high-quality data rows with
labeled marker trajectories (training), sensor accelerations (test), and associated GRF/M. Each trial
was normalized to stance phase, and trimmed according to custom lead-in periods to best inform the
model as defined by earlier prototypes [Johnson et al., 2018a,b; Merriaux et al., 2017; Psycharakis and
82
|
UWA
|
5.3. METHODS
Figure 5.4: Visualization of NORM- (left), and PCA-aligned 3D accelerations (right),
sample sidestep right stance limb. Greater signal energy is evident in the stance limb sensors
R Th and R Sh. NORM-aligned accelerations sacrifice dimensionality information and hence the three
vectors are identical. PCA-aligned accelerations demonstrate a sweep of information towards Anterior
Acc (forward, red).
x
Miller, 2006].
Basic kinematic templates (based on movement at the sacrum) were used to identify running and
sidestepping/cutting in the training and test data (running >= 2.16 m/s [Segers et al., 2007]). The
sidestepping movement type in particular was selected for its relevance to sporting movements, and
knee injury risk, but also for its greater complexity compared with the literature. The majority of trials
exhibited right stance limb, with the movement towards the left (a small proportion of sidestepping
with crossover technique were removed). The running movement in the test data capture was also
sub-categorized into slow (2–3 m/s), moderate (4–5 m/s), and fast (> 6 m/s) trials.
Registration of a successful foot-strike (FS) onto the force plate, and subsequent toe-off (TO),
were both automatically detected using accepted vertical force and stance limb parameters [Milner
and Paquette, 2015; O’Connor et al., 2007; Tirosh and Sparrow, 2003], which were then translated
to the test accelerations by virtue of synchronized force plate and accelerometer telemetry. The lack
of a foot-mounted sensor meant the determination of FS from minimum vertical acceleration at this
location was unavailable [Bo¨tzel et al., 2016], and accelerations from the shank sensor were found to be
unreliable for this purpose. Identification of the TO gait event from IMU data was considered out of
scope for this study, being the primary research objective of other investigations [Allseits et al., 2017;
Bertoli et al., 2018; Falbriard et al., 2018].
The training-set of marker trajectories was converted into accelerations via double-differentiation.
83
|
UWA
|
5.3. METHODS
Since an accelerometer is a free body with an independent coordinate system [Lipton, 1967], in order to
model the relationship between 3D accelerations and GRF/M, the accelerations (both those synthesized
and recorded) were required to be aligned. Two mathematical methods for automatically re-orienting
the accelerations were tested and reported. The first, was to combine the three directional components
into one acceleration magnitude via Euclidean Norm (Figure 5.4, left). The second, employed Principal
Component Analysis (PCA) via Singular Value Decomposition (SVD), whereby a custom rotation
matrix was assembled with the ability to re-orient 3D accelerations in the direction of the greatest
energy (i.e. forward). For the training data, a one-off re-orientation was applied by calculating the
PCA rotation matrix according to the 3D acceleration at the sacrum location and applying this to
all five virtual accelerations. Only one rotation matrix was necessary for the simulated accelerations
because their source marker trajectories were aligned with the laboratory global coordinate system.
For the test-set of recorded sensor accelerations, these were all independent and hence an individual
rotation matrix was calculated and applied to each. With this test cohort, the effect of PCA can be
seen in the sweep of acceleration energy towards the forward (anteroposterior) direction (Figure 5.4,
right).
5.3.3 Data representation & model training
Model training and prediction was carried out us-
ing the Caffe deep learning framework [Jia et al.,
2014]. Fine-tuning CNN models allows for new in-
vestigations with smaller sample sizes to improve
their performance by leverage weighting relation-
ships built on earlier training at scale. In deep
learning terms, the number of training samples
in this study (minimum 1,176, maximum 5,378)
was small, and therefore the problem was a can-
didate for fine-tuning [Krizhevsky et al., 2012]. A
derivative of the 2012 IVSLRC (image-net.org)
challenge winner AlexNet called CaffeNet had
been selected as the strongest model in a similar
investigation, and the double-cascade approach
(CaffeNet through GRF/M to KJM) had also
Figure 5.5: Contact sheets of test accelera-
demonstrated a significant improvement in corre- tions flattened into 2D images. Sidestepping
lations of + 4.2 % [Johnson et al., 2018a, 2019]. movement combined left and right stance, 43 sam-
ples, NORM-aligned accelerations (left) loss of di-
For comparison, and to test a deeper more gen-
rectional information causes monochrome images,
eral model, this investigation also reports a sec-
PCA-aligned (right) retains color.
ondCNN,ResNet-50, the2015IVSLRCchallenge
winner [He et al., 2015].
Both AlexNet and ResNet-50 CNN are image classifiers which did not match the required four
dimensional input (3D accelerations plus time) and six vector GRF/M waveform output. In order
to fine-tune (double-cascade) from these CNN and leverage their existing training, the aligned 4D
84
|
UWA
|
5.4. RESULTS
acceleration inputs were flattened into 2D images by representing the five sensor locations on the
horizontal axis, stance-normalized time frames upwards on the vertical axis, and by use of the Python
SciPy imsave function to map the 3D accelerations onto the RGB colorspace [Du et al., 2015; Ke
et al., 2017] (Figure 5.5). Then, so that they would generate GRF/M waveforms (not simply label
classifications), the output layer of each CNN was modified from a SoftMax binary to a Euclidean loss
layer, which turned the CNN into a multivariate regression network. Most CNNs are classifiers which
means the number of features in their output layer is naturally small because it only contains weighting
predictions for a discrete set of labels. The high capture frequency of the force plate analog data now
being output by the modified network resulted in a non-standard CNN profile (output features >>
input features) which was addressed by reducing the number of output features via PCA [Johnson
et al., 2019].
The accuracy and validity of the approach was measured by comparing the correlation of values
predicted by the CNN models with the ground truth GRF/M over 100 % of time-normalized stance.
For further comparison, relative root mean squared error rRMSE was reported for individual use-cases
[Ren et al., 2008]. CNN model predictions were conducted using a single fold of each movement type
and stance limb iteration, including an overlaid combination which flipped the left stance limb onto
the right, to test the effectiveness of this data augmentation and whether the increase in training
samples improved performance. Using the simulated accelerations as the training sets, and the recorded
accelerations as the test sets generated variable ratios of training to test samples, however always in
favor of the training-set as per convention. For time brevity, single fold experiments were conducted,
earlier investigations having demonstrated similarity between single and k-fold analysis [Johnson et al.,
2019].
All CNN models and related digital material supporting this study have been made available
(digitalathlete.org).
5.4 Results
Compared with ground truth GRF/M, sets of correlations were compared for the two CNN models
CaffeNet (Table 5.1) and ResNet-50 (Table 5.2), both modes of acceleration re-orientation Euclidean
Norm (accNORM) and alignment by PCA rotation matrix (accPCA), for discrete GRF/M channels
F , F , F , M , M , M , and their overall means F and M . Experiments 1.1 and 2.1 list the
x y z x y z mean mean
correlations for the marker to GRF/M models used as seeds for the double-cascade, and are included
as reference information.
The strongest individual GRF channel correlation was considered first. Across the three GRF
channels F , F , F , the highest correlation was found for vertical F 0.9663 (rRMSE 13.92 %)
x y z z
using CaffeNet (accNORM, experiment 1.8) for moderate speed running off the left stance limb. By
channel, anterior F was predicted with a correlation up to 0.9579 (rRMSE 17.06 %), and lateral F
y x
0.8737 (rRMSE 21.56 %) both with ResNet-50 off the left stance limb, the former for slow running
(accPCA, experiment 2.21), the latter sidestepping (accNORM, experiment 2.24). Results are shown
bolded in their respective tables.
The mean of the three GRF, r(F ) achieved 0.8867 for CaffeNet (accNORM, experiment
mean
1.24), by comparison, ResNet-50 managed 0.8743 (accNORM, experiment 2.24), both for the same
85
|
UWA
|
5.5. DISCUSSION
Figure 5.6: Ground truth GRF versus predicted response. Test-set ground truth mean GRF
(blue, ticks), and predicted response (red), CaffeNet (left), ResNet-50 (right), both double-cascade,
interlaced output, correlations over 100 % stance phase, 25 samples. Cohort selected for strongest
r(F ) by CNN (sidestep off the left stance limb), min/max range and mean depicted.
mean
corresponding experiment with a sidestep off the left stance limb (Figure 5.6). The mean of the three
GRM, r(M ) proved less than satisfactory, CaffeNet making 0.6515 (accPCA, experiment 1.29),
mean
and ResNet-50 0.6486 (accPCA, experiment 2.29), again both for the same sidestep off the right stance
limb. Because of these poor correlations, GRM channels and r(M ) were not investigated further.
mean
5.5 Discussion
Convention dictates that research in the biomechanical sciences is strictly controlled by the primary
researcher. The use of broad data sets to train (or fine-tune) deep learning models already breaks this
paradigm, but this study went further by inviting a test-set of experiments conducted independently
at LJMU, where much of the study design and instrumentation was different to that used for the
historical UWA data capture used to train the CNN models. Performance under these conditions
would address the most common criticism that somehow the deep learning model had prior knowledge
of test samples (or home-game advantage).
As demonstrated by this study, the use of strategies to automatically re-orient 3D accelerations
freed the operator from the typical requirements of an initialization posture or sensor calibration.
Both the Euclidean Norm and PCA rotation matrix methods solve a major hurdle for adoption in the
field while being more elegant than previous solutions [Lebel et al., 2016; Lipton, 1967; Luinge and
Veltink, 2005; Picerno et al., 2011; Wouda et al., 2018; Zimmermann et al., 2018]. The only drawback
being the look-ahead processing requirement which makes either solution ‘near’real-time, but this is
outweighed by the advantages including being agnostic to the direction of participant travel. With
no clear separation of performance characteristics, the two re-orientation methods warrant further
investigation, particularly when mathematically the Euclidean Norm solution is more straightforward
86
|
UWA
|
5.5. DISCUSSION
to implement.
In the competition between the classic CaffeNet model [Krizhevsky et al., 2012] and the more
recent ResNet-50 [He et al., 2015], CaffeNet seemed to perform more strongly where there was greater
signal strength, e.g. F , r(F ), and ResNet-50 in conditions of greater noise, F , which reflects the
z mean x
suitability of the models to each particular use-case, due to either CNN architecture or initial model
training. It was theorized that coarse networks like CaffeNet will perform better than deeper networks
when the raw source has been blown up to meet the image input requirements, in this case five sensors
interpolated to 227 pixels.
The LJMU test running data capture was carried out at a number of different speeds and ac-
celeration/deceleration profiles. In experiments, these were initially grouped by stance limb, and
subsequently by a custom L & R combination overlay technique. Time-normalizing the input data
according to stance, was expected to reduce the effect of different running speeds, however, variance
remained in the results: CaffeNet being the strongest performer, accNORM, running subtypes r(F )
mean
0.7189 ± 0.0984 (accNORM, 1.2–1.11), r(F ) 0.7366 ± 0.0854 (accPCA, 1.12–1.21); and some
mean
of the highest correlations were seen with the samples of running at moderate speed, perhaps due to
conformity with the source UWA training data.
Mean GRF F for ResNet-50 combined stance limb variants outperformed the weakest single
mean
limb versions (e.g. experiments 2.30 vs 2.22 and 2.23). This is an important finding because a
stance-independent model would be far more applicable to game scenarios where the landing limb is
unpredictable, and would remove a layer of movement classification hierarchy from the system. The
strength of ResNet over CaffeNet in this use-case reflects the preference of deeper CNN architectures to
reward greater raw detail with higher learning capacity. This is because these more recent models retain
the original size and granularity of the input image through a much longer sequence of convolutions.
In other words, ResNet combined L & R models performed better than a rudimentary mean, and
highlights the generalization of the proposed method.
The major limitation of this study is the selection of sensor locations. Whereas the shank sensor
accelerations were able to successfully identify stance limb (Figure 5.4), the vertical acceleration profile
at the shank was found to be insufficient to identify the FS event. The lack of mediolateral acceleration
energy for running trials was cited for the low F and associated mean GRF correlations, due to
x
the CNN model being unable to distinguish signal from noise for these movements. This finding
demonstrated the importance of sensors being located as distal as possible in each plane from the
center of mass, in order to maximize acceleration profiles, moreover the improvement in correlation
performance for sidestepping illustrated the ability of CNN models to distinguish sensor locations by
establishing unique internal 3D acceleration signatures. This location awareness is despite a combined
acceleration lag and smoothing effect most notable in the response from FS [Pataky et al., 2019],
contributed to by the evolution of the workbench code-base from marker-based motion capture input,
which down-sampled input accelerations to 250 Hz, and the proprietary on-board telemetry filtering.
Overall, the performance of the deep learning workbench for GRF correlations was impressive when
compared with the literature (traditional linear and data science methods) against a hypothesis more
demanding than the unidirectional vGRF (F ), movement classification, or counting of steps most
z
commonly investigated [Ancillao et al., 2018; Bertuletti et al., 2018; Clermont et al., 2018; Cust et al.,
89
|
UWA
|
5.6. CONCLUSIONS
2018; Hu et al., 2018; Jacobs and Ferris, 2015; Jie-Han et al., 2018; Pham et al., 2018; Thiel et al.,
2018; Verheul et al., 2018; Wouda et al., 2018; Zimmermann et al., 2018]. The hypothesis of mean GRF
and GRM correlations > 0.80 was supported for sidestepping r(F ) regardless of re-orientation
mean
methods, CNN models, and stance limb, including the combined experiment 1.31 (CaffeNet, accNORM)
which achieved 0.8753. It was noted that the definition of LJMU sidestepping execution at 90° was
more aggressive than that of UWA at 45–60°, but that suspected homogeneity in FS pattern inherent
to sidestepping with respect to running outweighed any protocol disadvantage.
Thedeeplearningworkbenchemployedbythisstudyhasdemonstratedapplicabilitytobiomechanics
4D input and multivariate waveform output. The success of this approach was partly due to the custom
nature of the code development, rather than the use of off-the-shelf functions. Plus, these results would
not have been possible without headless background batch operation, and on-the-fly generation of CNN
architecture and hyperparameter optimization instructions (‘prototxt’ files) allowing for the drop-in of
different models as required.
Future investigations should focus on expanding the number of test participants. To improve
acceleration signature identification and subsequent model performance, it is strongly recommended
to include sensors located at C7 and on each foot. The addition of gyrometer and magnetometer
sensor telemetry is expected to increase correlations (the Noraxon sensors used in this study were
accelerometers only), but would require synthesizing or gathering such information for model training.
5.6 Conclusions
A biomechanically relevant system of on-field workload exposure monitoring and acute injury prediction
could be a revolutionary contribution to player game preparedness and career longevity. Through
a unique “deep learning workbench for biomechanics”, using legacy marker trajectory trials against
new (and independent) accelerometer-driven data capture, the results from this study improve on the
literature, but under more challenging sport-related tasks and systematic conditions that make it more
relevant for on-field use. Model performance was dependent on gross movement pattern (running or
sidestepping) which will be improved by more sophisticated type classification. Both CaffeNet and
ResNet-50 demonstrated the ability to profile sensor body location from acceleration signatures. Efforts
to address the limitations of sensor distal location, number of test participants and training samples,
and downstream smoothing effects are expected to strengthen the accuracy for all movement types
and GRF/M vectors and will open up this technology for practical application. These results would
not have been possible without the multidisciplinary collaboration between sport science and computer
science, but the dogma of the invested linear approach and perceived data ownership remain a barrier
to adoption. The harvesting of existing team IMU telemetry archives using a deep learning workbench
as presented here has the potential to trigger a revolution in the accuracy and validity of wearable
sensors from community fitness to professional sport.
5.7 Acknowledgements
This project was partially supported by the ARC Discovery Grant DP160101458 and an Australian
Government Research Training Program Scholarship. NVIDIA Corporation is gratefully acknowledged
90
|
UWA
|
6.2. INNOVATION
Figure 6.1: Deep learning workbench for biomechanics.
6.2 Innovation
The research project was designed to be one of the first to take a machine learning approach to
multidimensional biomechanical data, and for its practical application to sport-related movements.
The studies stand out for their use of multi-tester and inter-laboratory results, the number of trials
analyzed concurrently, and the period of time over which the data was originally collected. As each
of the studies one through four was presented, this novelty surfaced as incremental contributions
which came together as the “deep learning workbench for biomechanics”. Within this workbench, the
individual data science components had previously existed in the literature, but in this configuration
and application to sports biomechanics the approach was unique (Figure 6.1).
• We believe study one was the first to mine big data to predict 3D GRF/M of a complex sport-
related movement pattern from marker-based motion capture, and using a reduced marker
set.
• Facilitated by its under-the-hood approach to deep learning, studies two through four were able
to apply CNN fine-tuning techniques to biomechanical data by flattening spatio-temporal input
data into 2D images to suit the requirements of the selected pre-trained model. Once the 3D
trajectory data had been re-packaged as 2D images, the selection of a suitable network from
which to fine-tune could be made based on the performance and proximity to this data in terms
of images rather than a stream of temporal coordinates. This was a vital enabler for the research
because it allowed the accuracy and fidelity benefits of deep learning to be gained from thousands
instead of millions of training samples.
• Study three demonstrated the use of the double-cascade technique from earlier model weights,
which when applied to biomechanical data, can lead to statistically significant improvements in
results.
• Study four created simulated acceleration training data from marker trajectories, and applied
strategies to automatically re-orient 3D accelerations which freed the operator from the typical
requirements of an initialization posture or sensor calibration. Both these steps were necessary
for the research to claim translation applicability to on-field environments. This study also
demonstrated the ability of deep learning models to profile the location of wearable sensors based
on a 3D acceleration signature.
98
|
UWA
|
6.4. REFERENCES
islands of IMU telemetry retained by sensor companies, research institutions, clinicians, and teams
of all sporting codes. Training at a new magnitude of data samples, together with the integration
of a front-end movement classifier (rather than the crude kinematic version employed by the study
prototypes), is expected to improve the accuracy, robustness, and applicability of wearable sensor
IMU-driven models. Future investigations to the data science aspects of the workbench include
consideration of 2D video input; advances in models from which to fine-tune; the configuration of
the flattening to 2D image inputs; and new deep learning frameworks (e.g. TensorFlow). The most
significant opportunities for improvement in terms of biomechanics are to reconfirm the number and
optimal locations on the body for sensor placement, and for independent gait event determination.
This research has demonstrated an end-to-end deep learning workbench for human movement,
one which opens the door for sports and clinical researchers to extract value from existing captive
data, and which has the potential to convert other spatio-temporal use-cases into reality. Driven from
a continuous stream of real-time kinematics provided by sensors and/or computer vision, new deep
learning models may revolutionize the monitoring of human movement and facilitate accurate and
multidimensional biomechanical analyses outside the laboratory.
6.4 References
B. D. Boudreaux, E. P. Hebert, D. B. Hollander, B. M. Williams, C. L. Cormier, M. R. Naquin, W. W.
Gillan, E. E. Gusew, and R. R. Kraemer. Validity of wearable activity monitors during cycling
and resistance exercise. Medicine and Science in Sports and Exercise, 50(3):624–633, 2018. ISSN
0195-9131.
P. S. Bradley and J. D. Ade. Are current physical match performance metrics in elite soccer fit for
purpose or is the adoption of an integrated approach needed? International Journal of Sports
Physiology and Performance, 13(5):656–664, 2018. ISSN 1555-0265.
T. W. Calvert, E. W. Banister, M. V. Savage, and T. Bach. A systems model of the effects of training
onphysicalperformance. IEEE Transactions on Systems, Man, and Cybernetics, SMC-6(2):94–102,
1976. ISSN 0018-9472.
A. Coutts, P. Reaburn, T. Piva, and A. Murphy. Changes in selected biochemical, muscular strength,
power, and endurance measures during deliberate overreaching and tapering in rugby league
players. International Journal of Sports Medicine, 28(02):116–124, 2007.
T. J. Gabbett. Debunking the myths about training load, injury and performance: Empirical evidence,
hot topics and recommendations for practitioners. British Journal of Sports Medicine, online:1–9,
2018.
E. S. Matijevich, L. M. Branscombe, L. R. Scott, and K. E. Zelik. Ground reaction force metrics are
not strongly correlated with tibial bone load when running across speeds and slopes: Implications
for science, sport and wearable tech. PloS one, 14(1):1–19, 2019. ISSN 1932-6203.
100
|
UWA
|
Prediction of ground reaction forces
and moments via supervised learning is
independent of participant sex, height
and mass
B.1 Abstract
Accurate multidimensional ground reaction forces and moments (GRF/Ms) can be predicted from
marker-based motion capture using Partial Least Squares (PLS) supervised learning. In this study,
the correlations between known and predicted GRF/Ms are compared depending on whether the PLS
model is trained using the discrete inputs of sex, height and mass. All three variables were found to be
accounted for in the marker trajectory data, which serves to simplify data capture requirements and
importantly, indicates that prediction of GRF/Ms can be achieved without pre-existing knowledge
of such participant specific inputs. This multidisciplinary research approach significantly advances
machine representation of real world physical attributes with direct application to sports biomechanics.
Keywords: Big data · Motion capture · Computer vision · Sports analytics.
B.2 Introduction
One of the ongoing challenges in sports biomechanics is that data accuracy necessary for the estimation
of internal and external musculoskeletal loads, and subsequent injury risk, requires dual data capture
of marker-based motion and embedded force plate derived GRF/Ms in controlled research laboratory
conditions[ElliottandAlderson,2007]. Somestudieshaveattemptedtobringthefieldtothelaboratory
by mounting turf on the surface of the force plate [Jones et al., 2009; Mu¨ller et al., 2010] while others
have adopted the reverse approach of taking measurement devices to the field, either by embedding
force plates into the playing surface [Yanai et al., 2017], or more commonly using in-shoe pressure
sensors[Liuetal.,2010;Simetal.,2015]. However, noneoftheseapproachesaresuccessfulinproducing
accurate GRF/Ms over three orthogonal axes without impacting athlete performance. Efforts to predict
GRF/Ms using non-invasive computer vision techniques show promise but either lack validation to a
gold standard [Soo Park and Shi, 2016; Wei and Chai, 2010] or relevance to sporting tasks [Chen et al.,
2014].
109
|
UWA
|
B.3. METHODS
Figure B.1: Overall study design.
The aim of this study is to test the accuracy of a PLS prediction model with and without the
discrete input variables of sex, height and mass which are often required in traditional biomechanical
data collection pipelines. The first investigation was that removal of sex, and second, that removal
of mass and height, would have negligible effects on predicted versus known GRF/Ms correlation
coefficients for running and sidestepping trials. We hypothesize that all three discrete variables are
inherently accounted for in the marker trajectory data.
B.3 Methods
Mining of archive data was carried out under The University of Western Australia (UWA) ethics
exemption RA/4/1/8415. The capture sessions were carried out at one of the university’s three
biomechanics laboratories over a 13–year period from 2004–2017 (the design of the study is shown in
Figure B.1), with participants drawn from a healthy population, male 69.1%, female 30.9 %, height
1,741 ± 102 mm and mass 69.75 ± 11.47 kg. Given the customised UWA marker set has evolved in
this period, the following subset of markers was selected to maximise trial inclusion: C7, sacrum; and
hallux, calcaneus and lateral ankle malleolus of each foot [Besier et al., 2003].
The laboratory motion capture equipment has varied during this time from 12–20 Vicon (Oxford
Metrics, Oxford, UK) near-infrared cameras of model types MCam2, MX13 and T40S. An AMTI force
plate (Advanced Mechanical Technology Inc., Watertown, MA, USA) 1,200 × 1,200 mm installed
flush with the floor measured the six GRF/Ms. Equipment setup and calibration was conducted to
manufacturer specifications using Vicon proprietary software (Workstation v5.2.4 to Nexus v2.2.3),
with data stored in the industry standard ‘coordinate 3D’ c3d file format (Motion Lab Systems, Baton
Rouge, LA).
Several pre-processing steps were applied to maximise the integrity of data before training the
PLS model. The foot-strike event was automatically determined by detecting vertical force greater
than a threshold (20 N) over a defined period (0.025 s) along with the vertical and lateral velocities
(0.02 m/s and 0.15 m/s) of the dominant foot calcaneus marker [Milner and Paquette, 2015]. The
lead-in period before foot-strike was deemed more important for the predictor variable (kinematic
marker trajectories), and therefore contiguous marker data was trimmed around the foot-strike event
from −0.20 to +0.30s (125 frames), and force plate data from −0.05 to +0.30s (700 frames). Analog
force plate data sampled at frequencies lower than 2,000 Hz and motion capture lower than 250 Hz
were time normalized using piecewise cubic spline interpolation. Sex, height and mass were obtained
110
|
UWA
|
B.3. METHODS
Figure B.2: Comparison of mean prediction GRF/Ms correlations.
from the associated mp file (mp is a proprietary extensible mark-up language XML file format used
by Vicon that stores participant specific session and anthropometric data). Children were excluded
by rejecting trials where the participant height was less than 1,500 mm, this being two standard
deviations below the average Australian adult female height 1,644 ± 72 mm [Ward, 2011]. Trials
with duplicate marker trajectories were rejected, with no regard whether the data was filtered, the
sidestep planned or unplanned, footing crossover or regular, or foot-strike technique. Trials where the
participant movement or the start/end GRF/Ms were unexpected were also removed. Data analysis
was conducted using MATLAB R2016b (MathWorks, Natick, MA) with the Biomechanical ToolKit
v0.3 [Barre and Armand, 2014] under Ubuntu v14.04 (Canonical, London, UK) running on a desktop
PC, Core i7 4GHz CPU, with 32GB RAM.
Marker trajectories for three movements types were selected as follows: ‘run’ (761 trials), ‘sidestep
bilateral’ (1,494 trials), and ‘sidestep left’ (1,277 trials). The combination bilateral sidestep group
was created by flipping ‘sidestep right’ trajectories laterally about the global origin and adding to the
sidestep to the left (off the right foot) cohort. If not a sidestep, movement type ‘run’was classified
by forward motion of at least 2.0 m/s. To test the hypotheses, three use-cases were investigated: (1)
variables sex, height and mass included; (2) without sex; and (3) without height and mass. To avoid
overfitting, 10-fold cross-validation was used on the each of the movement type data sets. Trained
using the 80 % training set, then presented with the 20 % test set, a subclass of PLS called Sparse
SIMPLS running in R [Chun and Kele¸s, 2010; R Core Team, 2016] was used to predict GRF/Ms from
marker trajectories which were then compared with the known force plate values. The mean correlation
coefficient was determined for each of the six GRF/Ms and over the ten folds of data. The mean
prediction correlation of the three GRF/Ms provided a measure of model performance in two numbers
r(F ) and r(M ).
mean mean
111
|
UWA
|
B.4. RESULTS & DISCUSSION
Figure B.3: Maximum prediction GRFs correlations ‘sidestep left’ (known blue ticks,
predicted red).
B.4 Results & discussion
Systematic removal of the discrete variables of sex, then height and mass, from the input to the PLS
model had a negligible effect on the prediction of GRF/Ms, and the results for the three movement
types of ‘run’, ‘sidestep bilateral’ and ‘sidestep left’ are shown in Figure B.2. The most accurate
prediction pair of r(F ) 0.9803 and r(M ) 0.9004 were recorded for movement ‘sidestep left’.
mean mean
The ‘run’ movement type reported the weakest prediction r(F ) 0.9152 and r(M ) 0.7568 from
mean mean
the smallest sample size (761 trials). The overall high correlations illustrated the suitability of PLS
for this type of regression (temporal profile, number of predictor and output features, and number of
samples).
These results did not improve on earlier testing with a smaller sample r(Fmean) 0.9804 and
r(M ) 0.9143 (movement ‘sidestep left’, 441 trials) which suggests PLS performance is dependent
mean
on more than sample size. However, mean results were higher than the nearest comparison in the
literature of Oh et al. [2013], who report maximum r(F ) 0.9647 and r(M ) 0.8987 values for
max max
walking.
Theproximityofthecorrelationswithandwithoutsex, heightandmassindicatedallthreevariables
are fully contained in the marker trajectories. Therefore, the hypothesis that all three discrete variables
are inherently accounted for in the marker trajectory data was proven. To illustrate the prediction
of the PLS model, the individual sample of movement type ‘sidestep left’(data fold one) with the
maximum r(F ) 0.9994 is shown in Figure B.3.
mean
B.5 Conclusion
This study shows that the prediction of GRF/Ms from motion data using PLS supervised learning
can be achieved without prior knowledge of participant sex, height and mass. The predicted mean
correlations of GRF/Ms reported are higher than maxima in the literature and obtained using fewer
markers, however the results illustrate both the suitability and some of the limitations of approaches
employing PLS. Lessons learned in the data mining and pre-processing can now be applied to more
advanced regression techniques. This study is the first practical application of predicting GRF/Ms
from marker trajectories of complex sporting movements without the use of a force plate. The discovery
that discrete inputs of sex, height and mass are not required is a major simplification of data capture
and contributes to the goal of real-time accurate GRF/Ms outside of laboratory settings. Paired with
less invasive methods of motion capture (computer vision or inertial sensors), the overarching goal
of this project to achieve accurate real-time prediction of GRF/Ms in the field is within reach. The
independence of the model to the participants’ sex (female or male) may also have implications for
112
|
UWA
|
B.6. ACKNOWLEDGEMENTS
female athlete training and injury prevention. Large-scale mining of archive biomechanics data is novel,
and this study illustrates the practical outcomes which can be achieved from such a big data approach.
B.6 Acknowledgements
This project is partially supported by the NVIDIA GPU Hardware Grant Program, by ARC Grant
DP160101458 and an Australian Government Research Training Program Scholarship.
B.7 References
A. Barre and S. Armand. Biomechanical ToolKit: Open-source framework to visualize and process
biomechanical data. Computer Methods and Programs in Biomedicine, 114(1):80–87, 2014. ISSN
0169-2607.
T. F. Besier, D. L. Sturnieks, J. A. Alderson, and D. G. Lloyd. Repeatability of gait data using a
functional hip joint centre and a mean helical knee axis. Journal of Biomechanics, 36(8):1159–1168,
2003. ISSN 0021-9290.
N. Chen, S. Urban, C. Osendorfer, J. Bayer, and P. Van Der Smagt. Estimating finger grip force from
an image of the hand using convolutional neural networks and gaussian processes. In 2014 IEEE
International Conference on Robotics and Automation (ICRA), pages 3137–3142. IEEE, 2014.
ISBN 1479936855.
H. Chun and S. Kele¸s. Sparse partial least squares regression for simultaneous dimension reduction
and variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology),
72(1):3–25, 2010. ISSN 1467-9868.
B. Elliott and J. Alderson. Laboratory versus field testing in cricket bowling: A review of current and
past practice in modelling techniques. Sports Biomechanics, 6(1):99–108, 2007. ISSN 1476-3141.
P. L. Jones, D. G. Kerwin, G. Irwin, and L. D. Nokes. Three dimensional analysis of knee biomechanics
when landing on natural turf and football turf. Journal of Medical and Biological Engineering, 29
(4):184–188, 2009. ISSN 1609-0985.
T. Liu, Y. Inoue, and K. Shibata. A wearable ground reaction force sensor system and its application
to the measurement of extrinsic gait variability. Sensors, 10(11):10240–10255, 2010.
C. E. Milner and M. R. Paquette. A kinematic method to detect foot contact during running for all
foot strike patterns. Journal of Biomechanics, 48(12):3502–3505, 2015. ISSN 0021-9290.
C. Mu¨ller, T. Sterzing, J. Lange, and T. L. Milani. Comprehensive evaluation of player-surface
interaction on artificial soccer turf. Sports Biomechanics, 9(3):193–205, 2010. ISSN 1476-3141.
S. E. Oh, A. Choi, and J. H. Mun. Prediction of ground reaction forces during gait based on kinematics
and a neural network model. Journal of Biomechanics, 46(14):2372–2380, 2013. ISSN 0021-9290.
113
|
UWA
|
B.7. REFERENCES
R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical
Computing, Vienna, Austria, 2016. URL www.R-project.org.
T. Sim, H. Kwon, S. E. Oh, S.-B. Joo, A. Choi, H. M. Heo, K. Kim, and J. H. Mun. Predicting
complete ground reaction forces and moments during gait with insole plantar pressure information
using a wavelet neural network. Journal of Biomechanical Engineering, 137(9):091001:1–9, 2015.
ISSN 0148-0731.
H. Soo Park and J. Shi. Force from motion: Decoding physical sensation in a first person video.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages
3834–3842, 2016.
S. Ward. Anthropometric data and australian populations – do they fit? In HFESA 47th Annual
Conference 2011, 2011.
X. Wei and J. Chai. Videomocap: Modeling physically realistic human motion from monocular video
sequences. In ACM Transactions on Graphics (TOG), volume 29, page 42. ACM, 2010. ISBN
1450302106.
T. Yanai, A. Matsuo, A. Maeda, H. Nakamoto, M. Mizutani, H. Kanehisa, and T. Fukunaga. Reliability
and validity of kinetic and kinematic parameters determined with force plates embedded under
soil-filled baseball mound. Journal of Applied Biomechanics, 0(0):1–18, 2017. ISSN 1065-8483.
114
|
UWA
|
Multidimensional ground reaction
forces and moments from wearable
sensor accelerations via deep learning
C.1 Introduction
It is currently not possible to record accurate 3D ground reaction forces and moments (GRF/Ms) in the
field of play [Boudreaux et al., 2017; Karatsidis et al., 2016]. By harvesting archive laboratory-based
motion capture and force plate data, this proof of concept investigated using cutting-edge machine
learning to predict GRF/Ms from just three wearable sensors. To assess the potential of this approach,
the study aimed to achieve average correlations rGRF and rGRM greater than 0.80.
mean mean
C.2 Methods
The CaffeNet deep learning convolutional neural network (CNN) trained on 150,000 ImageNet images
[Krizhevsky et al., 2012] was used first to fine-tune a multivariate regression model between marker-
based laboratory motion capture of 2,355 sidestepping left trials using eight retro-reflective passive
markers (Vicon, Oxford, UK) and associated force plate recorded GRF/Ms (AMTI, Watertown, MA,
USA).
From this CNN model, a subsequent transfer learning technique (double-cascade) was applied to
associateaccelerationmagnitudestogroundtruthGRF/Msforthesamesidestep,withaccelerationdata
either synthesized or recorded at three locations: upper back, sacrum, and lateral shank. The model
was trained using accelerations synthesized from 2,548 trials of marker data via double-differentiation
of displacements, and predictions made with five trials of accelerations recorded by Xsens MTw inertial
sensors (Xsens, Enschede, The Netherlands). The acceleration magnitude (Euclidean norm) was used
to avoid the difference in coordinate systems (global versus sensor-independent) between the synthetic
and recorded accelerometer data.
C.3 Results
The correlations between GRF/Ms recorded by the force plate, and those predicted by the double-
cascade model were rGRF 0.86 and rGRM 0.81 (Figure C.1. By comparison, Karatsidis et al.
mean mean
[2016] reported rGRF 0.94 and rGRM 0.82 (11 trials, 17 sensors, full-body suit, walking gait)
mean mean
116
|
UWA
|
C.4. DISCUSSION
using a linear approach. Whereas the CNN achieved stronger correlations for the horizontal forces F
x
and F (and corresponding moments), Karatsidis and colleagues were more accurate in F and F .
y y z
C.4 Discussion
The study’s goal of attaining average correlations greater than 0.80 was achieved, demonstrating the
potentialfordeeplearningtomineexistinghigh-fidelitylaboratorydatacapturetoincreasetheaccuracy
and validity of unidimensional wearable sensor outputs. Within the limitations of correlation analysis,
the strong results from acceleration magnitudes encourage subsequent investigation using aligned
directional components. Improving the correspondence of training markers to test accelerometers, and
using more recent sensor technology is expected to produce further improvements. These results herald
on-field practical application of wearable sensor technologies for laboratory-fidelity biomechanical
analyses.
C.5 Acknowledgements
NVIDIA Corporation GPU Grant Program; ARC DP160101458; and the Australian Government
Research Training Program.
C.6 References
B. D. Boudreaux, E. P. Hebert, D. B. Hollander, B. M. Williams, C. L. Cormier, M. R. Naquin, W. W.
Gillan, E. E. Gusew, and R. R. Kraemer. Validity of wearable activity monitors during cycling
and resistance exercise. Medicine and Science in Sports and Exercise, 2017. ISSN 1530-0315.
A. Karatsidis, G. Bellusci, H. M. Schepers, M. de Zee, M. S. Andersen, and P. H. Veltink. Estimation
of ground reaction forces and moments during gait using only inertial motion capture. Sensors, 17
(1):1–22, 2016.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural
networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
118
|
UWA
|
Multidimensional ground reaction
forces predicted from a single
sacrum-mounted accelerometer via
deep learning
E.1 Summary
There is a gap between the understanding of the mechanisms of sporting injury and the ability to
monitor these risk parameters during a game. A single-sensor wearable solution which could be used
in lieu of captive laboratory biomechanical instrumentation would be a breakthrough for accurate
and valid on-field monitoring of athlete performance and safety. Using archive motion capture and
force plate data from The University of Western Australia, with new accelerometer trials conducted
at Liverpool John Moores University, this proof of concept exploited deep learning techniques to
investigate prediction of 3D ground reaction forces (GRF) from a single sacrum-mounted accelerometer.
E.2 Introduction
It is currently not possible to record accurate 3D GRF from a single wearable sensor in the field of play
[Callaghan et al., 2018; Nedergaard et al., 2018]. This study aimed to establish if accurate 3D GRF
prediction with mean correlations above 0.85 from running and sidestepping trials was achievable.
E.3 Methods
The CaffeNet deep learning convolutional neural network model (CNN) pre-trained on ImageNet big
data [Krizhevsky et al., 2012] was used first to fine-tune a multivariate regression model between
marker-based laboratory motion capture (Vicon, Oxford, UK) and associated force plate recorded
GRF (AMTI, Watertown, MA) from archive left sidestepping (right-stance) trials [Johnson et al.,
2019]. From this strongest seed model, a subsequent double-cascade technique was used to transfer 3D
accelerations to ground truth GRF. This model training was undertaken using accelerations simulated
viadouble-differentiationofmarkertrajectories, andGRF(Vicon/AMTI),andtestedwithaccelerations
recorded by a single Noraxon DTS-3D 518 accelerometer (Noraxon, Scottsdale, AZ) located at the
122
|
UWA
|
F.5. DATA REPOSITORY
To alleviate the hardware risk of GPU overheating, cus-
tom Python gostat.py was used to monitor on-board tem-
perature and initiate email and text message alerts for
> 90°C (absolute) and > 30°C (predefined delta limit,
figure F.2).
F.5 Data repository
Training deep learning models from scratch requires
a vast number (millions) of data samples [Krizhevsky
et al., 2012], and even fine-tuning from an existing model
(cs231n.github.io and adilmoujahid.com) calls for more
movementtrials(thousands)thanarecapturedfromindivid-
Figure F.2: iOS notification warn-
ual biomechanics studies. Therefore the first (and ongoing)
ings. Alerts generated by the rise (below)
priority for the project was to gather as much movement
andfall(above)oftheGPU1temperature
dataaspossible,andtothisendrawfile-systemswerecopied (indicatedbyasterisk)throughpredefined
from a number of sources (Table F.1): laboratory personal delta limits.
computers (pc); other school file-shares (irds); hard-drives (internal and external) and CD-ROMs
stored in a fire-safe in the UWA Sports Biomechanics Laboratory (hddsafe, cdrsafe); and external
upload by school alumni via Google Drive (gdr). Of the twelve hard-drives retrieved from the fire-safe,
five (42 %) suffered fatal hardware failures rendering their data partially or completely unusable. Data
engineering (collecting, labeling, cleaning, and organizing) is often the forgotten side of data science
but cited to take 53 % of time and effort [CrowdFlower, 2017], a proportion reflected by this project.
As at August 2018, the ongoing result was a total of 458,372 c3d files gathered from data capture
sessions across multiple laboratories over a 17–year period from 2001–2017, and believed to be one of
the largest single repositories of sports biomechanics movement data in the world.
Initially at least this repository was expected to grow at a rate of 2–5 TiB/month, and storage in the
cloud made sense primarily for reasons of cost and data integrity. Although a locally-attached storage
array or Network Attached Storage (NAS) device would have provided faster storage bandwidth, the
cost and physical security requirements were prohibitive. IRDS (library.uwa.edu.au) was selected as
the primary cloud storage facility because it is provided to UWA research students free of charge, and
beingaUWAresourceitiscontractedtomeetuniversityethicsrequirements(UWAManagingResearch
Data research.uwa.edu.au, UWA Research Data Management Plan guides.library.uwa.edu.au).
However, relying on IRDS had drawbacks, not least because it was contracted to be the lowest-tier
of cloud storage provided by Vocus and was missing features much of the industry would today
consider mandatory (vocus.com.au, Table F.2). IRDS was accessed from Ubuntu via persistent
file-system mount points using the Common Internet File-System protocol (CIFS, cifs.com). IRDS
CIFS presentation is only available on the UWA cabled network, including Virtual Private Network
(VPN), but not on UWA wireless networks. IRDS is also presented via the Web Distributed Authoring
and Versioning protocol (WebDav, tools.ietf.org/html/rfc4918) for access via Microsoft Windows
File Explorer or Apple macOS Finder. WebDav authentication is only available with a UWA staff (not
student) account, and the connection was found to be unreliable with at least 50 % of window refresh
128
|
UWA
|
F.5. DATA REPOSITORY
Table F.1: Data repository sources1.
Foldername(data−set/idxsourcedetaildatec3dfiles) c3dfiles Files Size Notes
001pccm48015111024519 24519 157141 292GiB
002pccm48216011157 57 1627 2.6GiB
003pccm4201511180 0 72665 77GiB
004irdsseanbyrne16043020066 20066 144547 410GiB
005pcG5NLNW1viconoutdoor 160518biom2634 2634 134929 2.4TiB
006hddsafeWCAS83354103dexa160523part297 297 0 0 HDDcrashed,failedchkdsk
007pcC6F7T92scissehwd043160525biom25 25 724 1.8GiB
008hddsafeWMAEP1897822mri160526part2277 2277 127313 141GiB HDDcrashed,failedtoregisterasdrive
009hddsafeWMAM9AZ77728tidman1605260 0 3757 3.6GiB
010cdromkanesoriginalviconswimming 160530172 172 1394 4.5GiB
011pc17F6GY1scissehwd008gaitpmh160526250 250 6517 1.2TiB
012pcJ5RV32Sbiomvicongait1606073045 3045 63386 208GiB
013hddsafeWMAEH1356916biomarchive16060915944 15944 123488 311GiB
014hddsafeWMAEH1332502biomarchive160610123 123 29892 190GiB
015hddsafeWMAEH1402175biomarchive200816082474672 74672 331556 459GiB
016hddsafeKRVN03ZAGZT5GDvicon200916082529179 29179 231428 555GiB
017hddsafe408029498biomarchivebefore2008pt216082511245 11245 88383 219GiB
018hddsafe408029497biomarchivebefore2008pt11608250 0 0 0 Failedtoregisterasdrive
019hddsafeblueeye0650567831608260 0 0 0 Failedtoregisterasdrive
020cdrsafebox1611150 0 10187 18GiB
021hddsafeb61eq2shmaxtorbmresearchpmills16111531018 31018 298522 498GiB
022hddsafe3JS04SQ9external1611180 0 0 0 Failedtoregisterasdrive
023hddjonathanstaynor 16120713287 13287 224665 456GiB
024cdrsafestack 1612140 0 17 1.5GiB
025hdddanielcottam1702032890 2890 44614 1.7TiB
026hddshinalee1702052565 2565 46752 343GiB
027gdr sarahstearne1702074755 4755 45307 143GiB
028hddjacquelinealderson17022636689 36689 386414 654GiB
029gdr kanemiddleton1702203240 3240 26546 68GiB
030hddamitycampbell17022849290 49290 383096 2.2TiB
031hddmarcuslee1703015098 5098 165101 229GiB
032hddstacyfoo170301348 348 11318 267GiB
033hdddennywells17031415682 15682 159036 1.4TiB
034gdr waynespratford1703151126 1126 8488 30GiB
035hdddanielcottamoverlaydatacapture1607061703163 3 483 186GiB
036irdsussdsbp17040366831 66831 610073 1.4TiB
037hddjonathanstaynorgillianweir0941151704036148 6148 52777 171GiB
038gdr alasdairdempsey 1705305061 5061 121225 187GiB
039www publicdataset170601114 114 345 1.3GiB demotu.org/datasets/running
040gdr gillianweir 1707103540 3540 47004 220GiB
041gdr trentonwarburton170720470 470 10654 23GiB
042gdr sarahstearne170207b526 526 4369 13GiB
043hddsparc1709012657 2657 45234 150GiB
044new curtinxsens17120410 10 124 355MiB
045gdr waynespratford170808567 567 3413 6.4GiB
046irdsbbud18030221952 21952 294438 4.0TiB
1AsatJune24,2018. IRDSfileshare//drive.irds.uwa.edu.au/SSEH-MCR-002raw28TiB,91%used,458,372c3dfiles.
Table F.2: Cloud storage research feature requirements.
Feature IRDSsupport Detail
CIFSprotocol Partial OnlyonUWAcablednetworkorVPN(notwireless)
WebDavprotocol Partial Slow,unreliable,onlywithUWAstaffaccount
Backup&restore Yes Filesautomaticallybackedup
Accesscontrol Yes Administratoranduser-levelauthority
Auditing No Logwhomadethechangeandwhen
Versioning No Userabilitytorollbackchanges
Externalaccess No OnlyaccessibleviaUWAuseraccounts(oncampusorviaVPN)
Bulkupload No e.g. AmazonSnowmobile(aws.amazon.com/snowmobile)
129
|
UWA
|
F.6. APPLICATION DEVELOPMENT ARCHITECTURE
Table F.3: Throughput comparison of UWA cloud storage providers1.
Cloudstorage UploadMb/sec DownloadMb/sec
IRDS 11.03 11.03
GoogleDrive 0.79 4.01
CloudStor 197.40 9.65
1Mean of three clean transfers of 127 MiB file, conducted June 24, 2018.
operations requiring a client machine restart. In semester one, 2017, IRDS suffered two unscheduled
downtime events, reached capacity (campus-wide), and the project encountered file-system corruption
(client machines unable to delete files, UWA service ticket INC1000653).
UWA network architecture positions IRDS within its perimeter, and therefore the resource is
only accessible to students and staff with UWA credentials. While it is possible to assign external
collaborators with UWA non-employee user accounts the administration involved is extensive. In
recognition of this limitation, UWA Ethics (Mark Dixon, November 22, 2016) granted the project
permission to use Google Drive as an intermediary for external parties. Google imposes a 15 GB
capacity limit for unregistered accounts which also made it difficult to use for this project because
most file transfers were larger than this (support.google.com/drive/answer/2375123).
CloudStor from aarnet (aarnet.edu.au) was a potential replacement candidate for IRDS, which
also met university ethics requirements, although only the first 100 GB of storage was free. Accessed via
davfs file-system mount points (savannah.nongnu.org/projects/davfs2), attempts to use CloudStor
during 2017 seemed throttled by UWA network Quality of Service (QoS) rules and a service ticket
opened in April 2017 (clous-1704) received no reply. By June 2018 these restrictions appeared to have
been resolved (Table F.3).
The data repository on IRDS contained all manner of file types, therefore in order to speed up code
execution, only the necessary movement files (and folder structure) were periodically copied to a local
cache file-system (a 6 TiB logical volume on the Ubuntu Application Server \local spanning the three
Western Digital 2 TiB hard-drives) by the custom script goirds.sh. To save time, processes to copy
files into the repository were executed in parallel on both the Application and Software Distribution
servers, often making use of the virtual terminal facility screen. Regardless, if this data repository
is to be maintained beyond 2017–2018 it is the recommendation of the project that it be moved to
another cloud storage provider.
F.6 Application development architecture
Development of a project of this scale and complexity required consideration and global design of
the application architecture (version control, code archive, error handling and alerts) else it risked
becoming a run-once collection of scripts. A formal approach to code integrity also guaranteed the
ability to roll-back to a point in time, useful to reproduce published results for example. Because of the
proximity to operating system commands, these operations were generally achieved via bash scripts,
except in the case of the email/text interface which for speed and flexibility (and practise) was written
in Python.
130
|
UWA
|
F.6. APPLICATION DEVELOPMENT ARCHITECTURE
F.6.1 Version control
With one client laptop, two servers, and two cloud storage resources (IRDS and Google Drive) it was
imperative to manage code version control. The client laptop was deemed the master for all code
source. Execution was run headless using secure shell ssh to the Application Server via a bash wrapper
gossh.sh running on the client. If the version of any project module on the client was found to be
more recent than that on any of the five file-systems, then the newer version was copied via rsync.
The Raspberry Pi single-board computer was nominated as the Software Distribution Server for two
reasons, first since it was used by gossh.sh as the mount point for the IRDS file-system, and second a
secure shell connection to it automatically prompted the user to scan for version control changes.
14-Aug-2018 14:39:49 Scan for and install application updates? [y/N/q]
A sweep for changes could also be initiated atany time byusing the update parameter on the gossh.sh
command-line.
F.6.2 Code archive
Periodically, and at the end of a major update (e.g. a conference abstract or paper release), the custom
script goark.sh was used to take a snapshot of all code at the specific point in time. This included
all code output, and cleaning up of trash folders. The version number in the comment header of all
modules was updated automatically using the Unix stream editor sed.
#----- Build Author Date Change
#----- a36 wrj 27jun2018 alpha release
F.6.3 Error handling & alerts
Execution of machine learning code often takes from hours to days (for k-fold analysis for example)
and therefore headless operation with batch background jobs initiated remotely over ssh was used,
removing the need to maintain active desktop sessions and thus removing a major risk of failure. This
batch operation required a consistent approach to error handling and reporting format, ideally with
email and text message alerts. Handshake protocols were used between different environments, with
positive exchange of tokens via text files, e.g. between MATLAB and Python.
15-Jun-2018 14:06:47 Start external command ‘/usr/bin/python
"/irds/sseh-mcr-002/
MATLAB/caffehdf5.py" "/local/tmp.mcrnet/caffe training set inputX U"
"/local/
tmp.mcrnet/caffe training set outputy U" "/local/tmp.mcrnet/caffe training set"
True 227 125 3 True’
15-Jun-2018 14:06:47 Handshake ON
15-Jun-2018 14:06:49 Start caffehdf5.py
15-Jun-2018 14:06:55 End caffehdf5.py
131
|
UWA
|
F.6. APPLICATION DEVELOPMENT ARCHITECTURE
Figure F.4: iOS text message alert. Longer
Figure F.3: iOS Gmail message. To alert to
the completion of one training run; the learning
batchjobslikegoark.shbenefittedfromstart/end
text messages.
curve and a bar-chart of training:test proportions
are included as attachments.
15-Jun-2018 14:06:55 End external command (elapsed 00:00:07)
15-Jun-2018 14:06:55 Resume MATLAB, handshake ‘PASS’
Where possible in the code language and offered by the interface, unexpected and critical messages
(prefixed ‘WARNING’ and ‘ERROR’ respectively) were piped to stderr to distinguish them from
stdout. The error handling subsystem was also used for model training progress reporting. This
approachofheadlessbatchexecution,andemail/txtreportingwasthesingle-mostimportantapplication
architecture factor in the success of the project.
Custom Python rmailer.py was used as the single interface between the project and ssmtp for
email exchange (Figure F.3). A commercial email-to-text gateway was used to send critical text
messages (Figure F.4).
F.6.4 Code modules
Source code was organized into folders with similar modules grouped together (Figure F.5). Scripts
and programs called from the command-line were named with the prefix ‘go’. The project consisted of
fifty modules with a total of 13,267 lines of code (Table F.4).
• Top level, major code blocks;
• / bin, system-wide bash and startup scripts;
132
|
Virginia Tech
|
An Improved Model for Prediction of PM from Surface Mining Operations
10
William Randolph Reed
(ABSTRACT)
Air quality permits are required for the construction of all new surface mining operations. An air
quality permit requires a surface mining operation to estimate the type and amount of pollutants
the facility will produce. During surface mining the most common pollutant is particulate matter
having an aerodynamic diameter less than 10 microns (PM ).
10
The Industrial Source Complex (ISC3) model, created by the United States Environmental
Protection Agency (U.S. EPA), is a model used for predicting dispersion of pollutants from
industrial facilities, including surface mines and quarries. The use of this model is required
when applying for a surface mining permit. However, the U.S. EPA and mining companies have
repeatedly demonstrated that this model over-predicts the amount of PM dispersed by surface
10
mining facilities, resulting in denied air quality permits.
Past research has shown that haul trucks create the majority (80-90%) of PM emissions from
10
surface mining operations. Therefore, this research concentrated on improving the ISC3 model
by focusing on modeling PM emissions from mobile sources, specifically haul trucks at surface
10
mining operations.
Research into the ISC3 model showed that its original intended use was for facilities that emit
pollutants via smoke stacks. The method used to improve the ISC3 model consisted of applying
the dispersion equation used by the ISC3 model in a manner more representative of a moving
haul truck. A new model called the Dynamic Component Program was developed to allow
modeling of dust dispersion from haul trucks.
To validate the Dynamic Component Program, field experiments were designed and conducted.
These experiments measured PM from haul trucks at two different surface mining operations.
10
The resulting analysis of the Dynamic Component Program, ISC3 model, and the actual field
study results showed that the Dynamic Component Program was a 77% improvement over the
ISC3 model overall.
|
Virginia Tech
|
Chapter 1 Introduction
1.0 Statement of the Problem
Mining operations acquire air quality permits prior to completing any new construction,
modifications, or expansions (VA DEQ, Air Permitting Guidelines, 1996). This can involve
estimating pollutant emissions from the existing surface mining operations, which can be a
complex and lengthy process. Modeling of emissions may be required depending upon the
amount of pollutants emitted. Many existing operations may not have air quality permits
because they were in operation before the regulations were enacted. But any new construction
that results in an expansion will require the facility to obtain an air quality permit. Contingent
upon the location of the surface mine, the undertaking of a modification or new construction
project may result in stricter air pollution controls having to be installed in order to obtain
approval from the permitting agency. The result may be that the entire facility, with the new
pollution controls, does not have the ability to meet the new air pollution requirements thus
resulting in a denial of approval from the permitting agency. Another possible impact of the
expansion is that the new pollution controls may be so costly to install and operate that they may
change the economics of the entire operation possibly causing the facility to lose economic
viability resulting in shutdown.
When issued, air quality permits require a facility to be built to operate below the PM
10
levels calculated by the dispersion model. One might think that over-prediction by the model
would result in a benefit to the mining industry. However, it does not; over-prediction hinders
mining because many facilities can be denied air quality permits on the basis of modeling results
(Cole and Zapert, 1995). Therefore, the mining industry needs a model that can accurately
predict PM levels.
10
1.1 Background
The United States Environmental Protection Agency (U. S. EPA) has created a list of
approved equations that attempt to quantify the amount of pollutants, including dust, that
emanate from specific operations throughout the entire industrial spectrum. These equations are
based upon observations and testing of specific industrial operations, and they attempt to predict
1
|
Virginia Tech
|
the amount of pollutants that form from these operations. The dust specific equations quantify
dust in the size ranges of 30 microns (:m) and below. A listing of the equations, called
emissions factors, can be found in the U. S. EPA(cid:146)s AP-42 (U. S. EPA, AP-42, 1995). These
emissions factors are used to determine the amount of pollutant produced by an operation and are
then input into a model for dispersion modeling.
Numerous models have been created that attempt to predict the dispersion of pollutants as
they are released into the atmosphere. Most address pollutants such as Oxides of Nitrogen
(NO ), Sulfur Oxides (SO ), Carbon Monoxide (CO), and Volatile Organic Compounds (VOC).
x x
These models do not generally apply to mining operations because mining operations do not
have significant emissions of these pollutants. The major pollutant emitted by mining facilities
is particulate matter less than 10 :m (PM ). Therefore, only the models that predict the
10
dispersion of PM are of interest to mining operations.
10
The ISC3 is the model that state and federal regulating agencies accept for use by mining
operations to estimate pollutant dispersion from mining facilities. It is used for estimating
concentrations of various pollutants such as CO, NO , SO , VOC, and lead (Pb); and it can be
x x
used for predicting concentrations of Particulate Matter, specifically PM (U. S. GPO, Code of
10
Federal Regulations, Title 40, Part 51, 2002). The ISC3 model has routines for short-term and
long-term applications, and it has a routine for estimating concentrations of pollutants from open
pits (Schnelle and Dey, 2000). This model can also calculate deposition of particulate matter but
it requires more inputs such as particle size distribution and particle density (U. S. EPA, User(cid:146)s
Guide Vol II, 1995).
Personnel representing the stone quarrying industry who have used the ISC3 dispersion
model state, (cid:147)The ISC3 model over-predicts the concentrations of PM generated from surface
10
mining facilities.(cid:148)1 This over-prediction of PM has also been documented in an U. S. EPA
10
study on surface coalmines in the western United States (U. S. EPA, Modeling Fugitive Dust
Phase III, 1995). Recently, the Texas Natural Resource Conservation Commission has
1 Cole, Clifford F., and Zapert, James G.; Air Quality Dispersion Model Validation at Three
Stone Quarries. (Washington D.C.: National Stone Association, January 1995) 2.
2
|
Virginia Tech
|
acknowledged that the ISC3 model over-predicts pollutant concentrations for near-ground
fugitive emissions (Ruggeri, 2002).
1.2 Proposed Solution
The overall goal of this research is to create a model that can more accurately predict the
dispersion of PM10 from surface mining operations than the existing ISC3 model. This goal will
be accomplished by meeting the following objectives:
In order to predict dust propagation more accurately than the ISC3 model, a mining-based
model must be created. Hauling at surface mines creates the greatest amount of PM emissions
10
for the facility. Therefore, this mining-based model will focus on modeling the dispersion of
PM from hauling operations. This model will be based upon the dispersion algorithm of the
10
ISC3 model, but it will estimate the dispersion of PM from haul trucks at surface mining
10
operations.
A field study will be conducted to validate the results of the new model. Since the new
model focuses on haul trucks at a surface mining operation, the field study will be conducted on
the same. The field study will sample dust from haul trucks at a surface mining operation in
order to test the new model. It will be designed to obtain information required so that
comparisons can be made between the old ISC3 model and the new model. In addition, the field
study will be conducted on haul trucks at actual mining operations to represent (cid:147)real-life(cid:148)
situations in order to make the new model as accurate as possible.
1.3 Methodology
This dissertation is separated into seven chapters, each presenting a necessary step
required to accomplish the overall project goal. Chapter Two presents a literature review on
modeling the dispersion of pollutants. Chapter Three covers the creation of the new model that
attempts to correct the over-prediction of the ISC3 model. Chapter Four discusses the
methodology behind the field study that is completed in order to validate the new model.
Chapter Five discusses the analysis of the gravimetric dust concentration results from the field
study. Topics covered are time-weighted-average dust concentration analysis, particle size
distribution of airborne dust, and instantaneous dust concentration analysis. Chapter Six
discusses the comparison of the results of the field study to the results of the ISC3 model and the
3
|
Virginia Tech
|
Chapter 2 Literature Review
2.0 Mining
A review of mining practice is necessary to understand where dust originates in mining
operations, and the environmental factors affecting dust emissions. Since the focus of this
research is being conducted on surface mining operations, the review will be concentrated on the
topic of surface mining practice.
2.1 Current Surface Mining
Surface mining operations can be categorized into three different types of mining
methods. They are placer, open-pit, and strip mining. Placer mining is generally associated with
alluvial deposits, and mining can be accomplished through dredging techniques. Placer mining
is used to mine out streambeds and is used to mine metals such as gold and tin. It can also be
used to mine sand and gravel.
Open-pit mines are associated with pipes or tabular deposits that consist of a pit that
expands as it goes deeper into the earth(cid:146)s surface. Open-pit mining is used to mine metals, such
as copper and gold. Quarries are a type of open-pit mine, but the material mined is rock which is
used to make crushed rock.
Strip mining is usually associated with laminar deposits. It is a type of open-pit mining
that starts at one end of a property and mining advances through to the other side of the property.
Strip mining mines the entire property, whereas open-pit mining generally mines out one portion
of the property. An open-pit or cut that extends across the width of the property is created. This
cut generally starts at one side of the property and a new cut is made adjacent to the initial cut
after the mineral is mined out. The material from the new cut is placed in the mined out area of
the old cut. Thus mining proceeds along the length of the property cut by cut with the mined out
areas being backfilled. The only open-pit is the cut. Strip mining is used to mine coal, lignite,
gypsum, etc.
In addition to the differences in the surface mining types, there are significant differences
from one mine to another. Differences from mine to mine can be in the geology of the mine, the
type of material extracted, the locations of the mines, and the type of equipment used in
5
|
Virginia Tech
|
conducting the surface mining operation. Although these differences can be significant, the
environmental and health and safety impacts from the mining operations are very similar.
Dust from surface mining operations affects the workers at the operations, and since there
is no ability to contain the dust from surface mining operations, it can also expand to neighboring
properties. Effects on neighboring properties can include health and safety effects on people and
animals, damage to property through the deposition of dust, visibility issues, and the nuisance of
the deposition of dust.
2.1.1 Surface Mining Practice
The surface mining operation is conducted in stages through separate operations. These
stages or operations can be classified as
1 removal of topsoil,
2 drilling and blasting of overburden,
3 removal of overburden,
4 removal of material containing the mineral,
5 processing of material containing the mineral, and
6 reclamation of the mined out area.
A brief description of each operation is given in the following subsections. Figure 2.1 shows
some of the equipment that is used in these mining operations.
2.1.2 Removal of Topsoil
The first step at a mining operation is to remove the topsoil from the area that is to be
mined. The topsoil is removed and stored in a location away from the mine area to be used for
later reclamation of the mine area. Federal law requires the removal and storing of topsoil for
coal mining operations, and state law may require the removal and storage of topsoil at
metal/nonmetal mining operations. Topsoil generally consists of a clayey and/or silty material
and can be dug without the use of explosives to loosen the material. In some cases, bulldozers
with rippers may have to rip the soil to loosen it. The removal of topsoil is usually completed
through the use of scrapers or loader and trucks. A loader removes the topsoil by loading it into
trucks that haul it to a different location for storage. The topsoil is then dumped and shaped into
a large storage pile by bulldozers. The truck then returns to the loader after
6
|
Virginia Tech
|
dumping and the cycle repeats itself. Scrapers are generally preferable to loader and trucks,
because they have the ability to load-haul-dump. Sometimes dozers are used with scrapers at the
loading and dumping end in order to push, load, and shape the storage pile. However, these
actions are normally not required.
2.1.3 Drilling and Blasting of Overburden
Once the topsoil is removed the overburden, which is generally a waste material
overlying the mineral deposit, is exposed. The overburden normally consists of rock and must
be loosened or broken by drilling and blasting the material. This is done by creating a pattern of
blastholes. The blastholes are usually laid out in a square pattern. When a small number of
blastholes is used, the blastholes are laid out in a line. Blastholes are created in the overburden
using a drill. These holes are drilled to a depth from 1.5 (cid:150) 30.5 meters (m). The number of
blastholes in a blast pattern can vary from as little as four to as many as several hundred. The
number and depth of blastholes is dependent upon the type of surface mining method used and
the location of the mine.
Once all the blastholes are drilled, they are loaded with explosive. The explosive used is
generally an ammonium nitrate and fuel oil mix because it is less expensive to use in comparison
with other methods of breaking rock. If water is encountered during mining, then a more
powerful type of explosive may be used. After the blast pattern is loaded with explosives, the
area is cleared and the explosives are set off. A chemical reaction in the explosive occurs,
releasing a tremendous amount of energy, which in turn breaks the rock. Once the explosives are
set off, removal of the overburden can begin.
2.1.4 Removal of Overburden
Removal of overburden is usually completed with a loader and a fleet of trucks. A loader
removes the overburden by loading it into trucks that haul it to a waste dump for storage. The
overburden is then dumped and the material is spread out over the waste dump by bulldozers.
The truck then returns to the loader after dumping and the cycle is repeated.
In coal strip mines, the overburden is moved differently. Since the overburden from the
new cut is moved to the adjacent old cut, the material is required to be moved only a short
distance. Sometimes this move can be completed using a loader and a fleet of trucks, at other
times bulldozers or draglines are used.
8
|
Virginia Tech
|
Bulldozers are used to push the overburden from the new cut into the adjacent old cut,
thus exposing the coal deposit. A dragline is a machine that looks like a crane with a large
bucket connected to the end of its wire cables. The wire cables control the operation of the
bucket. The dragline sits on the broken overburden of the new cut, which has been smoothed
using bulldozers. The bucket is swung out into the overburden and is dragged through the
overburden material to fill the bucket using the cables. Then the bucket is lifted and the dragline
turns ninety degrees so the bucket is over the adjacent old cut. The bucket is then dumped, the
dragline turns back ninety degrees, the bucket is swung out into the overburden, and the cycle is
repeated.
2.1.5 Removal of Material Containing the Mineral
The material containing the mineral is called ore. Removal of the ore sometimes may
require drilling and blasting. If so, then the steps in the drilling and blasting section are repeated.
The ore is removed using a loader and a fleet of trucks. A loader removes the ore by loading it
into trucks that haul it to a processing plant where the ore is processed to its final stage and
ultimately sold to the public.
2.1.6 Processing of Material Containing the Mineral
Processing of the ore consists of extracting the final product from the rock. In most
cases, crushing and grinding of the ore is completed. Generally, the extent of the surface mining
operation is the removal of the material from the ground and the crushing and grinding of the
ore. Further processing may be required depending upon the type of material to be extracted, but
this would be considered part of the processing phase and not part of the mining phase.
2.1.7 Reclamation of the Mined Out Area
Once mining is completed and the mine site is mined out, reclamation begins.
Reclamation consists of making the mined out areas usable again. It differs for each of the
different surface mining methods. For open-pit mines, reclamation may or may not consist of
backfilling the pit. However, the surrounding areas, waste dumps, haul roads, etc. must be
cleaned up and revegetated. This process may require moving or reshaping the overburden piles
and any other disturbed areas. Normally loaders, trucks, scrapers, and bulldozers are used in the
reclamation operation.
9
|
Virginia Tech
|
For strip mining, reclamation generally occurs concurrently with mining, and the mined
out area must be put back to the approximate original contours. As mining advances, the
adjacent old cuts are mined out and made ready for reclamation. The old cuts are backfilled,
covered with topsoil, and revegetated. Again, loaders, trucks, scrapers, and bulldozers are used
in the reclamation operation.
2.2 Surface Mining Locations
Surface mining occurs in every state of the United States. The Office of Surface Mining (OSM),
which regulates surface coal mining in the United States, maintains that there are 2,526 surface
coal permits as of December 17, 2001; this translates to the number of mining locations in the
U.S. (OSM, U. S. Coal Production, 2002). Figure 2.2 shows where most of the surface coal
mining occurs by state. The Mine Safety and Health Administration (MSHA) maintains records
for the number of metal, non-metal, stone, and sand & gravel mines in the United States.
According to the MSHA database, updated in 2000, there are 229 surface metal mining
operations, 747 surface non-metal mining operations, 4,395 surface stone mining operations, and
8,394 sand & gravel operations in the United States (NIOSH, MSHA Data, 2002). Figures 2.3,
2.4, 2.5, and 2.6 show, in their respective order, the state-by-state concentration of the metal,
non-metal, stone, and sand & gravel mining operations in the United States. This data may
include operations that have been recently shutdown or temporarily closed. As can be seen from
the figures, surface coal mining is concentrated in the Appalachian and Western regions of the
United States with some surface coal mining occurring in the Midwest. The surface metal
mining is predominantly located in the western part of the United States. Surface non-metal
operations occur in every state with a few exceptions. Sand & gravel operations are all surface
operations, and they occur in every state. Surface stone mining also occurs in every state except
Delaware.
As of 1997 the U. S. Census Bureau states that there were 188,988 employees in mining
production, development, and exploration; 105,403 of these employees were associated with
surface mining operations (U. S. Census Bureau, Mining Subject Series, 2001). The MSHA
database for the year 2000 states that there were a total of 181,184 employees working at all
types of surface mining operations (NIOSH, MSHA Data, 2002). Both the U. S. Census Bureau
10
|
Virginia Tech
|
and the MSHA data exclude personnel categorized as office workers, but they include workers
categorized as mill or prep plant workers.
The common factor in surface mining is the use of mobile equipment to conduct the
mining operation. This mobile equipment can generate considerable amounts of dust, the effects
of which can be wide reaching. This dust has the potential to directly affect between 105,000 -
182,000 miners each year. However, in reviewing the extent of surface mining throughout the
United States, there is also the potential for dust to affect the general population outside of
mining operations, as most surface stone mining and sand & gravel operations are located in
relatively urban areas. The dust emissions from surface mine operations can have a negative
impact on the health and safety of the general public.
2.3 Dust
Dust can be found in sizes ranging from the sub-microns to more than 100 :m. Fog or
mists generally range in sizes between sub-micronic to 200 :m (Hinds, 2000). Dust tends to
create health problems in the respirable size ranges; 10 (cid:181)m or less. For comparison purposes, the
human hair is approximately 60 µm in diameter (California Air Resources Board, 2001).
Small dust particles are capable of being transported over long distances. As a result
modeling of facility emissions has been used to estimate the effects of the facility on the
surrounding area. Figure 2.7 shows the residence times of different size fractions of airborne
particles. The long residence times for the smaller size particles mean that these particles have
the potential to travel long distances, spreading their effects over a larger area. By contrast, the
larger size particles drop out quickly.
2.3.1 Definitions
There are two classifications of particulate matter (PM): primary PM and secondary PM.
Primary PM consists of material that is directly emitted into the air. Secondary PM is created by
chemical reactions occurring in the atmosphere to create particles (Seigneur, et. al., 1999).
Examples of primary PM are clay, soil, and silica. Examples of secondary PM are sulfate
compounds and nitrate compounds. PM consists mostly of primary PM. Secondary PM is
10
more of a concern when dealing with particulate matter less than 2.5 :m (PM ) (Seigneur, et.
2.5
al., 1999). This research will focus on respirable, thoracic or PM , and inhalable
10
16
|
Virginia Tech
|
categories consisting of primary PM. Secondary PM will not be considered as it is beyond the
scope of this research.
The American Conference of Governmental Industrial Hygienists (ACGIH) has
recommended standards for these categories of dust. However, these are not the only dust
categories with standards. In addition, the U. S. EPA has created standards for their own dust
categories: PM and PM . Table 2.1 shows the standards for the respirable, thoracic, and
10 2.5
inhalable dust categories recommended by ACGIH (Lippman, Chapter 5 Size-Selective Health,
1995). Also shown in Table 2.2 is the standard for PM as defined by the U. S. EPA (U. S.
10
GPO, Code of Federal Regulations, Title 40, Part 53, 2002). These standards show the percent
particulate mass for each aerodynamic diameter.
The particle sizes of 4.0 µm for respirable, 10 µm for thoracic, 100 µm for inhalable and
10 µm for PM are median sizes (D ). When comparing the thoracic and PM categories, both
10 50 10
categories are essentially the same because they have the same median size of 10 µm, but the
thoracic category contains some larger sized particles that the PM category does not
10
(Lippmann, Chapter 5 Size-Selective Health, 1995). Figure 2.8 shows that the ACGIH thoracic
standard may contain some particle sizes up to 25 µm, whereas the U. S. EPA(cid:146)s PM standard
10
will only contain particle sizes up to 15 µm.
The U. S. Atomic Energy Commission first created a respirable dust standard with a
median size of 3.5 µm. ACGIH adopted a modified version of this standard that also had a
median size of 3.5 µm. This standard was changed in 1993 in order to create an international
standard with a median size of 4.0 µm. This international standard is shown in Table 2.1
(Lippmann, Chapter 5 Size-Selective Health, 1995). However, the U. S. Department of Labor
adopted the earlier ACGIH modified version of the U. S. Atomic Energy(cid:146)s respirable dust
standard, and it currently applies to mining operations. Table 2.3 shows the respirable dust
standard that is applied to the mining industry (Lippman, Chapter 5 Size-Selective Health, 1995).
The dust standards used in this research will be the ACGIH(cid:146)s recommended respirable and
thoracic standard, and the U. S. EPA(cid:146)s PM standard.
10
18
|
Virginia Tech
|
2.3.2 Effects of PM
10
PM has adverse effects on humans and animals. In order to understand the effects of
10
dust, particularly silica and coal dust, on the human respiratory system, a review of the
respiratory system is given. To better understand the impact of dust on humans the three regions
of the respiratory system will be examined: the extrathoracic region, which consists of the nose,
mouth, pharynx, and larynx; the tracheobronchial region, which extends from the trachea to the
terminal bronchioles; and the alveolar region, which contains the lungs (Hinds, 1999). This third
region, the alveolar region, is where most of the impact from respiratory dust occurs. The
extrathoracic and the tracheobronchial regions contain layers of mucus that help expel
respiratory dust, but the alveolar region, where oxygen exchange takes place, does not have this
mucus layer (Hinds, 1999). Instead, the alveolar region contains scavenging cells called
microphages, which migrate to the respirable dust particles and surround and digest them,
particularly if the particles are organic. However, mineral dusts are insoluble; therefore, the
microphages cannot digest these particles. Instead they attempt to move these particles to the
tracheobronchial region for expulsion. This action may take from months to years for the
particles to be expelled. Silica and coal dusts interfere with the microphages removal attempts
and are not expelled but instead cause scarring of the lung tissue, also known as fibrosis
(Wagner, 1980). This scarring of the lung tissue from silica or coal dust is also known as
silicosis or pneumoconiosis.
If silica or coal dust is a component of PM , then the effects of exposure pose a very
10
serious heath concern. In the U.S., silicosis, caused by crystalline silica, causes more than 250
deaths annually (MSHA, Labor Department Renews Push, 1997). There are three levels of
silicosis: chronic silicosis, which occurs after ten years of exposure; accelerated silicosis, which
occurs between 5-10 years of exposure; and acute silicosis which occurs within a few weeks to
five years of high exposure to silica (U. S. Department of Labor, Preventing Silicosis, 1996).
Silicosis has no cure and is generally fatal. Miners are susceptible to silicosis, both when
working underground and when working on the surface.
A simplified definition of black lung is a chronic disease occurring in miners that
develops over a long time period and is generally fatal (NIOSH, Criteria for a Recommended
Standard, 1995). Black lung is caused when coal dust is the major component in the air that is
23
|
Virginia Tech
|
breathed and occurs primarily in miners who work underground. Employees who work coal
stockpiles are also susceptible to black lung. Approximately 2000 workers die each year from
black lung (NIOSH, NIOSH Facts Mine Safety and Health, 1996).
Many epidemiologic studies have been completed that show that PM , by itself, causes
10
harm to humans. It has been shown that a 50 microgram/cubic meter (:g/m3) increase in the 24-
hour average PM concentration was statistically significant in increasing mortality rates by 2.5
10
- 8.5 % (U. S. EPA, Air Quality Criteria for Particulate Matter, 1996). For hospitalization due to
chronic obstructive pulmonary disease, PM caused a statistically significant increase by 6 - 25
10
% with an increase of the 24-hour average PM concentration by 50 :g/m3 (U. S. EPA, Air
10
Quality Criteria for Particulate Matter, 1996). Other studies show that children are affected by
short-term PM exposure, and that increased chronic cough, chest illness, and bronchitis were
10
associated with a 50 :g/m3 increase in the 24-hour average PM concentrations (U. S. EPA, Air
10
Quality Criteria for Particulate Matter, 1996). Long-term effects from PM are dependent upon
10
the exposure to PM over the life of the worker.
10
There are other adverse results from PM exposure in addition to the health effects.
10
PM affects visibility in the air and has also been thought to contribute to climate change. It is
10
known that small particles in the air hinder visibility, as the small particles scatter and absorb
light as it travels to the observer from an object. This action results in extraneous light from
sources other than the observed object being detected by the observer, thus impairing visibility
(U. S. EPA, Air Quality Criteria for Particulate Matter, 1996). Climate change may also occur,
because the small particles in the atmosphere absorb and reflect the radiation from the sun,
affecting the cloud physics in the atmosphere (U. S. EPA, Air Quality Criteria for Particulate
Matter, 1996). PM may also have an effect on materials such as paint, wood, metals, etc. The
10
effects are dependent upon the amount of PM in the atmosphere, the deposition of the PM on
10 10
the material, and the elemental composition of the PM (U. S. EPA, Air Quality Criteria for
10
Particulate Matter, 1996).
2.3.3 Regulations Pertaining to PM
10
There are two legislative acts which regulate the air quality from mining operations.
They are the Federal Coal Mine Health and Safety Act of 1969 which was amended by the
Federal Mine Safety and Health Act of 1977 (NIOSH, Criteria for a Recommended Standard,
24
|
Virginia Tech
|
1995), and the Clean Air Act of 1970 which was amended in 1977 and 1990 (Schnelle and Dey,
2000). The Federal Mine Safety and Health Act of 1977 regulates the amount of dust allowable
in air for health and safety purposes. The Clean Air Amendment of 1990 (CAA) regulates air
quality from facilities from an environmental perspective.
2.3.3.1 Health and Safety Regulations
The Federal Mine Safety and Health Act of 1977 was responsible for creating MSHA, the
agency which enforces safety regulations for mining operations. At that time a limit of 2.0
milligrams per cubic meter (mg/m3) for respirable dust for coal mining operations was enacted
(NIOSH, Criteria for a Recommended Standard, 1995). If more than 5% quartz or silica is found
in the respirable dust then the limit is determined by using the following formula (U. S. GPO,
Code of Federal Regulations, Title 30, Part 71, 2002):
10
Φ = (2.1)
%Quartz
where
M = Respirable dust limit in mg/m3, where 0 ≤ Φ ≤ 2.0
%Quartz = Percent Quartz or Silica found in dust as a fraction.
The American Conference of Governmental Industrial Hygenists also recommends this limit for
respirable dust. There are also recommended limits set for dusts containing other toxic
substances, such as lead, mercury, and arsenic. (Hartman, et. al., 1982).
2.3.3.2 Environmental Regulations
The CAA regulates emissions from any facility into the air and is addresses toxic
substances. It also creates the national ambient air quality standards (NAAQS) for the criteria
pollutants, CO, NO , SO , VOC, Pb, and PM (Schnelle and Dey, 2000). NAAQS has been in
x x 10
effect for PM since before 1987 (Watson, et. al., 1997). Facilities are not allowed to emit
10
levels of PM pollutants above the following standards:
10
(cid:147)Twenty-four hour average PM not to exceed 150 :g/m3 for a three year
10
average of annual 99th percentiles at any monitoring site in a monitoring area.
25
|
Virginia Tech
|
In Virginia, if a facility emits more than 227 metric tons of PM per year, then the
10
facility is considered to adversely affect the region and must meet more stringent ambient
permitting requirements (VA DEQ, Business and Industry Guide, 1996). Modeling of the
emissions from the facility will be required in order to obtain a permit (VA DEQ, Air Permitting
Guidelines, 1996). The requirements for modeling emissions vary from state to state. For
example, the state of Georgia has requirements that any facility that emits more than 91 metric
tons per year of a pollutant becomes a Title V facility. Title V pertains to regulations of
emissions of toxic pollutants from a facility. Once a facility is designated as Title V, it is
regulated under the strictest regulations, which include modeling of emissions (GA DNR, 1994).
Therefore, modeling may be an important part of obtaining an air quality permit, depending upon
the amount of PM emitted by the facility.
10
2.4 Dust Propagation Models
The results from modeling the emissions of a facility are used to ensure that the regional
air quality does not exceed the NAAQS or deteriorate the air quality further (Schnelle and Dey,
2000). If the modeling results show the facility will not cause the regional air quality to exceed
the NAAQS nor deteriorate the air quality, then the air quality permit will be granted. Otherwise
the air quality permit application will be denied. Therefore, it is important that the modeling
method accurately estimates the amount of pollutant a facility will emit and accurately estimates
the pollutant(cid:146)s dispersion. The use of a modeling method that over-estimates the amount of
pollutant emitted from the facility may result in denial of air quality permits.
2.4.1 Mathematical Algorithms
Modeling of pollutants is completed using mathematical algorithms. There are several
basic mathematical algorithms in use. They are the box model, the Gaussian model, the Eulerian
model, and the Lagrangian model (Collett and Oduyemi, 1997). The box model is the simplest
of the modeling algorithms. It assumes the airshed is in the shape of a box. The air inside the
box is assumed to have a homogeneous concentration. The box model is represented using the
following equation:
dCV
=QA+uC WH −uCWH (2.2)
dt in
28
|
Virginia Tech
|
where
Q = pollutant emission rate per unit area.
C = homogeneous species concentration within the airshed.
V = volume described by box.
C = species concentration entering the airshed.
in
A = horizontal area of the box (L*W).
L = length the box.
W = width of the box.
u = wind speed normal to the box.
H = mixing height.
This model has limitations. It assumes the pollutant is homogeneous across the airshed and it is
used to estimate average pollutant concentrations over a very large area. This mathematical
model is very limited in its ability to predict dispersion of the pollutant over an airshed because
of its inability to use spatial information (Collett and Oduyemi, 1997).
The Gaussian models are the most common mathematical models used for air dispersion.
They are based upon the assumption that the pollutant will disperse according to a (cid:147)normal(cid:148)
distribution. The Gaussian Equation generally used for point source emissions is given as
follows:
χ=
2πu
sQ
σ yσ
z
exp
−0.5 σy
y
2
exp
−0.5 σH
z
2
(2.3)
where
P = hourly concentration at downwind distance x.
Q = pollutant emission rate.
u = Mean wind speed at release height.
s
F, F = Standard deviation of lateral and vertical concentration distribution.
y z
y = crosswind distance from source to receptor.
H = Stack height or emission source height.
The terms F and F are the standard deviations of the horizontal and vertical Gaussian
y z
distributions that are used to represent the plume of the pollutant. These coefficients are based
29
|
Virginia Tech
|
upon the atmospheric stability coefficients created by Pasquil and Gifford, and they generally
become larger as the distance downwind from the source becomes greater. Larger standard
deviations mean the Gaussian curve or plume has a low peak and has a wide spread; smaller
standard deviations mean the Gaussian curve or plume has a high peak and has a narrow spread
(Oduyemi, 1994).
When using this equation for calculation of pollutant dispersion, there are some
assumptions that must be made in order for the equation to be valid. They are 1) the emissions
must be constant and uniform, 2) the wind direction and speed are constant, 3) downwind
diffusion is negligible compared to vertical and crosswind diffusion, 4) the terrain is relatively
flat, i.e., no crosswind barriers, 5) there is no deposition or absorption of the pollutant, 6) the
vertical and crosswind diffusion of the pollutant follow a Gaussian distribution, 7) the shape of
the plume can be represented by an expanding cone, and 8) the use of the vertical and horizontal
standard deviations, F , and F require that the turbulence of the plume to be homogeneous
y z
throughout the entire plume (Beychok, 1994). It can be seen that several of the assumptions are
not met when applying this equation for PM to surface mining operations, especially to haul
10
trucks. The emissions are not constant and uniform and there is deposition of the pollutant.
Downwind diffusion is not negligible compared to vertical and crosswind diffusion, because
downwind diffusion may occur due to the deposition of the dust.
The accuracy of this model to predict pollutant concentrations has been documented to be
within 20% for ground level emissions at distances less than one kilometer. For elevated
emissions the accuracy is within 40%. At distances greater than a kilometer the equation is
estimated to be accurate within a factor of two. The Gaussian model also has the limitation that
it cannot be used for sub-hourly prediction of concentrations (Collett and Oduyemi, 1997).
Eulerian models solve a conservation of mass equation for a given pollutant. The
equation generally follows the form (Collett and Oduyemi, 1997):
∂ c
i =−U⋅∇ c −∇⋅ c’U’ +D∇2 c + S (2.4)
∂t i i i i
where
U =U +U′
U = wind field vector U(x,y,z).
30
|
Virginia Tech
|
U = average wind field vector.
′
U = fluctuating wind field vector.
c= c +c′
c = pollutant concentration.
c = average pollutant concentration, denotes average.
′
c = fluctuating pollutant concentration.
D = molecular diffusivity.
S = source term.
i
The term with molecular diffusivity is neglected as the magnitude of this term is significantly
small. The turbulent diffusion term ∇⋅ c’U’ is modeled where the rate of diffusion is assumed
i
to be constant. It is modeled as c’U’ =−K∇ c , where K is an eddy diffusivity tensor. This
i i
tensor is simplified so that diffusivity transport is along the turbulent eddy vector making the
eddy diffusivity tensor diagonal and the cross vector diffusivities negligible, i.e.,
K 0 0
xx
K = 0 K 0 where K = K = K with K being horizontal diffusivity (Collett and
yy xx yy H H
0 0 K
zz
Oduyemi, 1997).
Equation (2.4) can be difficult to solve because the advection term −U ⋅∇ c is
i
hyperbolic, the turbulent diffusion term is parabolic, and the source term is generally defined by
a set of differential equations. This type of equation can be computationally expensive to solve
and requires some form of optimization in order to reduce the solution time required. Solutions
have been achieved by reducing the problem to one and two dimensions rather than using three
dimensions. However, no statement of the accuracy of the solutions of this model is made
(Collett and Oduyemi, 1997).
Lagrangian models predict pollutant dispersion based upon a shifting reference grid.
This shifting reference grid is generally based upon the prevailing wind direction, or vector, or
the general direction of the dust plume movement. The Lagrangian model has the following
form:
31
|
Virginia Tech
|
c( r,t) = ∫t ∫ p( r,tr’,t’) S( r’,t’) dr’dt’ (2.5)
−∞
where
( )
c r,t = average pollutant concentration at location r at time t
( )
S r’,t’ = source emission term
( )
p r,tr’,t’ = the probability function that an air parcel is moving from r(cid:146) at t(cid:146)
(source) to location r at time t
The probability function works as it is shown for sources consisting of gases; if the source of
emissions consists of particles, then more information must be incorporated into the function
such as the particle size distribution and the particle density (Collett and Oduyemi, 1997).
This mathematical model has difficulties in comparing its results with actual
measurements. This is due to the dynamic nature of the model. Measurements are generally
made at stationary points, while the model predicts pollutant concentration based upon a moving
reference grid. This makes it difficult to validate the model during initial use. To compensate
for this problem, the Lagrangian models are typically modified by adding an Eulerian reference
grid. This allows for better comparision to actual measurements, because it incorporates a static
reference grid into the model (Collett and Oduyemi, 1997).
These four mathematical models are the basic models used for air dispersion modeling.
There are many variations based upon these equations. Some variations add statistical functions
to represent the randomness of wind direction, wind speed, and turbulence. Other variations
include the introduction of site specific source terms. Because of the increased speed of
computational ability via personal computers, the model variations have become more complex.
This has resulted in the creation of a vast number of computer models for air dispersion.
2.4.2 Existing Industrial Computer Models
There have been many computer models created to predict pollutant dispersion from
industrial facilities. The following is a list of models that are accepted for use by the U. S. EPA
and a short summary of the purpose of each model is given (U. S. GPO, Code of Federal
Regulations, Title 40, Part 51, 2002):
32
|
Virginia Tech
|
The BLP model is used to estimate pollutant concentrations specifically for
aluminum reduction plants.
The Caline3 model will estimate pollutant concentrations from highways.
The CDM 2.0 model calculates pollutant concentrations for long-term averaging
times in urban areas.
The RAM model calculates pollutant concentrations for short-term averaging
times.
The UAM model is specifically used for estimating concentrations of ozone (O ),
3
CO, NO , and VOC for short term conditions.
x
The OCD model estimates pollutant dispersion from offshore or coastal sources.
The EDMS calculates pollutant concentrations from military or civilian airports.
The CTDMPLUS model estimates pollutant concentrations from sources in stable
and unstable weather conditions.
These models have specific applications and most are not applicable for mining facilities. In
addition, there are a myriad of other models available for use that do not have the acceptance of
the U. S. EPA.
One such model is TRACK, which is a long-range transport model used to study
atmospheric acid deposition. It uses a Lagrangian dispersion model. A study by Lee, Kingdon,
Pacyna, Bouwman, and Tegen used it to study the transport and deposition of Calcium in the
United Kingdom. The main conclusion of the study was that large sources such as cement
plants, iron and steel production and power generation contributed only a small amount of
calcium into the atmosphere. Most emissions of calcium occurred from small power generation
sources that did not have emissions controls. However, one source, agricultural soil emissions,
was not quantified and may produce up to 66% of the Calcium emissions. The study stated that
there are uncertainties in the deposition of calcium due to the many parameters involved in
calculating deposition, and that the deposition of Calcium offsets approximately 7% of the acid
deposition resulting from Sulfur in the atmosphere (Lee, et. al., 1999).
APEX is a model that calculates dispersion of pollutants from explosions. It uses an
Eulerian model to calculate the pollutant concentrations. The model analyzes the effect of the
upward convective flow, after the explosion, on the dispersion of the pollutant. A paper by
33
|
Virginia Tech
|
Makhviladze, Roberts, and Yakush presents the results of modeling a large scale (nuclear)
explosion and the results of a small-scale explosion. The modeling results of both explosions
demonstrated that the larger particles are more likely to settle out and less likely to be injected
into the upper reaches of the atmosphere than are the smaller particles (Makhviladze, et. al.,
1995).
The three dimensional GISS tracer transport model is a global dust model created by the
Goddard Institute for Space Studies. Tegen and Fung present a paper that describes the use of
this model to simulate seasonal variations of dust over the entire world. The inputs of this model
are dust sources from undisturbed areas, such as deserts and grasslands. Disturbed areas or
manmade sources were not included. The model uses deposition or gravitational velocities of
the particle sizes to predict the concentrations around the world. Many areas where actual
measurements were taken were found to be in agreement with the model results. There were
many areas, such as the Sahara Desert and the Australian Desert, where the model failed to
reproduce the seasonal variations of the dust plume. It was thought that these areas had more
manmade disturbance than was estimated which caused the failure of the model. The
concentrations of mineral dust in the atmosphere predicted by this model ranged from 1 to 25
(cid:181)g/m3 with some areas reaching 60 (cid:181)g/m3. Many of these concentrations correlated well with
actual measured concentrations. The higher concentrations generally occurred during the
summer months. The particle sizes were then divided into two categories, one being 1 - 10 (cid:181)m
and the other being 10 - 25 (cid:181)m. Again, as in all cases the smaller particles were able to stay in
the atmosphere longer than the larger particles (Tegen and Fung, 1994). Since this model only
uses sources from undisturbed areas, it shows that there is a significant amount of dust in the
atmosphere, before any contributions from manmade emissions sources. The size range of the
concentrations of this dust in the atmosphere is 1 - 25 (cid:181)m and no statistics of the quantity of
PM was provided.
10
There are many traffic models created to predict pollutant concentrations from vehicles.
Many predict concentrations of chemical pollutants from the vehicle(cid:146)s exhaust. One such model
is the SLAQ, Street Level Air Quality. This model was presented in a paper by Micallef and
Colls and can be used for city streets. While this model can predict the amount of PM from
10
both the vehicle(cid:146)s emission and the road surface, the main emphasis is on tailpipe emissions. It
34
|
Virginia Tech
|
uses a Gaussian plume model. Since traffic does not produce constant emissions due to varying
operation of vehicles (acceleration, deceleration, idle, uniform speed), a traffic model was used
to estimate the time a vehicle was in different operation modes. The paper did not include how
the frequency of vehicles was managed, but a frequency of 600 vehicles per hour was used. The
model was tested and performed well, having a correlation coefficient of 0.8 for the conditions
the modeled. It was observed during the study that the tailpipe emissions mass median diameter
was in the range of 0.14 - 0.25 (cid:181)m and did not dominate the mass distribution of vehicle-derived
airborne particulate matter. It was the other emissions, such as road dust and brake dust, that
dominated the mass distribution. (Micallef and Colls, 1999).
There are many other studies that have evaluated the Gaussian dispersion equation.
Goyal, Singh and Gulati conducted a study using two different Gaussian dispersion equations to
predict total dust concentrations from cement facilities in India. The dispersion was calculated
from the industrial stacks located at these facilities. Two models were evaluated: the ISI model
and the IITST model. This study modified the equations in order to use the meteorological data
in a manner that is specific to the climate of India. The study stated that the results showed
satisfactory comparison to actual observed values at the cement facilities, with the IITST model
being the better predictor of the two models. When reviewing the data presented in the study,
the IITST model comparison results generally over-predicted the actual results by a factor of 1.6
(Goyal, et. al., 1996).
In Canada, the Gaussian dispersion equation was used to predict the dispersion of
radioactive uranium from a Canadian uranium processing facility. This model also predicted
dispersion from the facility(cid:146)s industrial stacks. The study, completed by Ahier and Tracy, found
that the model(cid:146)s predicted concentrations were within a factor of 2 - 3 of the actual
concentrations. Overall, the study stated the Gaussian dispersion model provided reasonable
results compared to actual observations for Uranium concentrations. In reviewing the predicted
versus observed data presented in the study, the model would over-predict more often than it
would under-predict. However, it was difficult to pick out the trend because the over-prediction
and the under-prediction occurred almost equally (Ahier and Tracy, 1997).
These are just a few of the computer models available for predicting pollutant
concentrations. Many more models could be discussed, but only a small sample was chosen for
35
|
Virginia Tech
|
review in this section. A review of all the models listed show that they all have one thing in
common: they use one of the four mathematical models previously presented. In addition, the
review of the Gaussian models show that they consistently over-predict actual dust
concentrations.
2.4.3 Mine Specific Models
Air dispersion modeling has not bypassed the mining industry. There have been some
mine specific models created, most having been created specifically for underground mines. The
surface mines have generally adapted an existing industrial model for their use.
A report by Hwang, Singer, and Hartz discussed several models for predicting dust
dispersion in an underground entry by a turbulent gas stream, in other words (cid:147)the prediction of
dust dispersion after an explosion.(cid:148) The basic diffusion equation used, was defined:
∂c ∂c ∂2c ∂2c ∂2c
+U =k + + (2.6)
∂t ∂z ∂x2 ∂y2 ∂z2
where
c = dust concentration.
U = convection velocity.
k = diffusion coefficient.
x,y,z = directions of coordinate grid.
t = time.
This report derived from Equation (2.6) mathematical interpretations of the modeling process for
four different types of sources: point source, line source, moving line source, and flat plane. The
resulting modeling equations for each type of source are rather lengthy and not fully explained.
For example, the equation for an instantaneous point source in the plane z = z at the point (x ,
1 1
y ) emitted at time t = t is given as
1 1
−{( z−z )−U( t−t )}
Qe 4k1 ( t−t 1) 1 ∑∞ −km2π2( t−t 1) mπx mπx
c= 1+2 e a2 ⋅cos cos 1⋅
{ ( )}
2ab πk t−t
1
1 2 m=1 a a
(2.7)
∑∞ −kn2π2( t−t 1) nπy nπy
1+2 e b2 ⋅cos cos 1
a a
n=1
36
|
Virginia Tech
|
where
c = dust concentration.
U = convection velocity.
k = diffusion coefficient.
x,y,z = directions of coordinate grid.
t = time.
a = entry opening height.
b = entry opening width.
n = distance in a direction normal to the boundary or walls of the opening
(assumed to be for the y direction).
m = undefined, but assumed to be similar to n except for the x direction.
Q = point source emission strength.
The uncertainty is in the variables n and m. Explanations for these variables were insufficient to
fully understand what the authors meant by these variables. Results of calculations were
completed, but no comparisons of calculated results to actual results were given as it was stated
that there were no measured observations available (Hwang, et. al., 1974).
Courtney, Kost, and Colinet completed a study that defined dust deposition in
underground coal mine airways. The main emphasis of this study was to determine an optimum
schedule for rock dusting entries in an underground coal mine by using an airborne particle
deposition model. Testing was completed at eight locations in five U. S. underground coal
mines. The deposition model in this study was based upon a model created by Dawes and Slack
in 1954. Their model was based upon the deposition of coal dust in a small laboratory wind
tunnel. The resulting model is defined:
∂m −Kx
= Kc= Kc exp (2.8)
∂t 0 vH
where
∂m
= dust deposition rate.
∂t
K = rate constant, taken as Stoke(cid:146)s sedimentation velocity.
K =kD2 (2.9)
37
|
Virginia Tech
|
where
D = particle diameter.
k = Stoke(cid:146)s sedimentation constant.
( )
ρ−σ g
k = (2.10)
18η
where
ρ = particle density.
g = acceleration of gravity.
σ = density of air.
η = viscosity of air.
x = distance of deposit from the dust source.
c , c = airborne dust concentration of particles of diameter D at the dust
0
source and at x, respectively.
v = air velocity.
H = height of airway.
This model was found to have satisfactory results for particles with diameters less than 40 (cid:181)m,
and the exponential decay with distance agreed with their experimental results (Courtney, Kost,
and Colinet, 1982). Courtney, Kost, and Colinet(cid:146)s study also stated that Bradshaw and Godbert
completed a study of the deposition rate of dust in the return airway of underground coal mines.
The results of this study showed an exponential decay rate, but the first 23 m from the source
was found to have 2 to 4 times more dust deposition than was calculated (Courtney, Kost, and
Colinet, 1982). Ontin was stated to have completed studies on dust deposition in underground
coal mines. Ontin found that the deposition rate also decayed exponentially, and that 50% of the
airborne dust settled out within 1.8 m of the source (Courtney, Kost, and Colinet, 1982).
Therefore, experimental testing was demonstrating that Equation (2.8) may be under-predicting
the deposition rate of dust at distances close to sources. Through testing, Courtney, Kost, and
Colinet found that the deposition rate in pounds per square foot per hour was independent of the
airborne particle size, but increased with increasing total airborne dust concentration. Their
recommended deposition model was presented:
38
|
Virginia Tech
|
∂m K V { }
= 1 c exp − Ax (2.11)
∂t S 0
where
∂m
= dust deposition rate.
∂t
K = a proportionality constant, found to be 15.6 in this study.
1
A = K /v
1
x = distance along the airway.
c = initial dust concentration.
0
v = air velocity.
V/S = volume/surface area of the airway.
This equation was stated to be correct if the airflow is turbulent in the airway and not laminar,
and if the rate of deposition is exponential with distance (Courtney, Kost, and Colinet, 1982).
The result of the study found that this model could be used for determining an optimum rock
dusting schedule for an underground coal mine, but that further testing should be completed at
many other mine sites because of the variability from one mine location to another (Courtney,
Kost, and Colinet, 1982).
Courtney, Cheng, and Divers completed a study for underground coal mines in 1986
titled (cid:147)Deposition of Respirable Coal Dust in an Airway.(cid:148) The study stated that the (cid:147)rate of
decrease of the airborne concentration must be equal to the deposition of the airborne particles
onto the surfaces of the airway.(cid:148)5 This was represented by the following equation:
∂c ∂m
−vA = L (2.12)
∂x ∂t
where
v = air velocity
A = cross-sectional area of airway.
c = local dust concentration.
x = distance along airway.
5 Courtney, Welby G.; Cheng, Lung; and Divers, Edward F.; (cid:147)Deposition of Respirable Coal
Dust in an Airway.(cid:148) U. S. Bureau of Mines Report of Investigation 9041. (U. S. Department
of the Interior, 1986) 3.
39
|
Virginia Tech
|
∂m
= rate of dust deposition per unit area along airway.
∂t
L = deposition surface across airway.
L = perimeter if dust deposits on roof, walls and floor.
L = width of airway if dust deposits only on floor.
If the rate of dust deposition was dependent upon local dust concentration as stated in the study
completed by Courtney, Kost, and Colinet, then Equation (2.12) could then be represented as
∂c
− Av = Lkc (2.13)
∂x
where
The terms are the same as given in equation (2.12).
k = dust deposition rate constant.
c −L
=exp kx (2.14)
c AV
0
where
c = dust concentration at the source.
0
c = dust concentration at a distance x from the source.
Experiments to test the deposition of dust with varying air velocities and relative
humidities were conducted at an underground limestone mine. It was thought that deposition
might depend upon Stokes(cid:146) sedimentation velocity. But it was found that the deposition was
dependent upon air velocity and that large and small particles deposited at similar rates along the
first 91 meters distance from the source in the airway. The larger particles had fully deposited by
152 meters distance from the source. The rough surface of the walls of the limestone mine were
thought to effect the deposition of smaller particles by trapping the larger particles. The
dependence of particle deposition on air velocity in the airway implied a change in the airborne
particle size distribution, which remained to be explained (Courtney, Cheng, and Divers, 1986).
The results of the study demonstrated that the median particle sizes were higher at the
floor of the airway (6.5 (cid:181)m) than at the roof of the airway (4.7 (cid:181)m) at 30.5 meters away from the
source. At distances 152 - 213 meters away from the source, the median diameters were closer
together (4.9 (cid:150) 4.5 (cid:181)m). Respirable dust deposition rate was shown to decrease as a function of
distance from the source. At low air velocities, the deposition rates were linear. At higher air
40
|
Virginia Tech
|
velocities, the deposition rates decreased as the distance from the source became greater.
Relative humidity was found to have a negligible effect on the dust deposition rate (Courtney,
Cheng, and Divers, 1986).
Ratios of deposition rates of dust onto the floor, walls, and roof of the airways were also
presented. These deposition rates were dependent upon particle size, and the floor deposition
rate was greater than the roof and wall deposition rate. The ratios were established by studies
conducted by Pereles and Owen (Courtney, Cheng, and Divers, 1986).
Bhaskar and Ramani wrote a series of papers that describe a modeling method for the
deposition of respirable dust in an underground coal mine. This series of papers is related to
Ragula Bhaskar(cid:146)s doctor of philosophy dissertation titled (cid:147)Spatial and Temporal Behavior of
Dust in Mines (cid:150)Theoretical and Experimental Studies,(cid:148) completed at Penn State University in
1987. The mathematical model presented was defined:
∂c ∂2c ∂c
= E −U +sources−sinks (2.15)
∂t x∂x2 ∂x
where
c = concentration of airborne dust.
t = time.
E = dispersion coefficient.
x
x = distance from source.
U = velocity of airflow.
The source term represents dust generated by cutting mechanisms in the underground
mine and the sink term refers to the deposition of the dust on the floor, walls, and roof of the
airway (Bhaskar, Dust Flows in Mine Airways, 1989). This mathematical model is applied to all
the particle size intervals that are represented in a dust cloud generated from a mining operation.
Results of comparison of the model to experiments conducted in an underground airway
showed that the model predicted deposition of the dust in airways satisfactorily. The model
tended to predict better at lower airway air velocities than with higher air velocities. Also, total
dust size was better predicted than the respirable dust size (Bhaskar, 1987).
Detailed explanations for the processes used in creating this model are given in Ramani
and Bhaskar(cid:146)s (cid:147)Dust Transport in Mine Airways.(cid:148) The processes considered are particle
41
|
Virginia Tech
|
deposition, deposition by convective diffusion, deposition due to gravity, coagulation, collision
mechanisms, and re-entrainment (Ramani, Dust Transport in Mine Airways, 1984). Particle
deposition is related to mass transfer of a particle to the immediate adjacent surface; this
represents deposition onto the roof and walls of the airway, and is represented by Brownian
diffusion, eddy diffusion, or sedimentation. Deposition by convective diffusion refers to
deposition caused by eddies in turbulent flow and represents deposition onto the walls.
Deposition due to gravity uses the particle(cid:146)s gravitational velocity to determine the deposition of
the particle onto the floor of the airway. Coagulation and collision mechanisms are related and
are based upon the interaction of the particles with one another. These two processes are
important in determining the airborne particle size distribution, and therefore, important in
determining the amount of dust deposited onto the airway surfaces. They take into account
forces such as electrostatic charge, Van der Waals forces, and the nature of the colliding
particle(cid:146)s surfaces. Re-entrainment evaluates the amount of dust that is generated from dust that
has already been deposited. Dust may be re-entrained due to the shear forces from the velocity
of air in the airway exceeding the cohesive force of the particle on the surface. This process is
dependent upon the air velocity in the airway (Ramani, Dust Transport in Mine Airways, 1984).
Xu and Bhaskar wrote a paper in 1989 which determined the turbulent deposition
velocities for coal dust in an underground mine airway. This study showed that the turbulent
deposition was independent of particle size but dependent upon particle density as air velocity
increased. It was stated that particle properties and air velocities may influence gravitational
velocities more than turbulent deposition velocities (Xu and Bhaskar, 1995).
Very few models have been created for surface mining operations. Cole and Fabrick
discuss pit retention of dust from surface mining operations. They discuss a study completed by
Shearer that states that approximately one third of the emissions from mining activities escapes
the open-pit. Further discussions are completed on a proprietary model by Winges, which
calculates the mass fraction of dust that escapes an open-pit. This mathematical model is given
(Cole and Fabrick, 1984):
1
ε= (2.16)
V
1+ d H
K
z
42
|
Virginia Tech
|
where
ε = mass fraction of dust that escapes an open-pit.
V = particle deposition velocity.
d
K = vertical diffusivity.
z
H = pit depth.
Fabrick also created an open-pit retention model based upon wind velocity at the top of the pit.
This model is given (Cole and Fabrick, 1984):
C1 w
ε=1−V +ln (2.17)
du 2 4
where
ε = mass fraction of dust that escapes an open-pit.
V = particle deposition velocity.
d
u = wind velocity at the top of the pit.
C = empirical dimensionless constant equal to 7.
w = pit width.
The deposition velocity in both models was based on a gravitational settling velocity determined
by Stoke(cid:146)s law. A comparison was completed using both models and the results agreed well
with each other and the study by Shearer that stated one third of the emissions from mining
activities escape the open-pit.
Several open-pit dust models are discussed in a study on (cid:147)Dispersion of Airborne
Particulates in Surface Coal Mines,(cid:148) completed for the U. S. EPA by TRC Environmental
Consultants. These include the models previously discussed by Cole and Fabrick. Another
model created by Herwehe in 1984 is described. This model is a computer simulation using
finite-element analysis. It takes into account many factors such as wind conditions, surface
roughness, complex terrain, atmospheric stability, pollutant sources, particulate terminal settling
and deposition velocities, and surface particulate accumulation (TRC Environmental, 1985).
However, it was stated that this model may not give good results for open-pits with pit angles
greater than 35 degrees from the horizontal or in stable atmospheres. This model also has not
been tested with field results (TRC Environmental, 1985). Another model, the FEM (3
Dimensional Galerkin Finite Element Model), which was not created specifically for the mining
43
|
Virginia Tech
|
industry, was mentioned as one that could be modified for use in predicting dispersion of dust
from open-pits. Its drawback was that it required a very large computer to run the model. This
model has also not been tested with field data.
Modeling of dust dispersion for specific mining operations has been completed for the
blasting phase. At the Kalgoorlie Consolidated Gold Mines Pty Ltd, a computer program was
created to determine dust dispersion from blasting operations. This program uses meteorology,
bench height, blast design information, and rock density to predict the behavior of dust from
blasting. It accounts for some absorption of the dust on the pitwalls and for some reflection of
the dust off the pitwalls. The dust concentrations are calculated using settling velocities for
different particle sizes and densities. The program is used to determine if blasting will have an
impact on a nearby town (Wei, et. al., 1999). Another model for predicting dust dispersion from
blasting operations has been created by Kumar and Bhandari. This model uses a gradient
transport theory or an Eulerian approach. This model considers atmospheric stability and wind
velocity and direction for computing dust concentrations at different distances from the blast
(Kumar and Bhandari, 2002). No mention of any field validation has been presented in either of
these two articles.
Pereira, Soares, and Branquinho used a Gaussian dispersion equation to predict dust
concentrations from the stockpiles of an operating surface mine in Portugal. This equation was
used to create risk maps of air quality for locations surrounding the mine site. It was mentioned
that these risk maps should be viewed as extreme risk maps. No experimental validation was
performed to determine the accuracy of these maps to actual conditions (Pereira, et. al., 1997).
Very few mine specific models have been created for surface mining operations, but it
can be seen that a great deal of research has been conducted on modeling dust deposition for
underground mining operations. While this research is not directly applicable to surface mining
operations, it is a good basis for characterizing the prediction of dust concentrations and the
deposition of dust, as underground mining openings are a controlled environment. This
controlled environment facilitates prediction of concentration and deposition, because the
variability due to wind speed and wind direction can be controlled. Dust dispersion modeling for
surface mining operations is generally completed using an established model. The model used
for surface mining operations is generally the ISC3 model created by the U. S. EPA.
44
|
Virginia Tech
|
a format readable by the ISC3 program. The data are then read into the ISC3 model for use in
Equation (2.18).
Once all the data are entered, Equation (2.18) calculates the PM concentration P at the
10
coordinates of the receptor. Generally, there is more than one receptor and they are aligned in a
grid format. PM concentrations P are calculated for each receptor point in the grid, with the
10
emission source being stationary. These calculations are completed for all the hourly
meteorological data. The hourly results are then averaged, either for a 24-hour period or a yearly
period, and input into the receptor grid. This resulting grid of PM concentrations allows for the
10
creation of contour maps of the dispersion modeling results, where the contours represent the
concentrations of PM . The User(cid:146)s guide, Volume I and Volume II written by the U. S. EPA,
10
explain in greater detail the procedure for the operation of the program.
There have been very few studies completed to determine the accuracy of the ability of
the ISC3 model to predict PM dispersion from surface mining operations. The U. S. EPA
10
completed a large-scale study at a surface coal mine in Wyoming in 1994 - 1995. This study,
issued in three volumes, reviewed the entire mining operation for dust dispersion. The emissions
factors from the U. S. EPA(cid:146)s AP-42 were used to determine the amount of emissions from the
operation. These emissions were then input into the ISC3 model and to complete dispersion
modeling. Field testing, to validate the ISC3 model, was completed by placing six PM
10
sampling stations throughout the surface mining operation. The sampling equipment used at
each station was the Wedding PM Reference Sampler. These six stations were used in
10
addition to three existing PM sampling stations that were located at the mine site to fulfill air
10
quality permitting requirements (U. S. EPA, Modeling Fugitive Dust Phase I, 1994). The
sampling stations were placed on both the upwind and downwind side of major excavating
operations. Weather data were recorded throughout the duration of the test, and time studies of
equipment operation were completed. The testing occurred over a time interval of two months,
with air sampling occurring every other day (U. S. EPA, Modeling Fugitive Dust Phase I, 1994).
The modeling results of the operations were compared to the actual measurements from the
sampling network.
The study documents that there is a significant over-prediction of PM emissions from
10
the surface coal mining operation by the ISC3 model (U. S. EPA, Modeling Fugitive Dust Phase
46
|
Virginia Tech
|
III, 1995). This report has a statistical protocol that defines significant over-prediction as an
over-prediction of more than a factor of two at a single site where modeled vs. measured results
are compared (U. S. EPA, Modeling Fugitive Dust Phase II, 1994). No attempt to determine the
source of the over-prediction of PM was made in this study.
10
Cole and Zapert completed a study, submitted to the National Stone, Sand, & Gravel
Association (NSSGA), to test the ISC3 model at three Georgia stone quarries. It was stated that
the ISC3 model had a history of over-predicting particulate concentrations based upon data
obtained by the Department of Energy(cid:146)s Hanford, Washington site (Cole and Zapert, 1995).
This study calculated emission rates for operations, modeled the dispersion of the emitted
particulates, and completed a comparison of modeled versus measured particulate concentrations
for each of the three stone quarries. The model testing methodology was similar to that
employed in the previously mentioned U. S. EPA(cid:146)s report titled (cid:147)Modeling Fugitive Dust
Impacts from Surface Coal Mines, Phase I - III.(cid:148) The number and type of PM sampling
10
stations is unknown. However, it can be determined that there were at least two sampling
stations at each site because there was a primary downwind site and a site located upwind of the
prevailing winds to allow for subtraction of ambient PM concentrations. Once the comparison
10
of modeled versus measured results was completed, it was determined that the model over-
predicted the actual PM concentrations by a range of a factor of less than one (87% over-
10
prediction) to a factor of five (Cole and Zapert, 1995).
This study concluded that there could be two reasons for the over-prediction of the PM
10
concentrations by the ISC3 model. One was that the model failed at that time to account for any
deposition of the particulates. The other reason was that the emissions factor for unpaved roads
over-predicts the amount of emissions from haul trucks. The emissions factor was cited as a
possible cause of over-prediction because during the study, it was noted that the hauling
operations contributed 79-96% of the PM emissions from the entire quarrying operation (Cole
10
and Zapert, 1995). The U. S. EPA has been modifying a deposition routine for the ISC3 model,
but no literature has been found where testing has been completed using the deposition routine in
the ISC3 model. Cole and Zapert used an initial deposition routine created by the U.S. EPA, and
found that it reduced the modeled results by 5%. Even with this reduction in modeled PM
10
47
|
Virginia Tech
|
concentrations, there is still a significant over-prediction. This has led the NSSGA to embark on
a series of studies that attempt to better quantify the PM emissions from haul trucks.
10
The blasting operation at surface mines as a possible cause of the over-prediction by the
ISC3 model was eliminated. One reason is that there are no reliable emissions factors in the U.S.
EPA(cid:146)s AP-42 to calculate the amount of PM that could be emitted from blasting (Cole and
10
Zapert, 1995). Another reason is that the U.S. EPA considers the contribution of PM from
10
blasting operations to the emissions of PM from the entire mining facility to be small because
10
blasting is conducted infrequently not continuously (U. S. EPA, AP-42, Western Surface Coal
Mining, 1998). Therefore, the U.S. EPA has not pursued an accurate emissions factor in AP-42
nor has it emphasized modeling blasting emissions from surface mining operations.
Recently, Reed, Westman, and Haycocks completed a study on the ISC3 model using a
theoretical rock quarry. This study also concluded that hauling operations contributed the
majority of PM concentrations and that the haul truck emissions factors may be part of the
10
cause of the over-prediction of PM concentrations by the ISC3 model (Reed, et. al., An
10
Improved Model, 2001). However, further analysis of the data provided by the Cole and Zapert
study presented another hypothesis explaining the cause of the ISC3(cid:146)s over-prediction of PM
10
concentrations. This hypothesis stated that the ISC3 model was predicting concentrations from
stationary sources; however, in mining, the majority of the sources producing PM are moving
10
or mobile sources. Therefore, further investigation of the dispersion of PM from haul trucks at
10
surface mining operations was recommended. It was recommended that this investigation
include revising the ISC3 model to accommodate these moving sources.
2.5 Prior Field Studies of Dust Propagation at Surface Mine Operations
Field studies measuring dust concentrations have been completed at surface mining
operations. Two studies, already mentioned, (cid:147)Modeling Fugitive Dust Impacts from Surface
Coal Mining Operations - Phase I, II, & III(cid:148) by the U. S. EPA and (cid:147)Air Quality Dispersion
Model Validation at Three Stone Quarries(cid:148) by Cole and Zapert, form the basis for completing
the research for improving the modeling method for surface mining operations.
2.5.1 Olson and Vieth Haul Road Field Study
In 1987 Olson and Vieth completed a study titled (cid:147)Fugitive Dust Control for Haulage
Roads and Tailings Basins,(cid:148) which tested haul roads at a sand and gravel operation for dust
48
|
Virginia Tech
|
concentrations from haul trucks. This test was conducted to determine the effectiveness of the
use of dust suppressants on a haul road. Dust measurements were taken with the GCA RAM-1
dust monitors. This monitor was used without the cyclones; therefore, dust particles up to 20 :m
in size were measured. The stations were set up on berms along the downwind side of the haul
roads at a distance of 5 meters from the edge of the road. The trucks then passed the
measurement stations as the measurements were taken. One dust monitor was used for each
section of haul road; therefore three dust monitors were used.
The haul roads were tested both untreated and with treatments of AMS-2200 (a
petroleum derivative), Dustgard (a magnesium chloride (MgCl ) salt), Dust-Set (a resin), and
2
Haulage Road Dust Control (a wetting agent). The AMS-220 and the Dustgard were tested
simultaneously. The Haulage Road Dust Control testing was compared with water, since the
wetting agent, like water, is only a temporary dust control. The petroleum derivative and the
MgCl salt are more long-term dust control methods.
2
The Dust-Set (resin) was tested and cancelled after the average dust measurements after
application were 10.5 mg/m3 as compared to an average dust measurement of 5.1 mg/m3 for the
untreated area (Olson and Vieth, 1987). It was not stated why the treated section of haul road
had higher dust measurement readings than the untreated areas. An explanation was given that
the resin may work under different soil types and less severe traffic conditions.
For the petroleum derivative and the MgCl salt, three sections of haul road were used.
2
One was treated with the petroleum derivative, one was treated with the MgCl salt, and one was
2
left untreated as a control for comparison.
Three tests for the untreated, MgCl , and petroleum derivative were conducted at
2
different times. Table 2.4 shows the results of the tests and their average. It is assumed that each
measurement recorded for each test represents a haul truck passing the measuring device. The
control efficiency shown in Table 2.4 was calculated using the following equation (Olson and
Vieth, 1987):
T −B
%eff = 1− ×100 (2.19)
U −B
49
|
Virginia Tech
|
where
B = Background measurements.
U = Untreated test section measurements.
T = Treated test section measurements.
Test 1 and Test 2 seem to be very consistent. However, Test 3, the untreated section, had
an unusually high dust measurement level. It was mentioned that the humidity levels were lower
during Test 3 at 30% compared to 43% for Test 1 and 54% for Test 2 (Olson and Vieth, 1987).
Table 2.4 Results of haul road test conducted by Olson and Veith.
Treatment and site Test 1 Test 2 Test 3 Avg. of 3
Tests
Average Background, mg/m3 <0.02 <0.02 0.03 0.02
Average Dust Level, mg/m3
Untreated 1.52 1.29 12.5 5.10
MgCl 0.21 0.03 0.20 0.15
2
Petroleum Derivative 0.79 0.31 2.04 1.05
Control efficiency, %
MgCl 87.3 99.2 98.6 95.0
2
Petroleum Derivative 48.7 77.2 83.9 69.9
No mention was made of any differing traffic conditions, such as amount of traffic, types of
traffic vehicles, etc. The overall results show that the MgCl is a better dust control reagent.
2
The testing of the haul roads treated with Haul Road Dust Control (wetting agent) and
water was completed separately. The setup was similar to the petroleum derivative and MgCl
2
test. In this case, the wetting agent and the water were applied to their corresponding sections of
haul road. Dust measurements were taken with the same type of equipment as before. The time
measurements were taken at timed intervals from the application of the wetting agent and the
water. This resulted in average dust levels from the haul road. In reviewing the results, there is
no trend that shows the wetting agent is better than water or vice versa (Olson and Vieth, 1987).
These results show the amount of dust that is generated from the haul roads by the haul
trucks. The measurements show the total dust concentrations from the entire dust plume of the
50
|
Virginia Tech
|
haul truck. No particle size distributions were able to be determined, and the amount of
respirable or PM could not be determined. This study is not able to provide information on
10
how the concentrations degrade away from the haul road because the sampling locations were
located at the edge of the haul road.
2.5.2 Page and Miksimovic Drilling Operation Field Study
Another study completed by Page and Maksimovic, titled (cid:147)Transport of Respirable Dust
from Overburden Drilling at Surface Coal Mines(cid:148) contains results that are significant to the
propagation of dust. It sampled respirable dust as it dispersed from a rock drill at a surface coal
mine. There were two sampling setups. One was to set a sampling station at the drill on the
downwind side, with another sampling station located on the upwind side. Then five other
sampling stations were arranged in an arc on the downwind side of the drill with the radius
ranging from 16 to 71 meters. The center of the arc was oriented with the predominant wind
direction. The other setup was to set a sampling station at the drill on the downwind side, with
another sampling station located on the upwind side. Then five other sampling stations were
arranged in a line going away from the drill in the predominant wind direction. The distance
between the sampling stations was 10 to 15 meters depending on the space available on the
bench.
Personal gravimetric samplers with a 10 millimeter (mm) cyclone operating at a flow rate
of 2.0 liters/minute (L/min) were used along with integrated gas bag samplers using constant
flow pumps. The gravimetric samplers with the 10mm cyclones sampled the respirable dust
while the gas bag samplers measured the amount of sulfur hexaflouride (SF ), a tracer gas, that
6
was released from the drill. The SF was released in an attempt to determine or isolate the
6
specific dust source since it was thought that dust from other sources in the area might
contaminate the dust sampler. A total of three gravimetric samplers and one gas bag sampler
were operated at each sampling station location. The samplers were placed at heights of 1.2 to
1.5 meters above the ground.
The results reported in this study were the relative contribution of the total downwind
worker exposure attributable to the drilling operation. These results were calculated using the
defined equation (Page and Maksimovic):
51
|
Virginia Tech
|
Q C
R= d t ×100 (2.20)
QC
t d
where
R = Relative contribution of the total downwind worker exposure
attributable to the drilling operation in percent.
Q = Mass emission rate of respirable dust from the drill in
d
milligrams/minute (mg/min).
Q = SF tracer gas release rate in mg/min.
t 6
C = SF tracer gas concentration at downwind location in mg/m3.
t 6
C = Respirable dust concentration at downwind location in mg/m3.
d
The results were reported in this manner because MSHA data show that the highwall
driller and driller helper are the number one and two positions, respectively, with the greatest
exposure to respirable dust (Page and Maksimovic, 1987). This format should allow one to
determine the amount of respirable dust attributable to the drilling operation that a person
downwind of the drill would receive.
The conclusions were that the highwall driller and driller helper may be exposed to high
concentrations of respirable dust, but personnel downwind of the drilling operation will be
exposed to minimal amounts of respirable dust. The sphere of influence of the drilling operation
was determined to be approximately 76 meters. Beyond this distance, the drilling operations had
no effect on respirable dust concentrations (Page and Maksimovic, 1987). The highest
contribution of dust from drilling was 42%, occurring at a distance of 29 meters downwind with
the average contribution being 13.6% (Page and Maksimovic, 1987). Respirable dust from
drilling operations tends to decay rapidly with distance, within 32 meters, but no explanations
were attempted to determine the cause of this rapid decay (Page and Maksimovic, 1987).
2.5.3 Singh and Sharma Surface Coal Mine Field Study
Singh and Sharma completed a yearlong study titled (cid:147)A Study of Spatial Distribution of
Air Pollutants in Some Coal Mining Areas of Raniganj Coalfield, India.(cid:148) It measured ambient
pollutant dispersion from surface mining operations in India to determine seasonal variation.
Suspended particulate matter was measured along with sulfur dioxide and nitrogen oxides; only
52
|
Virginia Tech
|
the suspended particulate matter information is pertinent. It is assumed that only total dust was
measured in this study, as no particle sizes were mentioned.
The dust was measured using high volume samplers stationed at various locations
surrounding the mining areas. No separate measurements of the individual mining operations,
such as drilling, loading, and hauling were made. The results showed that the dust concentration
levels differed between day and night. At areas surrounding underground operations the
difference was less than 50 (cid:181)g/m3 and for surface operations the difference was more than 50
(cid:181)g/m3 (Singh and Sharma, 1992). The seasonal variations in the minimum background levels of
dust concentrations had a range of 100 (cid:181)g/m3 for monsoon season, 150 (cid:181)g/m3 for summer
season, and 200 (cid:181)g/m3 for the winter and spring seasons (Singh and Sharma, 1992). It was stated
that the highest levels of dust concentration occurred around the mining areas, suggesting that
they are significant contributors to the dust background levels. From the data presented, it was
shown that the highest average dust concentrations were approximately 500 (cid:181)g/m3 for summer,
400 (cid:181)g/m3 for spring, 400 (cid:181)g/m3 for winter, and 300 (cid:181)g/m3 for monsoon season (Singh and
Sharma, 1992). No further analysis of the data were presented.
2.5.4 Merefield, Stone, Roberts, Dean, and Jones Surface Coal Mine Field Study
Merefield, Stone, Roberts, Dean, and Jones completed a study titled (cid:147)Monitoring
airborne dust from quarrying and surface mining operations,(cid:148) where dust deposition samples
were taken from the surrounding area of an open pit coal mining operation in South Wales in
England. These samples were taken using the improved British Standard Dust gage. These
samples were then analyzed to determine the components in the dust, such as feldspar, gypsum,
halite, dolomite, calcite, kaolinite illite, and chlorite. The components were then used to
determine the origins of the dust whether it was from the surface coal mining operation or from
some other source. This method of analyzing the dust components is called dust
(cid:147)fingerprinting.(cid:148) The objective of dust (cid:147)fingerprinting(cid:148) is to eliminate dust nuisance in the
planning stages of a mining operation by determining if the dust actually comes from that
operation (Merefield, et. al., 1995). This study analyzed the chemical composition of the dust
deposition samples, and did not present any results for dispersion or deposition of the dust.
53
|
Virginia Tech
|
2.5.5 Jamal and Ratan Surface Coal Mine Field Study
A study completed by Jamal and Ratan sampled dust from different operations at a
surface coal mine in India and analyzed it for several characteristics. The operations sampled
included drilling, blasting, hauling and dumping, and loading of material. Dust deposition was
measured at each of the operations, both upwind and downwind, using double-sided tape
attached to a stub to collect the sample. Characteristics of dust analyzed from each operation
included particle shape, particle size, and particle composition.
Particle shape was divided into several categories: angular, sub-angular, and sub-
rounded. Particle shapes were determined from the samples taken, and placed into these
categories, resulting in a particle size distribution. Drilling was found to have the most angular
particles (Jamal and Ratan, 1997).
Particle size distribution was determined for each sample. The size categories were based
on the following categories (Jamal and Ratan, 1997):
Superfine < 0.5 (cid:181)m.
Fine 0.5 to 2.5 (cid:181)m.
Medium 2.5 to 5.0 (cid:181)m.
Coarse 5.0 to 15.0 (cid:181)m.
Very coarse 15.0 (cid:181)m and above.
It was found that the respirable fraction of dust (up to 5.0 (cid:181)m) varied according to what activity
was occurring during the sampling. Respirable dust at mining operations was found to be in the
range of 20% to 43% of total particulate matter, while residential areas were higher, above 45%
(Jamal and Ratan, 1997).
Composition of the dust was broken into categories of free silica, silicate, iron oxides,
and coal particles. The percentage of coal particles was higher in coal handling situations, while
the percentage of silica was higher in overburden drilling operations. The percentage of iron
oxides and silicates was found to be small, though it may be significant (Jamal and Ratan, 1997).
This study recommended that air quality regulations based upon mass alone may not be enough
to control air pollution and prescribe control measures.
54
|
Virginia Tech
|
2.5.6 Organiscak and Page Cab Filtration Field Study
Organiscak and Page measured respirable dust to determine cab filtration efficiencies for
drills and bulldozers separately. In sampling the drills, four dust samplers were placed under the
drill shroud, and four dust samplers were placed inside the cab. The bulldozer had four dust
samplers located on each side of the dozer above the tracks, and four dust samplers were placed
inside the cab. Each sampler had a 10 mm cyclone operating at 2.0 L/min. The respirable dust
from the sampler was captured on a 37 mm coal dust filter cassette. These samplers were placed
on several types of drills and dozers at several different sites.
Testing was conducted to determine the amount of respirable dust that was generated
outside the cabs and the amount of respirable dust inside the cabs. The samples captured on the
37 mm filters were analyzed for silica to determine the amount of silica in the respirable dust.
The silica contents of the respirable dust both inside and outside the cab were examined. Cab
efficiencies were given in the results of this study. No information concerning dust dispersion
was presented in this study. It did present a methodology for measuring respirable dust at a
surface mining operation (Organiscak and Page, 1999).
2.5.7 Organiscak and Page Drilling Operation Field Study
Organiscak and Page completed another study titled (cid:147)Assessment of Airborne Dust
Generated from Small Truck-Mounted Rock Drills.(cid:148) This study tested the propagation of
respirable dust from surface truck-mounted rock drills and the orientation of the sampler inlets to
wind direction. Three dust samplers were used; one located at the drill deck and two located
12.2 to 30.5 meters downwind of the drill. Each sampler contained a real-time aerosol monitor, a
RAM-1 with data logger, and two personal respirable dust gravimetric samplers. The results
showed that high respirable dust concentrations ranging from 8.68 to 95.15 mg/m3 were found
next to the drill shroud, while the respirable dust concentrations were significantly reduced (1.37
to 2.69 mg/m3) at distances of 12.2 to 30.5 meters downwind of the drill (Organiscak and Page,
1995). Results for inlet orientation confirmed prior U.S. Bureau of Mines research that inlets
oriented parallel to wind direction tend to over-sample respirable dust concentrations, while
inlets oriented perpendicular to the wind tend to under-sample the respirable dust concentration
(Organiscak and Page, 1995).
55
|
Virginia Tech
|
2.5.8 California Environmental Protection Agency Road Field Study
Another study that was recently published by the California Environmental Protection
Agency(cid:146)s Air Resources Board is a pre-certification program that evaluates the results of a dust
suppressant chemical. This program evaluated results from a study that measured PM from
10
vehicles as they traveled a section of treated and untreated dirt road. The type of equipment used
is not listed but is stated to be consistent with test methods used by the U.S. EPA (California Air
Resources Board, 2002). One sampler was placed 100 meters upwind and a set of samplers was
placed at a distance of 2 meters on both sides of the road at heights of 1.3, 2.0, 2.5, 5.0, and 10.0
meters. Another set of samplers was placed 30 meters from the road on both sides of the road, at
a height of 2 meters. This set up was used for the untreated section and another similar setup
was used for the treated section of road (California Air Resources Board, 2002). Measurements
were taken for 17 vehicle passes. Table 2.5 shows the summary of the results from the upwind
sampler and the average of all the results of the adjacent road samplers. The results presented
are used to determine that the chemical dust suppressant has a control efficiency of
approximately 84% (California Air Resources Board, 2002). No analysis of the results was
conducted to examine the PM propagation or dispersion.
10
2.5.9 NSSGA Field Studies
The NSSGA embarked upon a series of studies to better define the emissions factor from
the U. S. EPA(cid:146)s AP-42 for hauling operations at surface mines. The series of tests, completed in
North Carolina, to test the AP-42 emissions factor used a complex sampling system that
consisted of large hoods with ductwork and fans to collect the dust emissions from haul trucks.
The emissions collected by this system were then sampled using the U.S. EPA reference method
201A. This reference method consists of a sampling nozzle, a PM cyclone, and a flow control
10
system (Richards and Brozell, 2001). The flow control system controls the flow rate of air
through the nozzle and PM cyclone. The dust sample enters the nozzle and flows through the
10
cyclone; the PM fraction is collected on a filter to obtain a gravimetric sample. Four of these
10
complex sampling systems, two on each side of the haul road, were used in the haul truck testing.
In addition, ambient PM Hi-Vol monitors were placed upwind. Additional samplers such as
10
PM Hi-Vol samplers, nephelometers, and cascade impactors were used as needed.
10
During testing, road surface moisture level, road silt content, stone production, number of
56
|
Virginia Tech
|
truck passes, wind speed, wind direction, and truck speed were all monitored (Richards and
Brozell, 2001). The results of the monitoring and sampling were used to determine the accuracy
of the emissions factors in AP-42 and in the development of new emissions factors that were
stated to be more accurate. The results of these tests were not used to analyze the dispersion or
propagation of PM from the haul trucks. The NSSGA also conducted similar tests on different
10
mineral processing equipment, such as crushers and screens, to better define the emissions factor
equations for these operations.
The NSSGA continued sponsoring testing at stone quarries, evaluating the air quality
impact of stone quarrying operations. Studies were conducted at three mine-site locations to
measure the amount of PM emitted from stone quarries and their associated plants.
2.5
Monitoring sites were placed within the quarry property boundary with one monitoring site
upwind from the plant and quarry and two monitoring sites located downwind. The sampling
equipment used in this series of studies was the model FRM-2000 PM monitors that are
2.5
manufactured by Rupprecht & Patashnick Co. (Richards and Brozell, 2001). These samplers
were operated 24 hours per day for thirty days. The results of the study demonstrated that there
was only a small amount of difference between the upwind and downwind sample results when
the wind direction was from the upwind monitor to the downwind monitor and traveling across
the mining site. One study in North Carolina showed the difference in PM to be only 0.7
2.5
(cid:181)g/m3 (Richards and Brozell, 2001). When the wind blew from directions across other sources
rather than across the mine site, the differences in the upwind and downwind sampling results
were greater. The results of this study were stated to establish the fact that other off-site sources
had greater emissions of PM and that mining operations are not a significant contributor of
2.5
PM (Richards and Brozell, 2001).
2.5
Another study completed by the NSSGA analyzed the deposition of PM in addition to
10
PM and TSP. Monitoring sites were placed within the quarry property boundary with one
2.5
monitoring site upwind from the plant and quarry at a distance of approximately 518 meters and
three monitoring sites located downwind at distances of 350, 670, and 975 meters from the plant
and quarry (Richards and Brozell, 2001). The model FRM-2000 monitors that are manufactured
57
|
Virginia Tech
|
by Rupprecht & Patashnick Co. were used for measuring PM , Andersen and General Metal
2.5
Works High-Volume samplers were used for measuring PM , and General Metal Works High
10
Volume samplers were used for measuring TSP (Richards and Brozell, 2001). These samplers
were operated 24 hours per day for fourteen days.
The upwind results for PM averaged 10.3 (cid:181)g/m3 and the downwind results averaged
2.5
9.3 (cid:181)g/m3 (Richards and Brozell, 2001). These results were similar to the results from previous
studies conducted by NSSGA for PM and reinforced the previously established fact that
2.5
mining operations are not significant contributors of PM . The results for PM and TSP
2.5 10
showed that both fractions had high dust concentrations for the first sampling point at 350 meters
away from the plant. No mention was made of the actual recorded results, but these
concentrations were said to be lower than NAAQS for 24-hour periods (Richards and Brozell,
2001). These dust concentrations quickly dropped to background levels, as defined by the
upwind sampling point, at the next or second downwind sampling point that was 975 meters
away from the plant (Richards and Brozell, 2001).
This review shows that thirteen field studies conducting dust measurements at surface
mine-sites have been completed with seven being conducted to measure dust concentrations from
the entire mining operations. One study examined cab filtration efficiencies of drills and dozers
while two studies evaluated dust dispersion from drilling operations. Drilling operations differ
significantly from hauling operations, but these evaluations are of significant importance as they
present possible procedures for use in conducting similar studies on haul trucks. The other three
studies measured dust from haul trucks, but the emphasis was placed on the amount of dust that a
truck creates. An evaluation of the dispersion of dust from haul trucks was not completed. This
lack of information for haul trucks warrants further investigation to characterize the dust
emissions of haul trucks.
2.6 Summary
A number of studies relevant to the research have been found on both modeling
techniques and mine specific models. Mathematical modeling techniques, dust propagation
models for underground mining, dust propagation models for surface mining, and the completion
59
|
Virginia Tech
|
of field testing for the measurement of dust from mining operations are all topics that have been
previously discussed. A review of the creation of dust dispersion models for surface mining
shows that eight different models have been created. Of these only three have been tested at
actual mining operations. The tested models are the Shearer model, which states that 1/3 of the
mining emissions escapes the open-pit; the blasting model created by Kalgoorlie Consolidated
Gold Mines, Ltd.; and the ISC3 model. The ISC3 model is the only model that is accepted by
the U.S. EPA for conducting dispersion modeling on surface mining facilities.
The U.S. EPA, which maintains the guidelines for air quality modeling, created the ISC3
model from past modeling algorithms in the ISC2 models (U.S. EPA, Users Guide Vol. I, 1995).
This model has become the basis for predicting PM concentrations from mining operations. A
10
study was first conducted by the U.S. EPA to determine its validity for surface mining
operations. This study titled (cid:147)Modeling Fugitive Dust Impacts from Surface Coal Mining
Operations - Phase I, II, & III(cid:148) was conducted at a western surface coal mine and compared
modeled results with actual field measurements. The results of the study stated that the ISC3
model over-predicted the amount of PM from the mining operation by more than a factor of
10
two (U.S. EPA, Modeling Fugitive Dust Phase I, 1994).
Cole and Zapert studied three stone quarries in Georgia testing the ISC3 modeling results
with actual field measurements. This study titled (cid:147)Air Quality Dispersion Model Validation at
Three Stone Quarries(cid:148) stated that the ISC3 model over-predicted PM concentrations from
10
mining operations by a factor of 2 - 5 over actual surface mine emissions (Cole and Zapert,
1995). Cole and Zapert also state that haul trucks contribute to most of the PM emissions from
10
surface mining operations.
The fact that the ISC3 model over-predicts PM concentrations from surface mining
10
operations and that haul trucks generate the majority of PM that is emitted by the surface
10
mining facility are the reasons for conducting further research to improve the accuracy of the
ISC3 model. Analyzing the dust generation and dispersion from haul trucks at surface mining
operations will allow for this improvement. The field studies reviewed show previous research
has been conducted on PM emissions and dispersion from the entire mine site. While this is
10
meaningful research, analyzing the specific operations conducted at a mine site will yield results
that can be used to improve PM modeling. Evaluating PM dispersion from entire sites has
10 10
60
|
Virginia Tech
|
Chapter 3 Dynamic Component Program Development
3.0 Introduction
The goal of the research is to create a model that can more accurately predict the
dispersion of PM from surface mining operations, and the literature review has shown that no
10
models currently exist that can accomplish this task. Currently, the ISC3 is the best available
model for estimating PM dispersion but it over-predicts by a factor of 2 - 5. The lack of
10
accurate models has led to the need for a new mining-based model that attempts to correct the
over-prediction of the ISC3 model. Chapter Three outlines the development of the Dynamic
Component Program completed through this research to meet the needs of the mining industry.
The chapter reviews the issues affecting the modeling process and the emissions factors input
into the model. A study, completed by Cole and Zapert, hypothesized that the emissions factors
used to calculate emissions from each sub-operation at mine sites or from the mining equipment
are inaccurate and cause the over-prediction by the ISC3 model (Cole and Zapert, 1995). This
hypothesis is examined and found not to be the only cause of the model over-prediction. The
real cause of model inaccuracy is reviewed in this chapter and steps to correct this inaccuracy are
developed and implemented in the new Dynamic Component Program. Finally, Chapter Three
reviews the operation of the new model and compares its results with the ISC3.
3.1 Factors Affecting the Modeling Process.
Modeling of dust at surface operations is a complex process, and there are many factors
influencing the dispersion of dust. Some of these factors are meteorological conditions, such as
wind speed and direction; temperature; relative humidity; rainfall amounts and frequency;
topographical conditions of the surface mine; topographical conditions of the surrounding areas;
vegetation types of the surrounding area; and physical properties of the dust, such as shape,
density, and size distribution. Because a number of these factors involve inputs into the
modeling routine, they will be reviewed in the following sections.
3.1.1 Meteorological effects
Meteorological factors, including temperature, wind, and rainfall, have an impact on
PM transport, deposition, and dispersion. Temperature controls the production of wind that
10
causes turbulence over the surface of the earth, which in turn affects the dispersion of PM
10
62
|
Virginia Tech
|
(Schnelle and Dey, 2000). Temperature may also create inversions, which limit the altitude
PM can reach in certain areas (Schnelle and Dey, 2000).
10
The wind direction will determine the direction that dust will travel, as wind is the
primary transport mechanism. Wind speed causes PM to disperse. This dispersion can be seen
10
by the fact that the concentration of PM at its source will be higher than the concentration of
10
PM at a distance away from the source if the wind speed is high (Schnelle and Dey, 2000).
10
High wind speed can also inhibit PM deposition. In addition, wind speeds of 5.4 meters/second
10
(m/s) or greater have the ability to acquire, transport, and disperse dust without any mechanical
disturbances (Hesketh and Cross, 1983).
Meteorological data, especially wind data, can vary tremendously from site-to-site. The
modeling routines that handle meteorological data are used consistently throughout various
industries, and mining operations are not unique enough to warrant a change in these routines at
this time. It will be assumed that the routines using the meteorological data are inherently
correct.
A third weather factor, rainfall, has an affect on the concentration of PM in the
10
atmosphere. Water causes dust to coagulate and become heavier particles that cannot be
transported by wind. Like rainfall, humidity can also lead to the coagulation of dust, though not
as great as the direct application of water (Hesketh and Cross, 1983). As a result, areas of the
United States that have a large number of days with rainfall will have less wind erosion rates
than areas with low rainfall amounts (Hesketh and Cross, 1983). Rainfall will not be addressed
in this research in order to simplify the dispersion modeling process.
3.1.2 Terrain Effects
Topographic conditions have a great impact on the modeling of the dispersion of PM
10
because topography helps create wind patterns that transport PM . Topographical effects, such
10
as mountain passes and valleys, can amplify the wind. Studies have been completed which have
shown the effects of topography on particulate deposition. One such study by Goossens and
Offer demonstrated that dust tends to deposit on the windward side of slopes or hillsides rather
than the leeward side as previously thought (Goossens and Offer, 1990). Various factors can
affect the deposition of dust on the windward slopes and these factors are based upon the
condition of the slopes. For example, if the leeward side of the slope has more vegetation, a
63
|
Virginia Tech
|
higher soil moisture content, or higher moisture [dew on vegatation] than the windward side,
then the leeward side might collect more dust than the windward slope (Goossens and Offer,
1990).
In urban areas, buildings are considered part of the local topography. Studies completed
on the aerodynamics of buildings demonstrate that dust can become trapped in low-pressure
cavities on the lee side of buildings (Schnelle and Dey, 2000). This trapping effect will have a
significant impact on the long-range dispersion of pollutants because the portion of the pollutant
trapped on the leeward side of a building will not be available for dispersion or deposition further
downwind.
The terrain data will also vary tremendously from site-to-site, causing differences in
results from the ISC3 model. There are modeling routines that handle the topographical and
vegetation effects. These modeling routines are used consistently throughout various industries
in a manner similar to the meteorological data. Therefore, any possible errors from the terrain
modeling routines will be neglected at this time in order to simplify the dispersion modeling
process.
Vegetation would also have an effect on dust dispersion. Studies have been conducted on
the effects of dust on vegetation, but few of them have been completed on the effects of
vegetation on dust transport. It has been shown that vegetation creates a greater surface area on
which PM can deposit (Farmer, 1993). However, it is not known whether the vegetation attracts
the dust or if the dust is simply depositing in the surrounding area that contains vegetation.
3.1.3 Material Properties
The physical properties of size, density, and shape of the particulates have a great impact
on their transport. Particles in the size range of 100 :m - 30 :m can be windblown and create a
nuisance. Particles <30 :m can be suspended and cause nuisance problems. Particles <15 :m
are inhalable and, with densities <2.5 grams/cubic centimeters (g/cm3), can be transported long
distances. Particles <2.5 :m are respirable and can be transported long distances (Hesketh and
Cross, 1983). Smaller particles remain airborne longer and deposit more slowly than larger
particle sizes. This is due to the higher terminal settling velocities of larger particles. For
example, a 10 :m particle has a terminal settling velocity of 0.305 cm/s while a 1 :m particle
has a terminal settling velocity of 0.0035 cm/sec (Lippman, Chapter 13 Filters and Filter
64
|
Virginia Tech
|
U.S. EPA states that these published factors are neither standards nor recommended values (U. S.
EPA, AP-42, Western Surface Coal Mining, 1998). If there are better methods available for
estimating emissions, then these methods can be used over the emission factors. However, these
methods must be proven to be more accurate which can be costly and time consuming.
Therefore, the emission factors are more readily accepted than other methods of estimating
emissions from facilities.
The emission factors, which determine the amount of emissions from a source, are input
into the model. A study by Cole and Zapert states that the over-estimation of emissions from a
source can cause the modeling process to over-estimate its results (Cole and Zapert, 1995).
Since the emissions factors generally provide the basis for all modeling exercises. a review of the
emissions factors is conducted.
3.2 Review of Surface Mining Emission Factors
Cole and Zapert(cid:146)s study demonstrated that the major activities contributing to PM
10
emissions were truck hauling on unpaved roads and loading of stockpiles and trucks. Table 3.1
shows the distribution of PM emissions from three actual quarry operations used in Cole and
10
Zapert(cid:146)s study (Cole and Zapert, 1995). These emissions were calculated using the emission
factors from the U. S. EPA(cid:146)s AP-42.
Calculating PM emissions on a theoretical quarry by using the emission factors from
10
U.S. EPA(cid:146)s AP-42 gave the results shown in Table 3.2. These results agree well with the results
from the Cole and Zapert study. The assumptions made for the calculated results in Table 3.2
are
1. The theoretical quarry had a production rate of approximately 900,000 metric tons
per year. This production rate was sustained by operating the quarry eight hours
per day, 250 days per year.
2. The material was assumed to have a silt content of 10%, a moisture content of
1.0%, and a specific weight of 2.37 metric tons per cubic meter.
3. The theoretical quarry contained a crushing operation with eight final product
stockpiles located nearby. These stockpiles are also the load out areas for loading
over-the-road trucks, which are used to transport the product to its final
destination.
66
|
Virginia Tech
|
4. The theoretical quarry used 1 drill, 2 in-pit loaders, 4 haul trucks, 1 grader, 1
bulldozer 2 pickups, and 200 over-the-road trucks hauling final product from the
site per day.
5. The processing plant used 1 primary crusher, 1 secondary crusher, 1 terteriary
crusher, 2 main screens, and 1 fines screen. The processing plant also contained
18 conveyor transfer points and 8 final product stockpiles.
6. The haul trucks, over-the-road trucks, and the grader required the number of miles
traveled per year on the site. A total number of 12,767 kilometers (km) per year
was used for each haul truck, 826 km per year was used for each over-the-road
truck, and 6,439 km per year was used for the grader. No adjustment was made
for watering of any haulroads or using any dust suppression chemicals.
7. The bulldozer emissions factor was based upon 2,000 hours of operation per year.
8. The average wind speed was assumed at 3.5 m/s. This wind speed was used in
the emission factor equation for loading trucks and loading stockpiles.
9. No results were calculated for blasting since the emissions factor concerns coal
operations only and the emissions factor is not to be used for quarrying operations
(United States Environmental Protection Agency, AP-42, 1995).
10. Stockpile wind erosion calculations were completed and found to be negligible in
this particular case.
11. In all emission factor calculations, no corrections or adjustments were made for
any occurrences of precipitation during the year.
Since the majority of the PM emissions come from the hauling operations at a quarry, it
10
follows that the haul truck emissions should be evaluated. If the emissions from the haul trucks
are being over-predicted, then lowering the estimate of the amount of emissions from haul trucks
should cause the model to become more accurate.
3.3 Testing the ISC3 Model with Varying Emissions
A study was completed to review the ISC3 model by varying the emissions of a
theoretical quarry (Reed, et. al., An Improved Model, 2000). In this study a theoretical quarry
layout was used. Source emissions were calculated for this quarry layout using the assumptions
69
|
Virginia Tech
|
from the previous section. All this information was entered into the ISC3 program. The
program was run and the results recorded. Four runs were completed, with changes made only to
the emissions from the quarry as follows: 50% of total emissions, 100% of total emissions, 150%
of total emissions, and 200% of total emissions. Figure 3.1 shows the quarry layout, and Table
3.3 lists the program input sources and their corresponding emissions. All sources were input as
area sources except the haul truck road and the over-the road (OTR) truck road; these two were
input as line sources. Open pit sources were input as area sources on the ground surface. The
weather data used in the study was 1991 Roanoke weather data. The ISC-AEROMOD View
created by Lakes Environmental Software was used to facilitate the use of the ISC3 model. This
program uses the ISC3 model in its original form for dispersion modeling, but it creates a
Windows interface to the model, which facilitates inputting data and running the model (ThØ, et.
al., Vol. I, 2000).
The results of the ISC-AEROMOD View program produced a contour plot of the PM
10
concentrations surrounding the quarry, with the contours representing the concentration of PM
10
in grams/cubic meter (g/m3). Two sample plots for 100% emissions data, one for the fourth
highest 24-hour concentration and one for the annual average concentration, are shown in
Figures 3.2 and 3.3, respectively. The first plot shown in Figure 3.2 produces contours of the
fourth highest 24-hour concentration that was calculated during the year. The second plot shown
in Figure 3.3 produces contours of the annual average of the daily concentrations. The annual
average concentration results were used because the purpose was to compare average annual
results, not the 4th highest results.
Both Figures 3.2 and 3.3 contain cross-section lines, where the contour results are
graphed as PM concentration versus the (cid:147)X(cid:148) coordinates for the east-west cross-section or (cid:147)Y(cid:148)
10
coordinates for the north-south cross-section. The graphs for the annual average PM
10
concentrations along the east-west and north-south cross-section lines are shown in Figures 3.4
and 3.5, respectively. These graphs are intended to show the calculated concentration of PM as
10
a function of distance. The sources of these modeled emissions results that intersect the cross-
section line are located at the highest points of the graphs.
Figures 3.4 and 3.5 also show that as the emissions were increased or decreased by a
constant amount, the dispersion modeling results also increased or decreased by a similar
70
|
Virginia Tech
|
amount. For example, when the 100% emission was doubled to 200% emissions, then the
dispersion modeling results also doubled. However, a review of the results from the study
conducted by Cole and Zapert, showed that the differences between modeled and measured
concentrations were not consistent (Cole and Zapert, 1995). This is shown by calculating the
average and standard deviation of the difference between the modeled and measured
concentrations, as shown in Table 3.4. The standard deviations calculated were almost as great
as the calculated average, which demonstrates a lack of consistency in the difference between
modeled and measured concentrations.
Further calculations were completed on the Cole and Zapert data by reducing the
modeled concentrations by 50% and calculating the average and standard deviation on the
difference between the modeled and measured concentrations for these reduced concentrations.
These results are shown in Table 3.5. These calculations resulted in the averages being closer to
zero, demonstrating improved accuracy of the model. But the standard deviations, although
smaller than previously calculated, were still very large compared to the averages. This
comparison demonstrates that this improved accuracy was achieved by the model over-
predicting and under-predicting in an inconsistent manner. Therefore, while the accuracy may
be improved, the precision of the ISC3 model will not be improved by decreasing the emissions
input into the model or by revising the emissions factors for unpaved roads, loading trucks, and
loading stockpiles. The ISC3 model, itself, needs to be modified to improve its correctness.
3.4 The Cause of the Inconsistency in Modeled Vs. Measured Results
Assumptions of the ISC3 model are that the emission source is stationary with a constant
emission rate and that the model calculates the concentration of a pollutant at a stationary
receptor. The ISC3 model is generally used for point sources such as stacks (U.S. EPA, User(cid:146)s
Guide Vol. II, 1995). However, a mining facility has a different scenario. All measurements for
PM emissions from a mining facility are completed at stationary points, but the majority of the
10
PM emissions sources at mining facilities are moving sources (haul trucks and loading). This
10
has been demonstrated through emissions calculations completed for actual and theoretical
facilities.
72
|
Virginia Tech
|
The over-prediction of PM concentrations by the ISC3 model may occur because the
10
model applies the total emissions of the mobile sources to a specific area source. This
application creates a constant uniform distribution of emissions over this specified area, as
shown in Figure 3.6. In real-life, the emissions from traffic or a mobile source are not uniform
(Micallef and Colls, 1999). Figure 3.7 shows how the emissions from a moving haul truck
actually occur. They act more like a moving point source rather than the continuous uniform
emission distribution that the ISC3 model uses. A moving point source is more representative
because the emissions occur abruptly as the emissions source approaches a point and then slowly
dissipate as the emissions source moves away from the point. At a mining facility, this moving
point source will move along a predictable path from the pit to the processing operations.
There have been studies of pollutant dispersion modeling along highways, and modeling
the emissions of haul trucks is similar to modeling emissions of highways. However, these
studies generally focus on pollutants other than PM , and they use other models besides the
10
ISC3 model. The emissions from traffic flow are also generally treated as line or volume
sources, which are basically modified area sources, and are modified depending upon the traffic
volumes and the scale of the modeling (Owen, et. al., 1999). Line or volume sources are used
because dispersion modeling systems, coupled with traffic flow models, do not currently exist
(Schmidt and Schafer, 1998). Mining facilities generally do not have a high traffic volume;
therefore, the line or volume sources are not adequate for representing the haul truck flow at
mining facilities.
In order to more accurately represent this moving point source in dispersion modeling, a
dynamic component representing haul truck emission sources shall be introduced into the ISC3
model to reduce the over-prediction of PM emissions for surface mining operations.
10
3.5 Proposed Correction of the ISC3 Model
The Dynamic Component is based upon the Gaussian dispersion equation for point
source emissions since this is the approach of the current ISC3 model. There are other modeling
equations available for estimating dispersion such as the Box dispersion model, the Eularian
dispersion model, and the Lagrangian dispersion model. The Box model is the simplest
modeling method, but it applies the emissions to an area creating a uniform distribution of
78
|
Virginia Tech
|
emissions over that area (Collett and Oduyemi, 1997). This is not representative of the mobile
sources; therefore, the Box modeling method will not be used. The Eularian and Lagrangian
dispersion models use detailed information of dispersion parameters, such as rate of advection,
rate of turbulent diffusion, and rate of molecular diffusion of pollutants, to predict dispersion
(Collett and Oduyemi, 1997). These modeling methods are more detailed than required at this
time therefore, they will not be used at this time but they may represent areas of future research.
3.5.1 Dynamic Component Equations
The main dispersion equation for the Dynamic Component model is the Gaussian
dispersion equation, which is represented by the following equation:
2
χ= QKVD exp−0.5 y (3.1)
2πu σ σ σ
s y z y
where
P = hourly concentration at downwind distance x in :g/m3.
Q = pollutant emission rate in grams/second (g/s).
K = conversion factor 1x106 for P in :g/m3 and Q in g/s.
V = vertical term.
D = decay factor, a default value of 1 if decay of pollutant is unknown.
u = mean wind speed at emissions release height in m/s.
s
F = standard deviation of lateral concentration distribution in m.
y
F = standard deviation of vertical concentration distribution in m.
z
The decay term D is assumed to be one. The vertical term V is calculated using the
mechanical mixing height, the stack height or emission height, and the receptor height. However,
because the emission height and the receptor height for haul trucks are nearly equal, and the
emissions of the haul truck will never be above the mechanical mixing height, V can be
eliminated from Equation (3.1).
The u term is an adjustment of the wind speed using the measurement height and the
s
emission height. Since the emission height will never be above the measurement height, the
actual wind speed was used. These changes resulted in the following new dynamic component
equation:
81
|
Virginia Tech
|
where
x = downwind distance in m.
X(R) = x coordinate of the receptor in m.
X(S) = x coordinate of the source in m.
Y(R) = y coordinate of the receptor in m.
Y(S) = y coordinate of the source in m.
WD = north azimuth of wind direction in degrees.
The other term F has the following equation:
z
σ =axb (3.6)
z
where
F = standard deviation of vertical concentration distribution in m.
z
x = downwind distance in km.
a,b = constants defined by Pasquil-Gifford stability categories and the
downwind distance.
The y term used in Equation (3.2) is given by the following equation:
( ( ) ( )) ( ) ( ( ) ( )) ( )
y = X R − X S cosWD − Y R −Y S sin WD (3.7)
where
y = crosswind distance in m.
X(R) = x coordinate of the receptor in m.
X(S) = x coordinate of the source in m.
Y(R) = y coordinate of the receptor in m.
Y(S) = y coordinate of the source in m.
WD = north azimuth of wind direction in degrees.
All of the previous terms and equations are the same equations used by the USEPA in the ISC3
model (U.S. EPA, User(cid:146)s Guide Vol. II, 1995). The only equation that differs is Equation (3.2)
where V has been removed and where u becomes the actual wind speed instead of an adjusted
s
wind speed.
83
|
Virginia Tech
|
The emission rate Q for PM is calculated for the haul trucks using the emissions factor
10
for haul trucks published in U. S. EPA(cid:146)s AP-42 (U.S. EPA, AP-42, Unpaved Road, 1998). This
emission factor equation is represented by the following equation:
( ) ( )
s 0.8 W 0.4
2.6
Q= (12 ) 3 (3.8)
M 0.3
0.2
where
Q = emissions from haul truck in pounds/vehicle mile traveled
(lb/vmt).
s = surface material silt content in %.
W = mean vehicle weight in tons.
M = surface material moisture content in %.
Equation (3.8) uses English units instead of SI units. In order to use Q, calculated from equation
(3.8), Q had to be converted from lb/vmt to g/s. This was accomplished by converting lb/vmt to
grams per vehicle meters traveled, then multiplying it by the speed of the haul truck in m/s.
3.5.3 Dynamic Component Algorithm
The procedure for calculating dispersion of PM for mobile sources by the new model is
10
described as follows: The processed hourly meteorological data will be read into the program
and used in the ISC3 model for use in the Equation (3.2). Each hourly meteorological data point
will be used at each (cid:147)X(cid:148) and (cid:147)Y(cid:148) coordinate of the source and receptor as described later.
Haul road information will be input into the program representing the possible locations
of the source (haul truck). Receptor coordinates will be input representing the desired locations
for results of the modeling exercise. Each source and receptor will have an (cid:147)X(cid:148) and a (cid:147)Y(cid:148)
coordinate. These (cid:147)X(cid:148) and (cid:147)Y(cid:148) coordinates will be input into the downwind and crosswind
distance Equations (3.5) and (3.7), respectively, to calculate the downwind distance x and
crosswind distance y, which are then input into Equation (3.2), the Gaussian equation.
Receptor array variables will be created which are representative of the locations of PM
10
monitors at a mine site. These variables will contain the PM concentrations calculated by
10
Equation (3.2).
84
|
Virginia Tech
|
The first step of the program is to input all the data required for the calculations. The data
include receptor coordinates, haul truck pathway coordinates, weather data for the entire year,
and all other information required by the emission factor equation.
Figure 3.10 shows the window displayed when the program is started. Once the program
is started, information is requested via interactive menus. The first input is the haul road
information. Figure 3.11 shows the series of window displays for inputting the haul road
information. An equation characterizing the haul road in Cartesian coordinates must be known.
The equation is in the form:
y =mx+b (3.9)
where
y = Y coordinate of the line representing the haul road.
x = X coordinate of the line representing the haul road.
m = slope of the line representing the haul road.
b = Y-intercept of the line representing the haul road.
Once entered, the haul truck pathway coordinates are calculated from the straight line between
the starting and ending coordinates that are input into the program.
The receptor coordinates are read in from an ASCII file. The program will automatically
ask for the location of the file, which contains the receptor coordinates. Figure 3.12 shows the
window displays for the receptor information.
Once the receptor information is read-in by the program, it will display a series of
interactive menus to obtain information concerning the haul road material and the haul trucks
using the road. Figure 3.13 presents the window displays for obtaining this information. The
program will then calculate the PM emissions for the haul truck using the information obtained
10
and the U.S. EPA(cid:146)s emission factor equation for unpaved roads from AP-42. The results are
displayed as shown in Figure 3.14. The program will then ask for the height of the receptor to
complete the first stage of inputs.
88
|
Virginia Tech
|
haul road, amount of weather data, and the number of receptors. Files with sizes up to 29
Megabytes are common. Figure 3.18 presents the windows that are displayed to create the array
data file.
To calculate the final PM concentration for each receptor, the program (cid:147)Dynamic
10
Manipulation(cid:148) must be used. Memory constraints on the PC required that the process of
calculating these final concentrations be split into two programs: Dynamic Component ISC3,
previously described, that creates the array file, and Dynamic Manipulation that calculates the
final PM concentrations.
10
Dynamic Manipulation requires input of the two data files created by the Dynamic
Component ISC3 program. These two data files are the input data file and the array variable file.
At this point in the modeling exercise, the frequency of the haul trucks traveling the haul road
must be known. Figure 3.19 shows the window displays for entering the input data file. Figure
3.20 shows the window displays for entering the array variable file.
The next step is to calculate the final PM concentrations for each receptor. This is
10
completed using the following equation:
cD cD
y ( ) + BEt− ( )
S S
60 60
X = (3.10)
t
where
X = final PM concentration for receptor after correction for
10
background emission level in (cid:181)g/m3.
y = PM concentration for receptor before correction for background
10
emission level in (cid:181)g/m3.
c = frequency of trucks traveling the haul road (number of truck
passes).
D = distance of haul road segment in meters.
S = speed of haul truck in m/sec.
BE = PM concentration background emission level for site in (cid:181)g/m3.
10
97
|
Virginia Tech
|
The list of corrected PM concentrations are averaged for each receptor. These averages are
10
then corrected using the PM concentration background emissions level for the site. This
10
practice results in a more conservative result because the correction has the effect of raising the
receptor concentrations. Figure 3.21 shows the window display for calculating the final PM
10
concentrations.
Once the final PM concentrations are calculated, the results are output to the default
10
printer. There is no method to display the results to the video screen unless Adobe Acrobat is set
as the default printer. Figure 3.22 shows the window for printing out the results. Figure 3.23
shows the printout of results using the data from the August 2, 2002 field study. The printout
presents the input data along with the results for the modeling exercise. On the first page the
PM concentration results for each receptor are uncorrected for PM concentration background
10 10
emission levels. The time-weighted-average PM concentration results, shown under page two
10
of the printout, are corrected using the PM concentration background emission levels for the
10
site.
3.7 Comparison of Dynamic Component Program and the ISC3 Program
To test the Dynamic Component Program, a test situation was created and run in
both the Dynamic Component Program and the ISC3 Program. The results were then compared
to determine whether the Dynamic Component Program was an improvement over the ISC3
Program. If the results of the Dynamic Component Program predict dust concentrations that are
lower than the ISC3 model, it will be considered an improvement over the ISC3 model.
The test situation consisted of a haul road and receptor layout, as shown in Figure 3.24.
The haul road shown in this figure represents the possible locations of the point source (haul
truck). Other information used in the test situation is listed in Table 3.6. The receptors were
located perpendicular to the haul road, with approximately 15 meters between the receptors.
Receptor coordinates were input by data file. The weather data used in the test situation was
downloaded from the U. S. EPA(cid:146)s website and converted using RAMMET View, a program
created by Lakes Environmental that emulates the U.S. EPA(cid:146)s PCRAMMET program (ThØ, et.
al., Vol II, 2000). The data were from the Pittsburgh Greater International Airport for 1990.
These data were also input by data file.
102
|
Virginia Tech
|
It should also be noted that PM concentrations are highly dependent upon wind
10
direction. A wind rose diagram of the Pittsburgh weather data, used in this situation and
produced by RAMMET View, is shown in Figure 3.26. The wind rose diagram shows the wind
directions coming predominantly from the southwest. However, the wind also came from the
northwest and the southeast at approximately equal frequencies, resulting in PM concentrations
10
on both sides of the haul truck road.
The results of the Dynamic Component Program show a promising improvement over the
ISC3 Program. This improvement has been achieved through a different method of modeling
emissions from a haul truck: modeling mobile sources as incrementally moving sources, rather
than modeling mobile sources as stationary sources. This methodology, while a promising
improvement, requires further testing in order to verify the results as realistic.
In order to determine the accuracy of the Dynamic Component Program, a comparison
will be made to actual data (presented in Chapter 5). Data were collected on haul trucks
operating at actual surface mining operations to validate this model. The field studies consisted
of sampling airborne PM from a haul truck on an unpaved surface.
10
3.8 Summary
The Dynamic Component Program uses the same equations as the ISC3 Program to
calculate PM concentrations. However, the methodology of calculating these concentrations
10
for mobile sources has been changed. Instead of dividing the emissions of the source over the
area of the mobile source path, the entire emissions from the haul truck are applied at points
along the path of the source. This results in an array of PM concentrations that are then
10
averaged for each receptor. This methodology has produced promising results. In the test
situation, the dynamic component results were 74-79% lower than the ISC3 results, agreeing
with prior research completed by Cole and Zapert in 1995.
In order to test the accuracy of the new Dynamic Component Program, field studies were
completed. These field studies test the dynamic component(cid:146)s accuracy and should support the
results of the new Dynamic Component Program. In addition, the field studies help enhance the
understanding of PM propagation from mobile sources.
10
109
|
Virginia Tech
|
Chapter 4 Field Studies
4.0 Field Study Design
A field study was designed to collect dust concentration information from haul trucks as
they traveled a haul road. Dust sampling was performed for the respirable, thoracic (or PM ),
10
and total fractions of dust. This information was used to determine the effects of haul trucks on
the surrounding areas and the resulting data were then compared to results from the ISC3 and the
Dynamic Component ISC3 programs.
4.1 Site Selection
The dust study was conducted at two surface mine sites. One is a stone quarry located in
Virginia that sells crushed limestone to the surrounding community. The other site is a coal
mine in Pennsylvania. This site is an underground coal mine with a coal preparation plant
located on site. Haul trucks are used to remove waste or reject material from the coal prep plant
to a waste pile.
Criteria for the selection of the testing location within the site include:
1. The haul road must be long, straight, and isolated from other operations of the
mine site.
2. The topography along this length of haul road should not have any major or
significant topographical features. Flat topography is preferable to rolling
topography, as it is easier to model because of fewer turbulent wind disturbances
(Schnelle and Dey, 2000).
3. The vegetation along the haul road should also not be significant. Grassland
would be acceptable. However, a haul road that is heavily forested on both sides
would not be acceptable.
4. The haul road should be constructed of material that originates from the mine site.
Roads constructed with aggregates brought in from other locations would be
acceptable.
5. The length of haul road should be untreated. No treatments of CaCl or MgCl
should have been completed for dust control (Wolf, 2001).
112
|
Virginia Tech
|
The majority of the criteria were met when site observations were completed; however,
no site is perfect. The stone quarry has the possibility of being cross-contaminated by dust from
other operations, as there is a stone processing plant and several stockpiles located near the
section of haul road used in the study. During an initial site visit, observations of the stone
processing plant were made. There were no noticeable dust plumes emanating from the plant.
During the actual study, however, the stone processing plant and surrounding stockpiles did
create visible dust plumes. The wind direction was also favorable, as it negated the effects of the
small uphill grade. The height of the surrounding stockpiles helped to keep the majority of other
fugitive dust sources, other than the stone processing plant, from contaminating the
measurements of the study. The stone quarry had a mix of haul trucks, ranging from 50 ton
trucks to OTR trucks, which were the majority of the traffic using this road. There were also
measurements made of other equipment using the road. This site had a high volume of traffic
with trucks arriving every 3 - 10 minutes.
The coal mine site had a better layout. All of the conditions were met, except that the
topography of the study area included a slight uphill grade. A processing plant was located on
the site, but it was several miles from the testing location. Therefore, the effects from the
processing plant would be less than at this location than the stone quarry location. This study
measured dust from 50-ton Caterpillar haul trucks, 40-ton Payhauler trucks, and 60-ton Euclid
trucks. The volume of traffic at the site was much lower than that of the stone quarry study with
trucks arriving every 10 - 20 minutes.
4.2 Selection of PM Sampling Layout
10
The goal of this study was to record measurements of dust concentrations generated from
haul trucks at several locations along the haul road. This data would be used to determine the
particle size distribution of airborne dust generated by the haul trucks, determine the decay of the
airborne dust concentrations generated by the haul trucks, and to determine the accuracy of the
new Dynamic Component Program to actual conditions. The desired results influenced the
determination of the locations of the sampling points. Site-specific factors also influenced the
locations of the sampling points, as the area available for sampling was limited in both cases.
There were several options available as shown in the following figures.
113
|
Virginia Tech
|
Figure 4.1 shows the original proposed sampling layout. This layout was proposed
because it was the layout used to conduct the preliminary Dynamic Component ISC3 model test
calculations. Because wind is the mechanism that moves dust particles, wind direction is a
dominant factor in dust propagation. Therefore, dust concentrations will be higher at the
downwind sampling stations than at the upwind sampling stations. This meant that the majority
of the sampling stations were placed down wind of the haul road, and that not all of the upwind
stations were needed. Therefore, a sampling layout shown in Figure 4.2 was proposed. It was
assumed that only one sampling station would be required upwind of the haul road to determine
the ambient PM concentrations in the air, since the majority of the PM concentrations from
10 10
the haul truck would travel downwind of the haul road following the wind direction.
It was thought that the sampling layouts should be placed parallel to the haul road in
order to characterize the highs and lows of the PM concentrations along the haul road as being
10
representative of the emissions from the haul trucks. However, the Dynamic Component
Program only minimally changed the equation used to determine the decay of the airborne PM .
10
The program(cid:146)s major change was the methodology of applying the equation to the source. This
led to a sampling layout, shown in Figure 4.3, as being a possible layout to be used during the
field study.
A preliminary field trip was made to the stone quarry to inspect the layout of the site and
to test some of the dust sampling equipment to determine the magnitude of respirable dust
concentrations that could be expected from the haul trucks. A preliminary layout with two
sampling stations, shown in Figure 4.4, was used to test an MIE personal data RAM connected to
a 10 mm Dorr-Oliver cyclone to measure instantaneous respirable dust, and a Cascade Impactor
at each receptor location. The data from this preliminary study revealed that the instantaneous
data display the high and low spikes in the respirable dust concentration from the haul road as
the haul truck travels through the area. Figure 4.5 shows the spikes of the instantaneous
respirable dust data. The results from the preliminary study basically eliminated the need for the
sampling layout parallel to the haul road, as shown in Figure 4.3, since one sampling location
can show that haul road emissions from haul trucks are not constant but vary over time.
114
|
Virginia Tech
|
The selected layout, shown in Figure 4.6, was a compromise between the layouts shown
in Figures 4.2 and Figure 4.3. Prior research conducted on overburden drilling operations at
surface coal mines, showed that at distances of 30 meters or more the amount of respirable dust
in the air was minimal. Most of the decay of the concentrations of airborne respirable dust
occurred within the range of 10 - 30 meters (Page and Maksimovic, 1987). If the respirable dust
decays rapidly, as in the drilling study, then the PM fraction of dust should decay as rapidly.
10
Prior research has shown that the material density not particle size affects deposition of dust
therefore, the sampling locations that are further away from the haul road as shown in Figure 4.2
should be unnecessary.
Two parallel lines perpendicular to the haul road and 30 meters apart from each other
were selected. The three sampling stations in each line perpendicular to the haul road are set 15
meters away from the next sampling station, resulting in a total distance of 30 meters away from
the haul road. This distance was chosen because of the previously mentioned study, and the
parallel lines were chosen because it was thought that this layout would allow for better
determination of the decay of dust from the haul truck in differing wind directions. A particular
example would be when the wind direction was close to being parallel to the haul road direction.
In this case only the first sampling stations that are parallel along the haul road may collect data,
but the second station in the second perpendicular sampling line may collect data missed by the
second sampling station in the first perpendicular sampling line. Therefore, if the wind direction
was such that the airborne dust bypassed the first perpendicular line, resulting in no data, it was
thought that the second perpendicular line would be located in a position to be able to collect
some data.
119
|
Virginia Tech
|
4.3 Selection of Sampling Equipment
The ability to obtain instantaneous dust concentration data to validate the dynamic model
was important. The instantaneous dust concentration data can be used to characterize the dust
emissions from the haul trucks. This will be important in any future considerations of modeling
dust dispersion from these sources. Normally time-weighted-average concentrations are
obtained when dust concentration measurements are conducted. Time-weighted-averages
smooth the concentrations over a specific time period. During this time period, the instantaneous
dust concentrations may vary a significant amount, as can be seen in Figure 4.5. The variance of
dust concentrations may contain a pattern that may be specific to a particular source. Identifying
these patterns is important in the characterization of the dust emissions from the source because
modeling attempts to reproduce these patterns. When measuring or recording time-weighted-
average dust concentrations, this variance is eliminated. It is impossible for time-weighted-
average dust concentrations to derive the instantaneous dust concentrations. However, the
instantaneous dust concentrations can be used to derive the time-weighted-average dust
concentrations (Boubel, et. al., 1994).
The equipment normally used to measure PM from facilities is the high-volume
10
sampler. Examples of such samplers are the GMW PM High Volume Sampler built by
10
Andersen Instruments, Inc.; the Model HVP-2000 and HVP-3000 built by Hi-Q Environmental
Products Co.; and the PM Critical Flow High-Volume sampler built by Wedding and
10
Associates, Inc. These samplers use a high flow rate of approximately 1.1 m3/minute to conduct
ambient air sampling of PM, and they generally meet the requirements of the U. S. EPA
reference method for sampling ambient PM in the atmosphere (Rubow, 1995). The
10
disadvantage to this equipment is that it is very expensive, costing approximately $5,000 -
$10,000.00 per sampler, and the equipment does not have the ability to record instantaneous
PM concentrations.
10
The Mini-Vol built by Airmetrics, Inc. is a more affordable PM sampler costing
10
approximately $2,000.00 per sampler, but it does not meet the U. S. EPA reference method for
sampling ambient PM in the atmosphere, and it also is not able to record instantaneous PM
10 10
concentrations. MIE, Inc. makes a personal data RAM that has the ability to record
122
|
Virginia Tech
|
instantaneous dust concentrations. This equipment has a particle size range of 0.1 (cid:150) 10 (cid:181)m.
However, it is best suited for measuring respirable dust (MIE, 2000). The method for
determining thoracic and total instantaneous concentrations was to calculate ratios from the
gravimetric time-weighted-average measurements. These ratios were applied to each respirable
instantaneous data point. An assumption must be made in order to project the thoracic and total
instantaneous concentrations. This assumption is that the total and thoracic instantaneous
concentrations will be consistent with the instantaneous respirable concentrations and that the
total and thoracic instantaneous concentrations will follow the general trend of the buildup and
decay of these respirable concentrations.
4.3.1 Dust Sampling Equipment Required
The equipment used in this study consisted of weather monitoring equipment, and dust
sampling equipment. The weather monitoring equipment consisted of a barometer for
determining atmospheric pressure, a sling pyschrometer for determining relative humidity, and a
data-recording weather station, shown in Figure 4.7 that records wind speed and wind direction
at 30-second time intervals. This equipment will be used to gather the weather data that are vital
for conducting the modeling process. The dust sampling equipment will consist of personal data
RAM(cid:146)s and MSA Escort ELF personal sampling pumps. Both gravimetric and instantaneous
data will be measured in order to obtain a particle size distribution from the haul trucks. The
following is a list of the equipment quantities that were used in recording the respirable, thoracic,
and total dust concentrations:
7 MIE Personal data RAM model pDR-1000 samplers with 10mm Dorr-Oliver
cyclones, (PDR).
7 MSA Escort ELF personal dust sampling pumps with 10mm Dorr-Oliver
cyclones, (respirable sampler).
7 MSA Escort ELF personal dust sampling pumps with BGI GK2.69 cyclones,
(thoracic sampler).
7 MSA Escort ELF personal dust sampling pumps without any cyclones, (total
sampler).
3 Cascade Impactors, for obtaining particle size distribution of dust at certain
locations.
124
|
Virginia Tech
|
The MSA Escort ELF personal dust sampling pumps were used to collect dust samples on 37
mm filters. This would allow for the calculation of time-weighted-average dust concentrations in
mg/m3. Since the sampling layout in Figure 4.6 contained seven sampling stations, seven of each
type of dust sampler was used. One type of sampler was placed at each station in the layout.
4.3.2 Description of Dust Sampling Equipment
The MIE Personal Data RAM model pDR-1000 sampler (PDR), fitted with the 10mm
Dorr-Oliver cyclone shown in Figure 4.8, was used with a personal dust sampling pump to
obtain the instantaneous respirable dust measurements in mg/m3. The sampling airflow rate for
the 10mm Dorr-Oliver cyclone was set for 1.7 liters/minute to allow for measuring respirable
dust according to ACGIH(cid:146)s respirable dust standard (Bartley, et. al., 1994). This sampler
collected respirable dust on a 37 mm filter attached to the cyclone, which allowed calibration of
the instantaneous data. The PDR contains an auto-logger; therefore, the dust concentration
readings were recorded continuously. The auto-logger was set to record measurements every 2
seconds. This allowed for the proper presentation of instantaneous respirable dust concentration
data. Any longer time interval might have missed the dust concentration peaks caused by the
haul trucks.
Seven Escort ELF sampling pumps were fitted with the 10mm Dorr-Oliver cyclones in
order to measure the respirable dust and to double check the respirable fraction measured by the
PDRs. An example of the respirable sampler is shown in Figure 4.9. The sampling airflow rates
for the respirable sampler were also set for 1.7 liters/minute.
In order to obtain data for the PM size fraction, an MSA personal dust sampling pump
10
connected to BGI GK2.69 cyclone with an attached 37 mm filter, shown in Figure 4.10, was
used to obtain the time-weighted average for the thoracic size fraction. The thoracic size fraction
is not exactly the same as the PM size fraction as explained in Chapter 2. The thoracic size
10
fraction will contain some larger particles because the curve showing the mass fraction of
material versus particle size is not as steep for the thoracic as it is for the PM curve (refer to
10
Figure 2.8). As a result, the sampled thoracic concentrations should represent a more
conservative measurement for PM . Again, seven thoracic samplers were used in the study, one
10
located at each sampling station. The sampling airflow rates for the thoracic samplers were set
for 1.6 liters/minute in order to collect the thoracic fraction of dust (Maynard, 1999).
129
|
Virginia Tech
|
1995). This information was then used to calculate the average speed of each piece of equipment
passing the dust sampling equipment.
Sampling of loose material on the haul road was completed to obtain material density, silt
content, and moisture content (U.S. EPA, Phase I, December 1995). One sample was taken per
day and the procedure for sampling the loose material from the haul road followed the methods
listed in appendix C of the U.S. EPA(cid:146)s AP-42 (U.S. EPA, AP-42).
The haul road sample taken during the field study was analyzed for moisture content, silt
content, and density. The samples were sent to a laboratory where the analysis according to
ASTM (American Society for Testing and Materials) standards was conducted. The moisture
content was determined using the D-2216 ASTM standard titled (cid:147)Standard Test for Laboratory
Determination of Water (Moisture) Content of Soil and Rock by Mass(cid:148) (ASTM, Vol. 04.08,
Moisture, 2001).
Silt is defined as the amount of material < 75 :m (ASTM, Vol. 04.08, Classification,
2001). The method of determining the silt content followed the C-117 ASTM standard titled
(cid:147)Standard Test Method for Materials Finer than 75 :m (No. 200) Sieve in Mineral Aggregates
by Washing(cid:148) (ASTM, Vol. 04.02, Sieve, 2001).
Density of an in-place material can be determined rather easily using ASTM standard D-
1556 titled (cid:147)Standard Test Method for Density and Unit Weight of Soil in Place by the Sand
Cone Method.(cid:148) The samples obtained were not (cid:147)in-place(cid:148) samples and required the
determination of material density using a different method. The density of this material was
determined using ASTM standard D-854 titled (cid:147)Standard Test Method for Specific Gravity of
Soil Solids by Water Pycnometer(cid:148) (ASTM, Vol. 04.08, Specific Gravity, 2001).
4.5 Field Study Procedure
The dust sampling at a site lasted three days during each field study. The sampling
period for each day was approximately 6 - 7 hours. The field study was conducted according to
the following methodology:
1. Prior to conducting the field study, all filters used in the study for the MSA
respirable, MSA thoracic, MSA total, MIE Personal Data RAM, and the Cascade
Impactors were prepared and weighed.
132
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.