According to Eq. (37), one can determine the value of A_t if A_b is known. One can calculate the value of A_b from the geometrical relationships between bubbles, lamellar films, and Plateau borders as follows [33, 34],
$A_b = 0.4\,(0.892\,d_{2,b})^2\,\varepsilon_b'$   (41)
where d_2,b is the bubble size at the base of a foam (or froth), which may be considered the same as the bubble size in the pulp phase, and ε'_b is the liquid fraction at the base of the foam under consideration. One can determine d_2,b using a bubble generation model [22] and obtain ε'_b using a drift-flux model [35].
In calculating the bubble size ratio using Eq. (40), it is necessary to know the value of A_cr. In this study, the values of A_cr were calculated from those of H_cr based on the geometric relation (Figure 2.3) between A_cr and H_cr [9], as follows,
$A_{cr} = \left(\sqrt{3} - \dfrac{\pi}{2}\right)R_{pb}^2 + 3\,R_{pb}\,H_{cr}$   (42)
where R_pb is the radius of curvature of the Plateau border (PB).
Figure 2.3: Plateau border area (A) in relation to the critical lamella film thickness (H_cr), bubble size (R_2), and Plateau border radius (R_pb) in a dry foam.
2.2.2 Bubble Coarsening Froth Model
In this section, a froth model accounting for bubble coalescence is presented, which is also developed from first principles by Park and Yoon [36]. The model expresses the bubble size ratio as a function of particle size (d_1), particle hydrophobicity (contact angle), froth height (h_f), aeration rate (V_g) and surface tension (γ).
Bubble coalescence occurs in the froth phase when the thin liquid film (TLF) between two air bubbles breaks. The rate of film thinning can be analyzed using the Reynolds lubrication equation [37],
$-\dfrac{dH}{dt_d} = \dfrac{2H^3 p}{3\mu R_{film}^2}$   (43)
where H is the TLF thickness, t_d is the drainage time, μ is the dynamic viscosity, R_film is the film radius, and p is the driving force for film thinning. The driving force p is given by
$p = p_c + \Pi$   (44)

which shows that the driving force is the sum of the capillary pressure (p_c) and the disjoining pressure (Π).
In the initial stage of film thinning, p_c governs the drainage rate of the TLF and can be calculated using
$p_c = \dfrac{2\gamma}{r_2}$   (45)
where γ is the surface tension of water and r_2 is the bubble radius.
However, the surface forces, i.e., the disjoining pressure (Π) between the air/water interfaces, begin to have a major effect on the thinning rate once the TLF thickness falls below about 200 nm. One can use the extended DLVO theory [38] to determine Π,
$\Pi = \Pi_{el} + \Pi_{vw} + \Pi_{hp} = 64\,C_{el}R'T\tanh^2\!\left(\dfrac{e\psi_s}{4k'T}\right)\exp(-\kappa H) - \dfrac{A_{232}}{6\pi H^3} - \dfrac{K_{232}}{6\pi H^3}$   (46)
where Π_el is the disjoining pressure due to the electrostatic force, Π_vw the disjoining pressure due to the van der Waals dispersion force, Π_hp the disjoining pressure due to the hydrophobic force, C_el the electrolyte concentration, R' the gas constant, T the absolute temperature, e the electronic charge (1.6 × 10⁻¹⁹ C), ψ_s the surface potential at the air/water interfaces, k' the Boltzmann constant, κ the reciprocal of the Debye length, A_232 the Hamaker constant between two bubbles in a medium, and K_232 the hydrophobic force constant.
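The disjoining pressure isotherm of Eq. (46) is straightforward to evaluate numerically. The sketch below assumes the reconstructed signs given above (repulsive electrostatic term; attractive van der Waals and hydrophobic terms for two bubbles in water); all parameter values are illustrative.

import math

def disjoining_pressure(H, C_el=1.0, psi_s=-0.03, kappa=1.0e8,
                        A232=3.7e-20, K232=1e-18, T=298.0):
    """Extended DLVO disjoining pressure of Eq. (46), in Pa.
    H in m, C_el in mol/m^3, psi_s in V, kappa in 1/m; A232, K232 in J."""
    R = 8.314        # gas constant, J/(mol K)
    k = 1.381e-23    # Boltzmann constant, J/K
    e = 1.602e-19    # electronic charge, C
    pi_el = 64.0 * C_el * R * T * math.tanh(e * psi_s / (4 * k * T))**2 \
            * math.exp(-kappa * H)
    pi_vw = -A232 / (6 * math.pi * H**3)
    pi_hp = -K232 / (6 * math.pi * H**3)
    return pi_el + pi_vw + pi_hp

for H_nm in (200, 100, 50):
    print(H_nm, "nm ->", disjoining_pressure(H_nm * 1e-9), "Pa")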
When H reaches the critical rupture thickness (H_cr), the TLF ruptures and the bubbles coalesce. Vrij et al. derived a predictive model for H_cr based on capillary wave theory [39-41], in which the median film thickness H_m is obtained as the root of an implicit equation that balances the Reynolds drainage rate against the growth rate of the fastest-growing capillary wave:
$f\!\left(H_m;\ \left[\partial\Pi/\partial H\right]_{H=H_m},\ H_{cr},\ p_c,\ \gamma,\ R_{film}\right) = 0$   (47)
where H_cr = 0.845H_m and H_m is the median thickness of the TLF. The H_cr model shown above is a first-principles model, which was further improved by Park and Yoon [36] to take into consideration the disjoining pressure contributed by the hydrophobic force (Π_hp). Eq. (47) can predict the critical rupture thickness (H_cr) in different solutions. The H_cr values predicted from the model can be used to predict the PB area (A_cr) using Eq. (42), which in turn is used to predict the bubble coarsening, i.e., the ratio (d_2,t/d_2,b) of bubble sizes between the top and bottom of a foam, using Eq. (40).
Since the presence of particles in the froth changes the local curvatures, p_c deviates from its free-film value (2γ/r_2). The presence of particles also changes the disjoining pressure (Π) and hence the driving force (p). The detailed calculation of the new driving force p in the presence of particles can be found in Park's dissertation [36].
Using the Reynolds equation, one can numerically generate a plot of film thickness (H) versus drainage time (t_d). Since H_cr is an output of the critical rupture thickness model, one can determine the critical rupture time, t_cr, from the H vs. t_d plot. The bubble size ratio can then be calculated using the following equation [36],
$\dfrac{d_{2,t}}{d_{2,b}} = \exp\left[2\ln\!\left(\dfrac{3}{2}\right)\dfrac{h_f}{V_g\,t_{cr}}\,N_{rupture}\right]$   (48)
where h_f is the froth height, V_g is the superficial gas rate, t_cr is the critical rupture time, and N_rupture is a fitting parameter representing the number of films that rupture on one bubble. In the simulation, N_rupture ranges from 1 to 12 because each bubble is assumed to have a dodecahedral structure (12 faces).
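The procedure just described can be sketched numerically: thin the film down to H_cr to obtain the critical rupture time t_cr, then apply Eq. (48). The sketch below assumes the reconstructed forms of Eqs. (43), (45) and (48) above and, for simplicity, holds the driving pressure constant at p ≈ p_c (the full model evaluates p = p_c + Π at every step); all numerical inputs are illustrative.

import math

def drainage_time(H0, H_cr, p, mu, R_film):
    """Time for a TLF to thin from H0 to H_cr at constant driving pressure p,
    obtained by integrating Eq. (43):
    t = (3*mu*R_film**2 / (4*p)) * (1/H_cr**2 - 1/H0**2)."""
    return 3 * mu * R_film**2 / (4 * p) * (1 / H_cr**2 - 1 / H0**2)

def bubble_size_ratio(h_f, V_g, t_cr, N_rupture):
    """Top-to-bottom bubble size ratio, Eq. (48) as reconstructed above."""
    return math.exp(2 * math.log(3 / 2) * h_f / (V_g * t_cr) * N_rupture)

p_c = 2 * 0.072 / 0.5e-3          # Eq. (45): gamma = 72 mN/m, r2 = 0.5 mm
t_cr = drainage_time(H0=500e-9, H_cr=50e-9, p=p_c, mu=1e-3, R_film=200e-6)
print(t_cr)                                                             # ~41 s
print(bubble_size_ratio(h_f=0.07, V_g=0.015, t_cr=t_cr, N_rupture=8))   # ~2.1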
Figure 2.4 shows the effect of particle hydrophobicity on froth stability. When the particle contact angle is below 70°, the bubble size ratio becomes smaller as the particle hydrophobicity increases, meaning that froth stability increases with particle hydrophobicity. Further increases in particle hydrophobicity, however, destroy the froth catastrophically. As shown, the 85° contact angle gives the largest bubble size ratio in this figure, i.e., the most unstable froth among all the cases.
Figure 2.4: The effect of particle hydrophobicity on the bubble size ratio (d_2,t/d_2,b). Park, S., Modeling Bubble Coarsening in Froth Phase from First Principles. 2015, Virginia Tech. Used under fair use, 2015.
Figure 2.5 shows the effect of particle size on froth stability. As shown, the bubble size ratio increases significantly as the particle size coarsens from 11 μm to 71 μm, indicating that froth stability decreases with increasing particle size. Both simulation and experimental results suggest that fine particles stabilize the froth more effectively than coarse particles [42].
Figure 2.6 shows the simulation results for the bubble size ratio (d_2,t/d_2,b) generated from the galena flotation tests conducted by Welsby et al. The operating parameters can be found in the original thesis [6]. In this case, the adjustable parameter N_rupture is 8. As shown, the bubble size ratio in the foam is larger than that in the froth. Increasing the particle size leads to an increase in the bubble size ratio, again indicating that fine particles stabilize the froth more effectively than coarse particles. Furthermore, as the contact angle is increased from 51° to 70° at a given particle size, the bubble size ratio in Figure 2.6 decreases, which means that increasing the contact angle of the particles in the froth may benefit froth stability as long as the contact angle remains below 70°.
2.3 Behavior of Composite Particles
2.3.1 Predicting contact angles
2.3 Behavior of Composite Particles
2.3.1 Predicting contact angles
An empirical liberation model has been developed to evaluate the contact angle of mineral particles from mineral surface liberation data. At present, a weighted geometric mean equation is applied. The equation below shows how the contact angle of a composite particle is calculated,
$\theta = \left(\prod_{i=1}^{n} \theta_i^{\,a_i b_i}\right)^{1/\sum_{i=1}^{n} a_i} = \exp\left(\dfrac{\sum_{i=1}^{n} a_i b_i \ln\theta_i}{\sum_{i=1}^{n} a_i}\right)$   (49)
where n is the number of mineral types in the composite, θ_i is the contact angle of mineral i, a_i is the fractional surface liberation of mineral i, b_i is a fitting parameter that adjusts the weight of mineral i, and θ is the contact angle of the composite particle. For a 2-component particle, Eq. (49) simplifies to:
$\theta = \exp\left(a_1 b_1 \ln\theta_1 + a_2 b_2 \ln\theta_2\right)$   (50)
Below is an example of the contact angle calculation for a 2-component particle whose surface is composed of 60% galena and 40% silica (gangue), as illustrated in Figure 2.7.
Figure 2.7: An example of contact angle calculation for a composite particle (60% galena, 40% gangue).
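A short numerical sketch of Eq. (50) for this example follows, assuming the reconstructed form of the equation and borrowing the values used later in Chapter 3 (θ = 70° for galena, θ = 5° for gangue, b_1 = 0.957, b_2 = 1.973); the resulting composite angle is purely illustrative.

import math

def composite_contact_angle(a1, theta1, b1, a2, theta2, b2):
    """Contact angle of a 2-component particle, Eq. (50):
    theta = exp(a1*b1*ln(theta1) + a2*b2*ln(theta2)), angles in degrees."""
    return math.exp(a1 * b1 * math.log(theta1) + a2 * b2 * math.log(theta2))

# 60% galena (70 deg), 40% gangue (5 deg), with b1 = 0.957 and b2 = 1.973
theta = composite_contact_angle(0.6, 70.0, 0.957, 0.4, 5.0, 1.973)
print(round(theta, 1), "degrees")  # about 41 degrees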
Chapter 3: MODEL VALIDATION
3.1 Pilot-scale Flotation Test
The pilot-scale galena flotation tests were conducted by Welsby et al. [43] to show the effects of particle size and surface liberation on the size-by-class flotation recovery (R_ij) and galena flotation rate constants (k_ij), where subscript i denotes the size class and subscript j the liberation class. While the specific details of the experiments can be found in the original thesis [6], the test parameters relevant to the simulation are listed in Table 3.1. Size analysis and liberation analysis were conducted on the feed samples using cyclosizing and a Mineral Liberation Analyzer (MLA), respectively. The size-by-class mass distribution matrix of the feed is shown in Table 3.2. The contact angles for galena and gangue are assumed to be 70° and 5°, respectively. The fitting parameters for the corrected contact angle of composite particles are b_1 = 0.957 and b_2 = 1.973. The adjustable parameter C in the bubble coarsening model equals 22.71 in the simulation.
Table 3.1 Pilot-scale flotation test parameters. Welsby, S., S. Vianna, and J.-P. Franzidis,
Assigning physical significance to floatability components. International Journal
of Mineral Processing, 2010. 97(1): p. 59-67. Used under fair use, 2015.
Variable Value
Frother Type MIBC
Frother Dosage (mg/kg) 25
Residence Time (min) 4.68
Froth Height (cm) 7
Impeller Speed (rpm) 1200
Air Flow Rate (L/min) 110
Cell Volume (L) 40
Cell Area (cm²) 35x35
Solids in the slurry (wt.%) 44.31
Pulp Density (g/mL) 1.48
3.1.1 Size-by-liberation Simulation Results
Figure 3.1 (A) shows the effect of particle size and surface liberation on the overall rate constant (k_ij), as determined from the flotation experiments. Figure 3.1 (B) shows the simulation results from the flotation model. As shown, at a given liberation class, the flotation rate constants first increase with increasing mean particle size, reach a maximum, and then decrease. For particles in a given size class, the higher the surface liberation class, the larger the overall flotation rate constant. Furthermore, the fully-liberated particles give the maximum rate constant at each particle size.
Figure 3.2 shows the effect of particle size and surface liberation on the galena recovery (R_ij). One can convert rate constants into recoveries for a continuous flotation cell under perfectly mixed conditions by using Eq. (34). Figure 3.2 (A) is generated from the flotation tests, while Figure 3.2 (B) is the output of the flotation model. It can be seen that at a given particle size the overall galena recovery increases with increasing degree of surface liberation, and the fully-liberated particles have the highest recoveries. The optimum particle size range for galena flotation lies between 20 μm and 40 μm, since the curves representing the +28/-38 and +19/-28 size classes are higher than any other curve in the figure. The model prediction is excellent for medium-size particles. For coarse and fine particles, however, the difference between the test results and the model results is large; the model overestimates the recovery of coarse and fine particles.
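The conversion referred to above can be sketched as follows, assuming Eq. (34) is the standard relation for a single perfectly mixed continuous cell, R = kτ/(1 + kτ), with τ the residence time (4.68 min in Table 3.1); the thesis' own Eq. (34) may include additional froth-recovery terms.

def recovery_from_rate_constant(k, tau):
    """Recovery in a single perfectly mixed continuous cell: R = k*tau/(1 + k*tau)."""
    return k * tau / (1 + k * tau)

def rate_constant_from_recovery(R, tau):
    """Inverse relation: k = R / (tau * (1 - R))."""
    return R / (tau * (1 - R))

tau = 4.68  # residence time, min (Table 3.1)
r = recovery_from_rate_constant(k=1.0, tau=tau)
print(r)                                      # ~0.82
print(rate_constant_from_recovery(R=r, tau=tau))  # recovers k = 1.0 1/min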
3.1.2 Size-by-size Simulation Results
By combining the size-by-liberation feed information (m_ij) with the recoveries (R_ij), one can obtain the size-by-size overall flotation recoveries (R) and size-by-size overall rate constants (k) for the galena particles, which are shown in Figure 3.3. Figure 3.3 (A) shows the experimental results while Figure 3.3 (B) shows the simulation results. The simulated k and R are very similar to the experimental results. As shown, fine and coarse particles have relatively small k and R, and each curve has a single peak, indicating that medium-size particles have the highest k and R values. In this case, the optimum particle size interval for galena flotation is again 20 to 40 μm, consistent with the conclusion drawn from Figure 3.1.
3.1.3 Normalized Rate Constants Simulation
The normalized flotation rate constant, k/k_max, also called the rate constant ratio, is defined as the ratio of the rate constant at a given particle size and liberation class to the rate constant for the fully-liberated particles of the same size. Figure 3.4 (A) shows the relationship between the rate constant ratio (k/k_max) and the surface liberation class. It can be seen that the rate constant ratio as a function of surface liberation is essentially the same for each particle size. A purely empirical equation was therefore developed by Graeme J. Jameson [44] to fit the data points in Figure 3.4 (A), represented by the solid curve in that figure:
$L = a\,x\,e^{\,b x^{c}}$   (51)

where L = k/k_max and x is the fractional liberation (0 ≤ x ≤ 1); the constants have the values a = 0.27, b = 1.30 and c = 10.80.
The results shown in Figure 3.4 (A) are extremely important because they show that the rate constant for a fully-liberated ore depends only on the hydrodynamics and surface chemistry. The rate constant of a particle of a given liberation class can first be determined from that of the fully-liberated particle of the same size; a factor, k/k_max, representing the effect of liberation, can then be applied.
Figure 3.4 (B) shows the k/k_max simulation results from the flotation model. The solid curve in this graph is the same as the curve in Figure 3.4 (A), so the simulation results can be conveniently compared with the experimental results. At a given surface liberation, the simulated data points lie closer to each other than the experimental data points do, especially for the more highly liberated particles. Overall, however, the simulation results are similar to the experimental results.
The Jameson equation, however, has some flaws. First, it includes three fitting parameters, a, b and c, which makes the model complex. Second, when the mineral is fully liberated, the rate constant ratio given by the equation is not exactly 1, which is inconsistent with the definition of the rate constant ratio. A new empirical equation has therefore been developed by statistical analysis:

$L = \dfrac{x}{a - b\,x^{6}}$   (52)
where a = 4 and b = 3. The new equation uses only two fitting parameters. As shown in Figure 3.5, the two curves based on Eqs. (51) and (52) are extremely similar and almost overlap each other. The new, simpler equation therefore predicts the rate constant ratio from surface liberation data as well as the Jameson equation does [44].
Eqs. (51) and (52) are significant findings, although both are empirical. With these two equations, one may predict flotation performance by testing only single-size but different-liberation ore samples, which is much simpler than testing all size fractions. The detailed model prediction procedure is as follows:
Firstly, one can separate a single-size sample from the flotation feed, e.g., +28/-38 μm, and use QEMSCAN or MLA technology to determine the surface liberation. Next, one can conduct flotation tests using the all-size sample as the feed, analyze the test results, and calculate the flotation recoveries and rate constants for particles of different surface liberations in the +28/-38 μm size class. One can then use the flotation model to simulate k_ij for the differently liberated particles by adjusting the fitting parameters in the flotation model, e.g., b_1, b_2 and C. Meanwhile, since the rate constants have been determined, k/k_max can be simulated by applying Eq. (52). In Figure 3.6, one can clearly see that the model results and the experimental results are very similar: the curve based on Eq. (52) passes through almost every experimental point. One can thus predict k/k_max for particles of different liberation but the same size. As discussed before, it is reasonable to assume that k/k_max depends only on the surface liberation of the particles and is independent of particle size. Since the rate constants of fully-liberated particles can be simulated from the flotation model, one may calculate the rate constant (k_ij) for any particle by multiplying k/k_max at the given liberation class by the rate constant for the fully-liberated particles of the same particle size.
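The last step, reconstructing the full size-by-liberation rate constant matrix from the fully-liberated rate constants and the liberation factor, can be sketched as follows; k_max_by_size is a hypothetical array of simulated fully-liberated rate constants, and L_new() is the Eq. (52) sketch above.

# Hypothetical fully-liberated rate constants (1/min) for four size classes,
# and the liberation classes (fractional surface liberation) to evaluate.
k_max_by_size = [0.5, 1.8, 2.4, 0.9]
liberation_classes = [0.2, 0.4, 0.6, 0.8, 1.0]

# k_ij = (k/k_max)(x_j) * k_max(i), using Eq. (52) for the liberation factor.
k_ij = [[L_new(x) * k_max for x in liberation_classes]
        for k_max in k_max_by_size]

for row in k_ij:
    print([round(v, 3) for v in row])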
Figure 3.5: Comparison between Jameson's equation and Eq. (52) (the new empirical equation), showing that both equations have almost the same ability to predict k/k_max as a function of surface liberation (%).
Table 3.3 illustrates the flotation model prediction procedure directly. The green data in the table are the results from the flotation tests. The red data are simulated by the flotation model and are the rate constants for fully-liberated particles of various sizes. The blue data are the k_ij values calculated by multiplying the red values by the respective k/k_max for the different particle sizes. Figure 3.7 compares the simulation based on a single-size fraction feed with the simulation based on the all-size fraction feed. The solid curve represents Eq. (52) and is the same in both figures. Because only a single size fraction is used in that simulation, the points in Figure 3.7 (A) lie much closer to the solid curve than the points in Figure 3.7 (B). Overall, however, the difference between the two figures is small, which makes it feasible to use a single size fraction to predict flotation performance. This is a new, efficient and economical way to simulate the flotation process: a great amount of time and cost can be saved, since only single-size-class particles need to be studied in the surface liberation analysis. Meanwhile, the accuracy and precision of the new simulation procedure can be compared with those of the old simulation process.
Figure 3.6: Comparison between k/k_max from single-size class particles (+28/-38 μm) and k/k_max from the model, plotted against surface liberation (%).
The porphyry flotation test was conducted in a 4-liter flotation cell with a 1.5 kg sample. The frother used in the test was Dow 250, a frother commonly used in industry, at a dosage of 60 g/ton. The specific energy in the cell was 3.33 kW/m³, the solids content of the pulp was 34 wt.%, the froth height was 5 mm, the flotation time was 16 min, and the superficial gas rate was 0.28 cm/s.
Although several test parameters were provided by Outotec, the input data for the simulation are still incomplete. Table 3.5 therefore lists a series of assumed values for parameters used in the simulation, e.g., contact angle, bubble ζ-potential, and particle ζ-potential.
3.2.1 Flotation Test Results
Figure 3.8 shows the size-by-size flotation recoveries of chalcopyrite, pyrite and other minerals (gangue). As shown, the recoveries of chalcopyrite and pyrite are both much higher than the recovery of gangue, from which one can conclude that flotation is an effective way to separate gangue from the valuable minerals. For fine particles, the selectivity of flotation is not as good as for coarse particles. One possible reason is that in the froth phase a large amount of fine particles can report to the concentrate by entrainment, a process that does not distinguish hydrophobic minerals from hydrophilic gangue. For coarse particles, the recovery of chalcopyrite decreases faster than the recovery of pyrite.
Figure 3.8: Size-by-size recovery (R) of chalcopyrite, pyrite and other minerals from the laboratory-scale flotation tests.
By applying Eq. (34), one can convert the recoveries of the different minerals into their overall rate constants, as shown in Figure 3.9. The rate constants of gangue are almost 0 at all particle sizes, while the rate constants of chalcopyrite and pyrite are much larger than 0 and reach their highest values when the particle size is between 20 μm and 60 μm.
3.2.2 Size-by-liberation Simulation Results
After entering all necessary parameters into the simulator, one can simulate the size-by-liberation flotation results. The simulated data are shown in Table 3.6. Figure 3.10 presents the simulation results for the overall rate constants (k_ij). At a given particle size, more highly liberated mineral particles have larger rate constants. At a given surface liberation, the rate constant is a function of particle size; each curve has a peak at a particle size of around 30 μm, which represents the optimum particle size for flotation.
Figure 3.11 shows the overall chalcopyrite recovery simulation results obtained by varying particle size and particle surface composition. The results suggest that the recovery of particles smaller than 100 μm in diameter is extremely high. For particles larger than 100 μm, the recovery decreases substantially with increasing particle size, regardless of the particle surface liberation. A likely explanation is that the probability of detachment (P_d) is much larger for coarse particles than for fine particles, so coarse particles detach from bubble surfaces more frequently.
Figure 3.9: Size-by-size overall rate constants (k) of chalcopyrite, pyrite and other minerals, converted from the overall flotation recoveries (R).
The conclusions drawn from Figure 3.10 and Figure 3.11 are consistent with the conclusions from the pilot-scale flotation simulation, which provides strong evidence supporting the validity of the flotation model.
3.2.3 Size-by-size Simulation Results
Since the size-by-liberation rate constants and recoveries have been simulated, one can further simulate the size-by-size recoveries and rate constants of the flotation. Figure 3.12 compares the simulation results with the flotation test results. In the two plots, the red dots represent the test data and the black solid curves represent the simulation results from the model. Figure 3.12 (A) shows that the overall recovery of chalcopyrite first increases to a maximum and then decreases rapidly with increasing particle size. Figure 3.12 (B) shows the same trend for the overall rate constant as a function of particle size. There are some gaps between the experimental points and the simulated curves in both figures; however, the general pattern of the simulated curves fits the experimental results well. That is to say, the flotation model is quite reliable and can make reasonable predictions once the necessary input parameters are entered.
Figure 3.11: Size-by-liberation overall flotation recovery (R_ij) of chalcopyrite from the flotation model, for several surface liberation classes.
3.2.4 Normalized Rate Constant Analysis
Since the size-by-liberation rate constants have already been determined, one can study the rate constant ratio (k/k_max) by the same method as before. As shown in Figure 3.13, at a given surface liberation the rate constant ratios for different-size particles are very similar, as indicated by the points clustering together in the vertical direction. The solid line in the figure represents the rate constant ratio model, Eq. (52); in this specific case, a = 1.30 and b = 0.30. The model fits the test data well, which makes it possible to predict the flotation performance from single-size but different-liberation particles.
It is necessary to note, however, that Figure 3.13 includes only four size classes, whose mean particle sizes are all less than 100 μm. The rate constant ratio of coarse particles does not appear to obey this property, i.e., that the flotation rate constant ratio depends only on particle surface liberation and not on particle size. For coarse particles, the effect of particle size on the rate constant ratio may be too strong to ignore. This finding is quite similar to Jameson's, who excluded particles larger than 106 μm when analyzing the rate constant ratio.
Figure 3.14 (A) and (B) show the simulation results for the size-by-class rate constants from the all-size fraction feed and from the single-size fraction feed, respectively. The overall patterns of the two figures are extremely similar; the main difference between them lies in the rate constants for the coarse particles (>100 μm).
Figure 3.13: Simulation of the normalized rate constant (k/k_max) as a function of surface liberation and particle size (four size classes below 100 μm, with the Eq. (52) model curve).
Chapter 4: SIMULATION
The flotation model discussed and validated in the previous chapters is developed from first principles and considers both the surface chemistry parameters and the hydrodynamic parameters that affect the flotation process. The model can therefore predict the flotation recovery, rate constant and product grade from both physical and chemical conditions. In the present work, several factors that affect flotation performance have been studied, e.g., particle size, surface liberation (contact angle), ζ-potential, energy input, etc.
4.1 Single Cell Flotation
The effects of different parameters, such as particle size, surface liberation, superficial gas rate and ζ-potential, are studied. The flotation feed information is the same as in Table 3.2, i.e., a size-by-class galena mass distribution matrix.
4.1.1 Surface Liberation (Contact Angle)
Figure 4.1 shows the effect of surface liberation and particle size on the overall rate constants of galena flotation. As the surface liberation of galena increases, the contact angle (θ)
Figure 4.1: Effect of particle size and surface liberation on flotation rate constants (k). Input parameters: aeration rate, 1.5 cm/s; energy dissipation rate, 15 kW/m³; frother, 25 mg/L MIBC; residence time, 4.68 min; froth height, 7 cm; particle ζ-potential, -80 mV.
for the composite particles increases. As shown, at a given particle size, a more highly liberated particle has a higher flotation rate constant, which shows that increasing particle hydrophobicity increases the rate of flotation. The fully liberated particles, which have larger contact angles than any other particles, have the largest flotation rate constants in each size class.
Figure 4.2 shows a contour plot of the changes in recovery as functions of particle size and particle surface liberation. At a given particle size, the flotation recovery increases with increasing galena surface area. There is a ridge on the plot, which represents the optimum particle size for maximum flotation recovery; in this typical case, the optimum particle size for flotation is between 20 μm and 30 μm.
Figure 4.2: Effect of particle size and surface liberation on flotation recovery (R). Input parameters: aeration rate, 1.5 cm/s; energy dissipation rate, 15 kW/m³; frother, 25 mg/L MIBC; residence time, 4.68 min; froth height, 7 cm; particle ζ-potential, -80 mV.
Eqs. (20) and (21) show that the hydrophobic force constant for bubble-particle interaction (K_132) increases with increasing particle contact angle (θ), i.e., with increasing particle surface liberation. The hydrophobic force plays an important role in decreasing the energy barrier (E_1), which increases the probability of bubble-particle attachment (P_a) and hence the flotation rate constant (k) and overall recovery (R).
4.1.2 Froth Height
Figure 4.3 shows a contour plot of the changes in recovery as functions of froth height and particle size. As shown, there is a remarkable reduction in the recovery of coarse particles as the froth height increases. A greater froth height means a higher probability of particles detaching from bubbles in the froth phase, which decreases the froth recovery (R_f) and hence the overall recovery. In general, froth height has a more significant effect on the recovery of coarse particles than on that of fine particles.
Figure 4.3: Effect of particle size and froth height on flotation recovery (R). Input parameters: aeration rate, 1.5 cm/s; energy dissipation rate, 15 kW/m³; frother, 25 mg/L MIBC; residence time, 4.68 min; particle ζ-potential, -80 mV; θ = 35°.
4.1.3 Superficial Gas Rate
Figure 4.4 shows a contour plot of recovery as it varies with particle size and superficial gas rate (or aeration rate). At a given particle size, a rise in airflow rate leads to an increase in flotation recovery. This can be explained by the fact that increasing the superficial gas rate decreases the particle residence time in the froth phase (τ_f), so the particles have less opportunity to detach from bubble surfaces in the froth phase. This finding is in agreement with many industrial column flotation results reported by other researchers [45].
4.1.4 Energy Dissipation Rate
Figure 4.5 is a surface plot showing the effect of the mean energy dissipation rate (ε) on the overall flotation rate constant (k). The simulation results are plotted versus particle
size (d_1). In general, at a given particle size, increasing ε increases the flotation rate constant, which can be attributed to the increase in the kinetic energy for bubble-particle attachment. This finding is in agreement with the work of Ahmed and Jameson [46], who showed that a high agitation rate led to an increase in the overall flotation rate constant. Another reason for this behavior is that the bubble size (d_2) decreases with increasing ε, according to the bubble generation model [22]. For this reason, micro-bubbles have been applied in flotation processes to increase the recovery of fine mineral particles.
4.1.5 ζ-Potential
Figure 4.6 shows the flotation recovery as functions of particle size and particle ζ-potential. It is generally acknowledged that in sulfide mineral flotation the ζ-potentials of bubbles and particles are both negative. The plot shows that a decrease in the magnitude of the particle ζ-potential benefits fine particle recovery, owing to a reduction in the electrostatic energy (V_E) and hence a decrease in the energy barrier (E_1) for bubble-particle attachment. This finding is consistent with the earlier work of many investigators, who concluded that the flotation recovery reaches its highest value when the magnitude of the ζ-potential is at its minimum [47-49]. For coarse particles, however, the effect of particle ζ-potential on recovery appears to be quite small, probably because for large particles the beneficial effect of a low particle ζ-potential is overcome by the large probability of detachment (P_d).
Figure 4.6: Effect of particle size and ζ-potential on overall rate constant (k). Input parameters: aeration rate, 1.5 cm/s; energy dissipation rate, 15 kW/m³; frother, 25 mg/L MIBC; residence time, 4.68 min; froth height, 7 cm; θ = 45°.
Circuit arrangement:
Figure 4.7 shows the original circuit used in the Escondida chalcopyrite flotation plant in Antofagasta, Chile. The circuit in the red block is simulated by the flotation model. The feed to the simulated circuit is the cyclone overflow, which reports to a bank of forty flotation cells serving as the rougher flotation circuit. The rougher concentrate is re-ground in the grinding mill, which is operated in closed circuit with a cluster of cyclone classifiers. The cyclone underflow is returned to the ball mill, while the overflow proceeds to a cleaner circuit using flotation columns. The cleaner tails are scavenged by a bank of twenty cells, and the cleaner-scavenger concentrate is combined with the feed to the cleaner circuit.
Figure 4.7: Flotation circuit used at the Escondida chalcopyrite flotation plant in Chile. The circuit in the red block is simulated by the flotation simulator.
Two circuit arrangements are considered for the simulation, both shown in Figure 4.8. Figure 4.8 (A) is a circuit with a re-grinding mill, which is similar to the circuit used at Escondida; the only difference is that the Escondida flowsheet has cyclone classifiers before the re-grinding mill, whereas the simulated flowsheet does not consider the effect of the cyclone classifiers. Figure 4.8 (B) is another simulated circuit, in which there is no re-grinding mill before the cleaner flotation column. Note that the re-grinding mill is simulated using a 'pseudo grinding model' developed by Aaron Noble, which is based on the mass balance of materials into and out of the re-grinding mill.
4.2.1 Effect of Re-grinding Unit
The effect of the re-grinding unit on flotation performance was studied, and the simulation results are presented in a recovery vs. grade plot, Figure 4.9. The dashed curve represents the circuit without the re-grinding unit, while the solid curve represents the circuit with the re-grinding unit. The contact angle for chalcopyrite is 80° in this simulation. From Figure 4.9 it can be deduced that the re-grinding unit benefits the flotation performance, since the solid curve (re-grinding) lies slightly above the dashed curve (no re-grinding). The reason is that re-grinding liberates the mineral particles better and decreases the coarse particle percentage in the feed to the cleaner flotation column.
Figure 4.10 shows the particle size analysis from the simulator at different locations in the circuit. One can see an obvious shift in the median diameter, D_50, in the figure. The feed to the rougher bank has the largest D_50, which is larger than 50 μm. After the rougher bank, the average particle size decreases significantly, since the recovery of large particles in the rougher cells is very low. The re-grinding mill further decreases the D_50 to around 20 μm, which is in the optimum particle size range for froth flotation. The re-grinding unit therefore benefits the overall flotation recovery.
Figure 4.10: Size distribution curves (cumulative % passing) of the feed to the rougher, the feed to the mill, and the mill product in the simulated circuit (θ_CuFeS2 = 80°).
Chapter 5: SUMMARY AND CONCLUSION
5.1 Conclusion
First-principles flotation models can provide a better understanding of each sub-process in flotation. The model developed at Virginia Tech, which was the first such model, can predict the performance of a single flotation unit or of flotation circuits without large numbers of preliminary laboratory flotation tests. The model takes both the hydrodynamic and the chemistry parameters of the flotation process into consideration. The primary findings and contributions presented in this thesis are summarized below.
1. In the present work, the flotation model has been verified using the flotation test results
obtained by other researchers. The model predictions are in good agreement with both the
laboratory-scale and pilot-scale test results, validating the first-principle flotation model
developed at Virginia Tech.
2. A bubble coarsening froth model has been incorporated into the flotation
model/simulator for the first time. The extended model can provide a better understanding of the
effect of bubble coalescence in the froth phase. However, the bubble-coarsening model does not yet include the effects of particle size and particle hydrophobicity.
3. A computer simulator has been developed for a froth model that can predict the effects
of particle size and particle hydrophobicity. The model has been developed recently at Virginia
Tech [36]; however, the model/simulator has not yet been incorporated into the extended
flotation model developed from first principles.
4. Analysis of the size-by-class flotation rate constants reported in the literature shows that the rate constants (k_ij) can be normalized by the maximum flotation rate constant (k_max) obtained with the fully-liberated particles [44]. Thus, a series of k_ij vs. fractional surface liberation (x) plots can be reduced to a single k/k_max vs. x plot, which makes it possible to reduce the number of samples that need to be analyzed for surface liberation using a costly and time-consuming liberation analysis. It has been found in the present work that the flotation rate constants predicted from the first-principles flotation model can also be normalized by the maximum rate constants predicted for fully liberated particles.
5. The number of parameters needed to represent the k/k_max vs. x plots has been reduced from three to two by means of a statistical analysis.
6. A series of parameters that affect flotation recovery and rate constants are studied
using the flotation simulator based on the first principle flotation model. The simulation results
show that the flotation rate constant and recovery are critically dependent on particle size,
surface liberation, particle hydrophobicity (contact angle), froth height, superficial gas rate,
energy dissipation rate, and ζ-potential. In general, flotation rate increases with increasing
contact angle at all particle sizes. A higher froth height can result in a lower recovery but higher
grade. In addition, increases in superficial gas rate and energy dissipation rate have beneficial
impacts on the flotation rate and recovery. The simulation results also suggest that a proper
control of ζ-potentials helps increase the recovery of fine particles.
7. The first-principle flotation model has been used to simulate the performance of a
flotation circuit that is similar to the Escondida copper flotation plant in Chile. In the present
work, the effects of particle hydrophobicity (contact angle) and particle size control by re-
grinding have been studied by simulation. The results show that the re-grinding of rougher-
scavenger concentrate greatly increased the overall copper recovery, which can be attributed to
the increased flotation rate with increasing surface liberation. The simulation results show also
that an increase in contact angle by way of using a stronger collector greatly increased the copper
recovery at the rougher flotation circuit. These results are consistent with the plant practice,
demonstrating the benefits of using a first-principle flotation model/simulator to improve the
performance of the real world flotation plants.
5.2 Recommendations for Future Research
Although the outputs of the flotation model fit the experimental data reasonably well, they may not be sufficient. The model contains some assumptions and simplifications of the flotation process, which can be improved by considering the following suggestions:
1. All of the model predictions made in the present work have been made using essentially a 'foam' model to account for the bubble coarsening effects. For industrial applications, however, it will be necessary to use a froth-phase recovery model that can predict the effects of particle size and particle contact angle. One should therefore develop a more comprehensive model/simulator using the froth model recently developed by Park and Yoon [36] to account for the particle effects on froth-phase recovery.
2. In the present work, the values of the particle and bubble ζ-potentials are entered directly into the simulator by the operator. In practice, these two parameters are difficult to measure. In order to make an easy-to-use flotation simulator, one should incorporate simple built-in subprograms that operators can use to predict the ζ-potentials. Developing such subprograms is feasible because there is a wealth of such information in the literature.
3. Develop a model to evaluate the probability of particle orientation during the bubble-particle collision process in the pulp phase. In the current flotation model, one assumption is that each particle has a uniform surface, so that the particle hydrophobicity, and hence the contact angle, is the same everywhere on the particle surface. In reality, particle surfaces are heterogeneous rather than homogeneous. It would therefore be useful to develop a new probability model to describe the particle orientation when a particle collides with an air bubble. If an air bubble collides with the hydrophobic part of the particle surface, the probability of bubble-particle attachment (P_a) will be high, whereas if the air bubble approaches the hydrophilic part of the particle surface, there is little chance of forming a bubble-particle aggregate in the pulp.
Development of a Multi-Stream Monitoring and
Control System for Dense Medium Cyclones
Coby Braxton Addison
ABSTRACT
Dense medium cyclones (DMCs) have become the workhorse of the coal
preparation industry due to their high efficiency, large capacity, small footprint and low
maintenance requirements. Although the advantages of DMCs make them highly
desirable, size-by-size partitioning data collected from industrial operations suggest that
DMC performance can suffer in response to fluctuations in feed coal quality. In light of
this problem, a multi-stream monitoring system that simultaneously measures the
densities of the feed, overflow and underflow medium around a DMC circuit was
designed, installed and evaluated at an industrial plant site. The data obtained from this
real-time data acquisition system indicated that serious shortcomings exist in the methods
commonly used by industry to monitor and control DMC circuits. This insight, together
with size-by-size partition data obtained from in-plant sampling campaigns, was used to
develop an improved control algorithm that optimizes DMC performance over a wide
range of feed coal types and operating conditions. This document describes the key
features of the multi-stream monitoring system and demonstrates how this approach may
be used to potentially improve DMC performance.
would produce 3.2 million tons of additional clean coal in the U.S. from the same
tonnage of mined coal. At a market price of $50 per ton, the recovered tonnage represents
annual revenues of nearly $156 million for the U.S. coal industry or nearly $660,000 per
year for an average preparation plant.
Dense medium cyclones are frequently installed in banks of two or more parallel
units or in parallel with other separators (such as dense medium vessels) in order to meet
the production requirements of a given plant. Theoretical analyses show that the clean
coal yield from these parallel circuits is maximized when all of the separators are
operated at the same specific gravity cutpoint (Abbott, 1982; Clarkson, 1991; Luttrell et
al., 2000). This optimization principle is valid regardless of the desired quality of the total
clean coal product or the ratios of different coals passed through the circuits.
To illustrate the importance of this optimization concept, consider a 500-tph circuit consisting of two identical DMCs operating in parallel. Both of the DMCs are capable of producing an 8% total ash product when they operate at the same cutpoint of 1.55 SG. The overall yield from these two DMCs is 69.6%. However, the two units can also produce a combined clean coal ash of 8% by operating the first DMC at 1.59 SG (which produces an 8.5% ash) and the second cyclone at 1.51 SG (which produces a 7.5% ash). Although the combined product is still 8% ash, operation at a cutpoint difference of 0.08 SG units reduces the overall yield of the combined circuit from 69.6% to 68.2% (i.e., a 1.4 percentage point reduction). If the cyclones are operated for 6,000 hrs per year, the annual revenue lost due to the cutpoint difference is $2.1 MM (i.e., 1.4% x 500 ton/hr x 6000 hr/yr x $50/ton = $2,100,000). Therefore, it is
important that all dense medium circuits (vessels and DMCs) be operated at the same SG cutpoint to optimize total plant profitability.
The industrial application of cutpoint optimization is relatively straightforward for
dense medium vessels. Vessels tend to operate at a density cutpoint that is predictable
based on the specific gravity (SG) of the feed medium. On the other hand, the segregation
of medium by the centrifugal field within a DMC makes it very difficult to estimate the
true SG cutpoint for cyclones. Typically, the underflow medium from a DMC has a
substantially higher SG than that of the overflow medium due to preferential
classification of the magnetite particles used to create the artificial medium. The
thickening of the medium tends to increase the SG cutpoint for the DMC above that of
the feed medium SG. Because of this phenomenon, the actual cutpoint of the DMC is
about 0.05-0.10 SG units higher than the measured SG of the feed medium. This "offset" between the true and measured density can vary substantially depending on the feed medium density, the extent of cyclone wear, and the characteristics of the feed coal. In some
cases, negative offset values have even been reported from plant studies due to the
utilization of poor grades of magnetite. As a result, the normal practice of on-line
monitoring the feed medium SG using nuclear density gauges cannot be used to
accurately estimate the true cutpoint for DMCs. As discussed previously, this inability to
estimate and maintain the SG cutpoint can result in coal losses that have a tremendous
impact on plant profitability.
1.2 Objectives
The primary objective of this project was to develop an on-line monitoring and
control system to optimize the performance of dense medium cyclone (DMC) circuits.
2.0 LITERATURE REVIEW
2.1 DMC Circuits
There are three major DMC circuits used throughout the coal processing industry.
These are the gravity feed circuit, the wing tank circuit, and the pump feed circuit. In a
gravity feed circuit, the DMCs are located below the pulping column that feeds the
distributor for the cyclones. The pulping column is a vertical mixing pipe for the
circulating medium and feed material to the cyclones. In this type of circuit there is no
need for a pump. Since the pulping column must be an adequate length to provide a
desired feed inlet pressure, there is always a consistent feed pressure to the cyclones. The
specific gravity of the circulating medium can easily be measured prior to entering the
column and without the presence of the feed material, i.e. coal and rock, which provides
an accurate specific gravity measurement for the feed medium.
In a pump feed circuit, the DMCs are located above the sump and pump that feeds
the distributor to the cyclones. Since a pump provides the desired feed inlet pressure, a
pump feed circuit requires less building height than a gravity feed circuit. With this circuit the feed inlet pressure depends on the wear and proper maintenance of the feed pump, as compared to the gravity feed circuit, which relies solely on gravitational force.
Unless medium is combined before being introduced to the pulping column within the
sump, the measurement of the feed medium is in the presence of the raw feed material to
the cyclones. Measuring the specific gravity of the feed medium in the presence of raw
feed material with a nuclear density gauge fails to provide an accurate value since the
specific gravity of the raw feed material will bias the specific gravity measurement.
In a wing tank circuit, the circulating medium is returned to a correct medium
sump before being introduced to a smaller mixing tank along with the raw feed material.
Measuring the medium pumped from the correct medium sump (without the presence of
raw feed material) with a nuclear density gauge provides an accurate method of
measuring the specific gravity of the feed medium. This circuit requires more building
area for the correct medium sump and pump, as compared to the previous two circuits in
which there is only one medium sump and pump.
2.2 DMC Control
The cutpoint is defined as the specific gravity at which a particle has an equal
chance of reporting to the overflow or underflow of the cyclone. Since the medium is
subjected to the centrifugal forces inside the DMC, the specific gravity of the medium
will increase toward the apex of the cyclone. This tendency always makes the specific gravity of the medium in the overflow lower, and that of the medium in the underflow higher, than the feed medium specific gravity; thus the cutpoint of a DMC is always higher than that of the circulating feed medium.
There are various control implementations to monitor the specific gravity of the
There are various control implementations to monitor the specific gravity of the
medium entering a DMC. The most common method for the widely used pump feed
circuit is the placement of a nuclear density gauge on the feed pipe to the DMC (Figure
2). Since the medium has not been subjected to the cyclone’s forces at this point, this
implementation may not accurately provide the cutpoint. Also, since the difference
between the cutpoint and circulating feed medium specific gravity relies heavily on
various parameters (inlet pressure, geometry, fittings, etc.) and the density of the raw feed
medium streams are monitored in order to obtain a ratio of magnetite distribution for
control of the cutpoint (Burgess et al., 1987).
In most scenarios, about two-thirds of the medium that reports to the feed inlet of
the cyclone should report to the overflow of the cyclone. This split can be manually
calculated by collecting samples of the feed, overflow, and underflow medium streams,
and using the formula:
$\beta = \dfrac{SG_u - SG_f}{SG_u - SG_o}$   [1]
where β is the medium split to overflow and SG_f, SG_o and SG_u are the specific gravities of the medium in the feed, overflow and underflow streams, respectively. In order to obtain
an accurate specific gravity measurement of the medium streams, the measurement must
be obtained after the medium has been screened from the processed material. This
measurement is very useful since it can help identify problems with a cyclone, i.e.,
corrective actions can be performed when the medium split to overflow drops below two-
thirds (Luttrell et al., 2002).
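A quick numerical check of Equation [1] follows; the medium SG values are illustrative.

def medium_split_to_overflow(sg_f, sg_o, sg_u):
    """Medium split to overflow, Equation [1]: beta = (SGu - SGf)/(SGu - SGo)."""
    return (sg_u - sg_f) / (sg_u - sg_o)

# Illustrative stream densities: feed 1.55 SG, overflow 1.45 SG, underflow 1.85 SG
beta = medium_split_to_overflow(sg_f=1.55, sg_o=1.45, sg_u=1.85)
print(round(beta, 2))  # 0.75 -> above the two-thirds guideline, no corrective action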
2.3 Specific Gravity Measurement Techniques
There are various techniques for measuring the specific gravity of a particular
medium. The two most common methods are the manual specific gravity scale and the
use of a nuclear density gauge (Figure 3).
The manual method typically consists of using a customized one-liter sample collection device and a dial-type spring scale. To calibrate the device correctly, water is used to check the 1.00 specific gravity (SG) point and a known weight
Figure 3. Photographs of (a-left) density scale and (b-right) nuclear density gauge.
is used to check the SG point near the medium specific gravity. The method includes
properly collecting a representative sample of the medium with the collection device and
weighing the device filled with medium on the calibrated scale.
The nuclear density gauge is a device that is placed on a pipe through which the medium flows. The nuclear source, typically an isotope of cesium (Cs-137), emits a narrow beam of gamma particles that strikes a detector on the opposite side of the pipe after passing through the contents of the pipe. The specific gravity of the contents of the pipe is calculated from the attenuation of the gamma particles relative to calibration points set with water (1.0 SG) and a known SG point near the normal operating range of the medium. The denser the material in the pipe, the more the gamma particles are attenuated and the fewer
gamma particles reach the detector. Fewer gamma particles seen by the detector yield a higher specific gravity, and more gamma particles seen by the detector yield a lower specific gravity.
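The two-point calibration described above can be expressed with the Beer-Lambert attenuation law. The sketch below is an illustrative reconstruction, not the gauge vendor's algorithm: it assumes count rates I_water and I_cal recorded at the water point (1.0 SG) and at a known calibration SG, and interpolates in ln(I) because attenuation is exponential in density.

import math

def sg_from_counts(I, I_water, I_cal, sg_cal):
    """Infer medium SG from a detector count rate I, using Beer-Lambert
    attenuation (I = I0 * exp(-k * SG)) and two calibration points:
    I_water at SG = 1.0 and I_cal at SG = sg_cal."""
    k = (math.log(I_water) - math.log(I_cal)) / (sg_cal - 1.0)
    return 1.0 + (math.log(I_water) - math.log(I)) / k

# Illustrative calibration: 10000 counts/s on water, 4000 counts/s at 1.80 SG
print(round(sg_from_counts(I=6000, I_water=10000, I_cal=4000, sg_cal=1.80), 2))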
2.4 Specific Gravity Cutpoint
The cutpoint is defined as the specific gravity at which a particle has an equal
chance of reporting to the overflow or underflow of the cyclone. Since the medium is
subjected to the centrifugal forces inside the dense medium cyclone, the specific gravity
of the medium will increase toward the apex of the cyclone. This tendency decreases the
specific gravity of the medium in the overflow of the cyclone and increases the specific
gravity of the medium in the underflow compared to the feed medium specific gravity.
The cutpoint specific gravity in a DMC is typically higher than that of the circulating feed medium due to the enhanced settling created by the centrifugal forces in the cyclone.
Besides sampling the feed, overflow, and underflow streams of the DMCs and obtaining float/sink analyses from a commercial lab, the separation performance, i.e., the specific gravity cutpoint, of a DMC can be predicted using a partition model which assumes that the partition curve for each particle size class passes through a common pivot point (Scott, 1988). The specific gravity (SG_50*) corresponding to the pivot point can be estimated from an empirical expression given by Wood (1981):
$SG_{50}^* = 0.360\,SG_{fm} + 0.274\,SG_{um} + 0.532\,SG_{om} - 0.205$   [2]
where SG_fm, SG_um and SG_om are the specific gravities of the feed, underflow and overflow medium streams, respectively. The SG_50* value represents the effective SG cutpoint of an infinitely large particle separated under zero medium viscosity. The second defining
term for the pivot point is obtained at a partition number that is numerically equal to the medium split to underflow (S_u) given by (Restarick and Krnic, 1990):

$S_u = \dfrac{SG_{fm} - SG_{om}}{SG_{um} - SG_{om}}$   [3]
Once the pivot point is identified, the specific gravity cutpoint (SG_50) for each particle size class can be obtained using (Wood, 1990; 1997):

$SG_{50} = SG_{50}^* + 0.910\,Ep\,\ln\!\left[(1 - S_u)/S_u\right]$   [4]
To utilize this expression, it is assumed that the unknown Ep value for each particle size class can be estimated using (Barbee et al., 2005):

$Ep = D_c^{0.5} / (398\,D_p)$   [5]

in which D_p is the mean particle diameter (in mm) of each size class and D_c is the cyclone diameter (in mm). Equations [2]-[5] show that it is possible to predict the SG
cutpoints for a DMC provided that the values of SG_fm, SG_um and SG_om are known. Unfortunately, only the feed medium density (SG_fm) is typically measured in most industrial DMC circuits.
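Equations [2]-[5] chain together directly; the sketch below predicts size-by-size SG cutpoints from the three measured medium densities. The stream densities and the 838 mm (33 inch) cyclone diameter are illustrative values only.

import math

def sg50_by_size(sg_fm, sg_um, sg_om, d_cyclone_mm, particle_sizes_mm):
    """Predict the SG cutpoint for each particle size class using the
    pivot-point partition model, Equations [2]-[5]."""
    sg50_star = 0.360*sg_fm + 0.274*sg_um + 0.532*sg_om - 0.205   # Eq. [2]
    s_u = (sg_fm - sg_om) / (sg_um - sg_om)                       # Eq. [3]
    results = {}
    for d_p in particle_sizes_mm:
        ep = d_cyclone_mm**0.5 / (398 * d_p)                      # Eq. [5]
        results[d_p] = sg50_star + 0.910 * ep * math.log((1 - s_u) / s_u)  # Eq. [4]
    return results

cutpoints = sg50_by_size(sg_fm=1.55, sg_um=1.85, sg_om=1.45,
                         d_cyclone_mm=838, particle_sizes_mm=[8.0, 2.0, 0.5])
for size, sg50 in cutpoints.items():
    print(f"{size} mm: SG50 = {sg50:.3f}")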
Since the previous equations calculate a specific gravity cutpoint of a DMC based
on all three medium streams, feed, overflow and underflow, specific gravity
measurements of these streams in a preparation plant could provide an accurate, real-time
specific gravity cutpoint. This cutpoint could be used as an input for the control system
hour with a maximum capacity of 1,300 raw tons per hour. Changes were completed to
each of the four circuits: coarse, intermediate, fine, and ultra fine, and a middlings
recovery circuit was added to the intermediate circuit in order to give the option of a
rewash of material. The middlings recovery circuit was designed to minimize the amount
of rock being pumped and re-handled in the plant and utilize the smaller 33-inch diameter
dense medium cyclones for the lower gravity separation. The 40-inch diameter primary
dense medium cyclone is utilized for the high gravity separation. This eliminates the re-
handling of rock, increasing the wear life of the operating equipment components. The
overflow from the primary dense medium cyclone reports to the two secondary dense
medium cyclones in order to achieve the product split between the premium product and
middlings.
3.1 System Description
Equations [2]-[5] suggest that it is possible to predict and properly optimize the
SG cutpoints for a DMC provided that the values of SG_fm, SG_um and SG_om are known.
Unfortunately, only the feed medium density (SG_fm) is typically measured in most
industrial DMC circuits. Also, the density of the feed medium (SG_fm) is often measured with
coal present so that the true medium density is not known. To overcome this limitation,
an improved monitoring and control system was developed that utilizes multi-stream on-
line measurements of the feed, overflow and underflow medium densities using low-cost
nuclear density gauges and pressure transmitters. A schematic of the multi-stream
monitoring system is provided in Figure 5.
The multi-stream monitoring system uses four nuclear density gauges to
simultaneously monitor medium density throughout the entire circuit. The first density
the feed medium (with and without coal present), underflow medium and overflow
medium. Data from the electronic sensors was continuously logged on-line using a PLC
data recorder. In principle, the real-time data from these sensors can be passed through a
mathematical algorithm to estimate the “true” SG cutpoint for the DMCs (see Equation
[4]). As such, this information makes it possible to fully optimize DMC cutpoints under
conditions of changing coal types and feed blends.
3.2 Equipment Setup
The nuclear density gauges were mounted in custom fabricated portable racks as
illustrated in Figure 6. A vertical feed pipe above the gauge was used to ensure a high
velocity flow that prevented any settling of the magnetite through the system. The
vertical feed pipe was fitted with an overflow at the top. Flow to the rack and density
gauge was set to provide an overflow stream at the top of the vertical feed pipe to ensure
a full feed pipe and to eliminate any air bubbles in the medium passing through the
gauge. The medium that passed through the nuclear density gauge and from the overflow
was routed back to the DMC feed sump. The density gauge rack for the underflow
medium sample was installed, along with the associated sampling points and piping, to
receive medium flow from either the clean coal or refuse drain-and-rinse screens.
After the installation of the density gauges, manual density (Marcy) cup
measurements were taken and flows were established to ensure that the flow through the
gauges was an accurate representation of the actual medium flows around the DMC. The
next step involved energizing the three nuclear density gauges, checking the electrical
connections, setting the proper configuration parameters, and then standardizing the
gauges with clear water. Circulating medium was then routed through the gauges for the
4.0 RESULTS AND DISCUSSION
4.1 Control System Response
Three series of test runs were conducted at low, medium and high SG setpoints
using the four SG monitoring stations. A complete summary of the experimental data and
associated partition computations for all three series of tests are provided in the appendix.
In each run, the values for the feed, underflow and overflow medium were recorded using
density gauges “F”, “U” and “O”. The reading from the existing plant density gauge
(“P”) was also recorded. At the midpoint of each test run, the feed coal to the circuit was
intentionally switched from a low-ash feed coal containing a low amount of reject rock to
a high-ash feed coal containing a large amount of reject rock. This switch was
intentionally initiated so that the effects of feedstock quality on DMC control system
response could be fully assessed.
Figures 8-10 summarize the results of the medium measurements conducted
around the DMC circuit for various density ranges and feed coal types. The data collected
for the lowest setpoint of approximately 1.3 SG is shown in Figure 8. For the low-reject
feed, a relatively constant value of 1.33 SG was obtained by both the plant gauge (“P”)
and the slipstream feed gauge (“F”). The density values for the overflow and underflow
streams were found to be about 1.21 and 1.57 SG, respectively. However, when the plant
switched to the high-reject feed, the reading from the slipstream feed gauge (“F”)
dropped by about 0.02 SG to about 1.31 SG. The reason for the drop is that the plant
gauge (“P”) misinterpreted the extra rock in the feed as high density medium. In
response, the plant control system added more water to drop the true density of the
circulating medium. Under this new condition, the densities of the overflow and
underflow streams changed to 1.21 and 1.50 SG, respectively.
The density data for the test runs conducted using an intermediate SG setpoint is
shown in Figure 9. In this case, the plant density gauge (“P”) indicated that the
circulating medium was 1.50 SG. The feed density (“F”) measured without coal showed a
slightly higher value of 1.51 SG when running the low-reject feed coal. The switch to the
high-reject feed coal sharply reduced this value from 1.51 SG down to 1.49 SG. Once
again, the existing plant density gauge (“P”) and control system misinterpreted the higher
rock content in the high-reject feed coal as too much medium and reduced the density.
This unexpected change was not apparent in the readings from the plant density gauge
(“P”) which remained relatively constant at about 1.50 SG during the entire test period.
Finally, Figure 10 shows the density values obtained for the test run performed
using a very high SG setpoint. While more variability was observed in the plant density
gauge (“P”) readings during this particular run, the data still showed a strong dependence
between coal type and true medium density. For the low-reject feed coal, the true medium
density reported by the feed gauge (“F”) was significantly higher than that from the plant
gauge (“P”). The trend was exactly opposite when running a high-reject feed, i.e., the true
medium density was significantly lower than the plant gauge reading.
4.2 Partitioning Response
After completing the medium response tests, three additional series of test runs
were conducted to examine the partitioning performance of the DMC circuit. The
detailed numerical data and associated partition computations for these tests are provided
in the appendix. As before, the test runs were conducted at low, medium and high SG
setpoints for different quality feeds (i.e., high- and low-reject feedstocks). In each test,
representative samples of the feed, clean and reject products were collected and subjected
to float-sink analysis. The float-sink analyses were conducted on a size-by-size basis for
12.7x6.35, 6.35x3.18, 3.18x1.41 and 1.41x0.707 mm size classes. Measurements of the
feed, underflow and overflow medium were also obtained using manual sampling and via
the on-line medium monitoring stations. The medium response data and partitioning
results are summarized in Tables 1 and 2, respectively.
Table 1. Effect of SG range and feed coal type on DMC medium behavior.

              Low SG              Medium SG           High SG
Parameter     Low Rej.  High Rej. Low Rej.  High Rej. Low Rej.  High Rej.
Gauge SG      1.330     1.350     1.500     1.501     1.699     1.713
Feed SG       1.309     1.267     1.483     1.516     1.787     1.761
O/F SG        1.223     1.200     1.419     1.324     1.618     1.603
U/F SG        1.546     1.453     1.687     1.683     1.813     1.796

Table 2. Effect of density range and feed coal type on DMC partitioning performance
(SG_50).

Size Class    Low SG              Medium SG           High SG
(mm)          Low Rej.  High Rej. Low Rej.  High Rej. Low Rej.  High Rej.
12.7 x 6.35   1.336     1.263     1.510     1.476     1.714     1.691
6.35 x 3.18   1.342     1.272     1.511     1.481     1.715     1.687
3.18 x 1.41   1.347     1.283     1.500     1.494     1.734     1.710
1.41 x 0.707  1.353     1.315     1.534     1.545     1.834     1.790
Composite     1.349     1.275     1.506     1.484     1.719     1.679
difficult to optimize DMC circuit performance in cases where the plant feed coal
characteristics routinely change throughout the production period. This problem can be
particularly severe when operating in the low density range (Chedgy et al., 1986).
4.3 Modified Control Strategy
There are numerous expressions available in the technical literature that can be used
to model DMC performance (Napier-Munn, 1984; Rao et al., 1986; Davis, 1987; Scott,
1988; Clarkson and Wood, 1991; Barbee et al., 2005). One such model reported by Wood
(1990) indicates that the SG cutpoint for a DMC can be estimated using an empirical linear
equation of the form:
SG_50c = a_0 + a_1(SG_um) + a_2(SG_om) + a_3(SG_fm)     [6]

where SG_um, SG_om and SG_fm are the specific gravities of the underflow, overflow and feed
medium, respectively, and a_0, a_1, a_2 and a_3 are fitting coefficients. SG_50c represents the
effective SG cutpoint of relatively large (>4 mm) particles that are efficiently separated.
Once known, the density cutpoint (SG_50p) for other particle size classes can be estimated
from:

SG_50p = SG_50c + 0.0674(1/D_p − 0.10)     [7]

where D_p is the particle diameter (mm) of the size class of interest (Wood et al., 1987).
These equations indicate that it is possible to predict and properly optimize the SG
cutpoints for a DMC provided that the values of SG_fm, SG_um and SG_om are known.
The results of the in-plant DMC tests demonstrate the importance of designing
plants with layouts that allow for the proper monitoring of circulating medium. Ideally,
dense medium circuits should be configured with sufficient headroom to allow return
medium to be recombined, homogenized and monitored prior to the addition of feed coal.
However, this preferred option is not available in many existing coal preparation facilities
operating in the U.S. Therefore, another option is needed for this type of existing
situation. One promising alternative is to utilize information from only the overflow and
underflow medium streams for controlling the DMC cutpoint. While not “ideal”, this
approach is believed to offer improved monitoring and control in cases where feedstock
quality and cutpoint values change frequently and dramatically. This scheme assumes
that the cutpoint density (SG_50c) can be estimated using a simplified form of Eq. [6], i.e.:

SG_50c = a_0 + a_1(SG_um) + a_2(SG_om)     [8]

where SG_um and SG_om are the specific gravities of the underflow and overflow medium,
respectively, and a_0, a_1 and a_2 are fitting coefficients. In this case, the coefficient a_3 shown
previously in Eq. [6] is assumed to be zero. For the data collected in the present work, the
fitting coefficients were found to be a_0 = 0.640, a_1 = 0.518 and a_2 = −0.290.
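In code, the simplified control model is a one-line calculation plus the Eq. [7] size
correction; the sketch below uses the coefficients fitted above, while the function names
and guard comments are illustrative additions, not from the source:

/* Eq. [8] with the coefficients fitted in this work (a0 = 0.640,
   a1 = 0.518, a2 = -0.290).  Function names are illustrative. */
double dmc_cutpoint_from_return_medium(double sg_um, double sg_om)
{
    const double a0 = 0.640, a1 = 0.518, a2 = -0.290;
    return a0 + a1 * sg_um + a2 * sg_om;
}

/* Eq. [7]: adjust the large-particle cutpoint SG_50c for a size class
   with mean particle diameter d_p_mm (in mm). */
double dmc_cutpoint_for_size(double sg_50c, double d_p_mm)
{
    return sg_50c + 0.0674 * (1.0 / d_p_mm - 0.10);
}

A plant control loop could compare the first estimate against the target cutpoint and trim
the medium SG setpoint up or down accordingly.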
Figure 13 shows the correlation between the cutpoint values (SG_50c) predicted
from Eq. [8] and experimentally measured cutpoint (SG_50) values from float-sink analysis
of the 12.7x6.35 mm size class. As shown, this simple mathematical model provides a
very good estimate of the particle cutpoint density for this particular operation. The
model can be used by the plant control system to adjust the medium SG up or down to
maintain a constant SG cutpoint so that DMC performance can be optimized. By
avoiding the use of feed medium density in the control algorithm, problems associated
with changing feedstock quality can be somewhat mitigated by this approach. In practice,
5.0 CONCLUSIONS
Test data collected in the current study indicate that optimization of dense
medium cyclone (DMC) performance cannot be realistically achieved for cases in which
only the feed medium density is monitored in the presence of coal. This problem appears
to be created by incorrect density readings which interpret the presence of large amounts
of high-density rock as overdense medium. To avoid errors in density readings, it is
recommended that plant circuits be designed with a means to monitor the true density of
circulating medium in the absence of feed coal. Ideally, representative streams of return
overflow and underflow medium from the drain-and-rinse screens should be recombined,
homogenized and then passed through the density gauge. This layout requires that the
plant be designed with sufficient headroom for a monitoring station between the drain-
and-rinse screens and the DMC feed sump. In existing facilities where combined return
medium cannot be realistically obtained, a control system that makes use of only the
medium SGs of the overflow and underflow SG values is suggested as a possible
approach for dealing with feedstocks that are highly variable. This multi-stream
monitoring system makes use of a simple mathematical model to estimate the DMC
cutpoint density using only the returned overflow and underflow medium streams. In this
approach, the feed medium density would still be monitored, but only as a secondary
check on whether the circuit is behaving logically in response to perceived changes in
feed quality.
The CHS is essentially a driveable conveyor system that is capable of following
the Continuous Miner throughout the mine. As the Continuous Miner removes coal from
the seam, it is fed to the first unit of the CHS – called the feeder-breaker. The feeder-
breaker, as its name implies, breaks the coal into smaller sizes and then sends the coal on
its journey through the CHS and out of the mine at rates up to 20 tons of coal per minute,
depending upon the model. With this great capacity to move coal quickly, dramatic
increases in coal production can be achieved. Figure 2 depicts a multiple unit CHS
navigating a typical mine.
Figure 2. Depiction of a CHS navigating a Coal Mine
The three main parts of the CHS are the MBC (Mobile Bridge Carrier), the Pig
(Piggyback Conveyor) and the RFM (Rigid Frame Modular) tail-piece. As its name
implies, the MBC is a tracked vehicle supporting the Pigs and has a driver located in the
right rear of the MBC. The Pig, which varies in length from 30’ to 40’ depending upon
the CHS configuration, is a rigid conveyor section used to span two MBCs or the last
MBC and the RFM. One MBC and a Pig are considered a unit and 5 or more units might
be linked together in a typical mining application. The RFM connects the last MBC to a
stationary type conveyor system for the final stage of transferring coal to the surface.
This section of conveyor belt is not very mobile and must be dragged into location by an
MBC, shuttle car or some other machine.
As hopefully can be inferred from the figures and brief discussion, the CHS
requires a skilled team of operators to efficiently traverse the mine. Since coal mining by
nature commands high wages, the yearly costs for skilled operators can be quite
high. These annual costs are overshadowed by the fact that coal mining is a
dangerous business. Even though mining safety has greatly improved, the potential
for catastrophe is omnipresent, making any reduction in the number of persons necessary
in a mine highly desirable. To address these and other concerns, Long-Airdox has
expressed the need to automate the Continuous Haulage Systems to increase system
efficiency and coal throughput.
As a result, Long-Airdox and VA Tech are working in close collaboration to
develop the necessary technologies to automate a Full Dimension Continuous Haulage
System. To this end, the VA Tech team is tasked with research, development and testing
of the necessary sensing, data analysis, driving rules, control algorithms and hardware re-
design. The VA Tech team is responsible for developing the required technologies for
automation and providing the necessary technology transfer through documentation.
In order to gain insight to the problem, the team members were able to drive a
CHS that was being refurbished for a mining company in early Fall semester of 1998.
Figure 3 shows the refurbished CHS that was test-driven by the VT team. Note the first
unit is the feeder-breaker, with the wide front that catches coal being fed from the
Continuous Miner. After driving the CHS, it was quite apparent that a high degree of
skill and cooperation between team members is required to efficiently traverse a mine.
The inertia and system response were observed in order to lay a foundation for the
automated control system. Armed with a better understanding of the CHS, development
of the necessary technologies for automation resumed.
A major focus on automation was path planning; how the CHS would navigate
through a mine. Path planning is heavily used in robotics, where a robotic machine must
navigate within some workspace. Typically, two means are used for navigation: a robot
can either have the layout of a workspace programmed into its memory, or it must
sense its location with respect to its surroundings and navigate accordingly.

Figure 3. Continuous Haulage System used for VA Tech Team Test Driving

Because all
mines do not share exact layouts and are not typically cut exactly to specifications,
requiring that a company operating an automated CHS program the mine layout was not
deemed a suitable solution. However, requiring that the automated CHS be capable of
sensing its location within a mine and navigating accordingly demands more effort and
sophistication in the software algorithms, but is thought to provide a more flexible and
intelligent system. Because of the strategy adopted, sensors are needed to measure the
distance and incidence of the walls. Outfitting the CHS with enough sensors to fully
describe its configuration at a given moment in time is also necessary. All this
information will have to be gathered and processed in order to issue the appropriate
position or velocity commands to each MBC in the CHS.
In order to develop, test and demonstrate competency with the sensing, path
planning and control algorithms, a suitable test bed is needed. Having production MBCs
available for instrumentation and testing at will is not possible; therefore, an inexpensive
alternative is necessary. The author has been tasked with the development of a 5-unit
prototype Continuous Haulage System that will provide continual development and
testing of the overall automation strategies. The prototype development includes scaled
models of an MBC and Pig, and the low-level microcontroller-based motor controllers
necessary to provide motion. Responsibilities have grown from just developing and
constructing the prototypes, to developing sensor interfaces and the communications
hierarchy necessary for gathering and parsing all the data to a laptop PC for computation
of all algorithms. All computations will be performed on an IBM Thinkpad laptop
personal computer because it is the most cost-effective means for the prototype.
Although Long-Airdox intends to outfit each MBC in a production CHS with a custom
designed PLC (Programmable Logic Controller), their estimated $6000 price tag places
them beyond the reach of the initial project budget. Any testing on full-scale production
MBCs requires specific hardware and software, though as much of the prototype
equipment as possible will be modified for consistency and reduced development times.
Because the author’s work on this research project has been heavily project
oriented and has required the creation of much hardware and software, this document
serves as an important source of documentation for the remaining team members who
will have to use the hardware and software in future testing. In the following sections,
overviews of the prototype vehicles, electronics and software are presented. A discussion
on the operation and use of the SICK Optic LMS 200 laser measurement device is
included. Although the topics are all intertwined, they have been separated in attempt to
provide clarity to each subsystem. Finally, results from current testing are presented and
conclusions/recommendations are made.
Although the author is somewhat disappointed to be graduating prior to total
completion of the project, it is hoped that this document will serve as a useful and
beneficial tool for the other team members.
Chapter 2: Prototype Continuous Haulage System
2.1 Prototype Introduction
The prototype Continuous Haulage System has many levels to its development
and construction. On the basic level, a properly scaled clone of the production CHS was
needed. The main requirements for the prototype structure were rigidity, reasonable
weight, consistent scale and proper function. The two main structures to replicate are the
MBC and the Pig. Since the Pig is modeled as a rigid link for the purposes of the
prototype, the main functions to replicate were the MBC TRAM LEFT, TRAM RIGHT,
IN-BY, OUT-BY and the dolly travel. TRAM refers to controlling the speed of the
tracks, while IN-BY and OUT-BY change the elevation of the front or rear conveyor
sections. Since they do not appear to place any requirements on the control system, the
prototype would not incorporate the IN-BY and OUT-BY functions. Long-Airdox
assumed responsibility for developing a separate system for controlling these functions.
The dolly travel allows compliance between two MBCs by enabling the front pig pin to
slide five feet along the front-to-rear axis of the MBC. This extra compliance is deemed
essential for driving the CHS through a mine.
The next level of development has two parts; developing the prototype electronics
hardware and the software which includes the microcontroller-based motor controller,
multi-processor communications for data gathering and interfacing to a control PC. The
electronics system was chosen to provide a scalable design: as more functionality or
processing power was needed, extra microcontrollers could be added to perform the
required additional functions.
The final level of prototype development pertains to a high-level interface and
control program running on an IBM Thinkpad laptop computer. The interface and
control program is responsible for receiving in all sensor data, performing all necessary
data analysis, path planning and control algorithms and parsing command velocity data
back to the appropriate MBC. Since cost prohibits use of the Long-Airdox PLC, all inter-
MBC communications expected between full scale MBC PLCs must be simulated by the
interface and control program. Although these levels are heavily intertwined, discussions
on their development will be separated in an attempt to provide clarity for each.
2.2 Prototype Hardware
The first step in prototype development was deciding upon a suitable scale for the
models. Since an MBC drives much like a military tank, a RC (Radio-Controlled) tank
model was viewed as a suitable base for the prototype MBCs. By using RC tanks as the
foundation for the prototypes, it was hoped that significant reductions in development
time would be realized. As a result, available RC tank models somewhat drove the
prototype scale. After reviewing the sparse information on various models, it appeared
that most tank models were approximately 7-9% of the full-scale MBC. However, after
purchasing two models it was apparent that available radio-controlled tank models had
some significant disadvantages.
Although the first tank model purchased was very inexpensive, it was very flimsy
being made of plastic and more suited to higher speed operation. Because low speed
control is critical, extensive modifications to the gear train for additional speed reduction
would be required. After abandoning the first model, a King Tiger Tank model from Tamiya
America, Inc. was purchased on the recommendation of an RC model dealer because the
chassis was made from stamped aluminum and the tracks were metallic. Even though the
model is quite expensive, having metallic tracks on the prototype is ideal. However, the
models are no longer manufactured with metallic tracks; only plastic tracks are currently
produced. Although disappointing, the model was larger and more ruggedly built than
the first model. During assembly of the model, it became apparent that the King Tiger
Tank model would also require heavy modifications to the powertrain. The
modifications would be necessary because it had only one motor controlling both tracks;
directional control of the factory model is accomplished by engaging and disengaging
clutches, implying the model is incapable of reverse. Therefore, a second motor would
be required to provide separate, reversible control of each track. Modifying the
powertrain proved to be a rather involved task, necessitating many hours of custom
machining. Another concern with the RC models was the uneven scale; typically the
width of the model was a desirable scale, but the length was much too great. Because of
all the problems encountered with the models, design of custom prototypes was viewed
as more cost-effective and a more efficient use of time.
Designing custom prototypes involved a few important considerations, of which
scale was again the starting point. Since both RC tank models were odd sizes, it was
decided to make the prototypes an even 10% scale replica of the CHS. This scale would
provide a larger platform for supporting the necessary sensors and hardware needed for
the project.
The drivetrain and motors would be specified first, and then the chassis would be
designed accordingly. The outside-in design methodology was used to keep a consistent
scale and to simplify the design; it started with the tracks and worked towards the inside
of the model. Since a source for properly sized steel tracks was unavailable, the plastic
tracks and drive sprockets from the King Tiger Tank model were incorporated into the
design and were purchased from Tamiya America, Inc. Because the drive sprockets had
a 2” outer diameter, little ground clearance would be available. Therefore, selecting a
motor that would provide enough torque at scaled prototype speeds while providing
suitable ground clearance became a significant issue with the design. Using a geartrain
or flexible coupling as a means to elevate a large motor and increase ground clearance
would add cost and complexity - not a highly desirable option. After scouring the
catalogs of many electronic hardware suppliers, some small gearhead dc motors with an
offset output shaft were located. Because of the integral gear reduction, these motors had
a slow output shaft speed with good torque and would allow direct mounting of the
output shaft to the sprocket via a simple, custom-made hub. An added benefit of these
particular motors is the integral optical encoders, which allow for position or velocity
feedback. The motors were purchased and fitted to a prototype test chassis; it was a
compact design, but appeared to be quite feasible.
With the drivetrain and motors specified, the chassis was designed as a simple
structure made from 16-gauge mild steel sheetmetal stamped into a U-shape. A lip on top
of the chassis is provided as a mounting surface for the canopy. A template was made so
that all machining to mount the motors and drivetrain be completed before stamping,
allowing the five prototype chassis to be machined at one time. Once machining was
completed, the parts were then stamped into final geometry. Figure 4 shows a rear view
of an assembled prototype MBC to give a better detail of the motor mounting.
Figure 4. Rear View of Assembled Prototype MBC
Being made from 16-gauge mild steel, the prototype chassis are quite rigid for the
application. No extra stiffening is incorporated because the prototype canopy would
provide added rigidity when fastened at assembly. With the prototype chassis design
completed, the prototype canopy was next.
The deck is designed as a welded assembly. A piece of sheetmetal matching the
outline of the chassis forms the base of the canopy. The canopy bridge needs to have the
proper scale width and be rigid enough to support the Pigs and any additional sensors or
hardware. A piece of sheetmetal was stamped into a channel to provide the necessary
strength. The bridge is properly aligned with the base and clamped in place. With a final
recheck of location, the two pieces are MIG welded together. The assembly is fastened
to the chassis by 4 #10-32UNF screws. With the deck fashioned, the dolly travel
mechanism was designed.
Because the dolly travel provides much needed compliance between MBCs to
allow the CHS to snake around mine pillars, incorporating the dolly displacement into the
control algorithms is necessary. The production MBC has a dolly travel of five feet,
requiring a prototype dolly travel of 6 inches. The initial dolly design incorporated
precision ground steel rod and linear bearings. However, this option was quickly
discarded in favor of using a precision drawer slide for simplicity and reduced cost. A
travel stop was needed since the drawer slide is capable of extending to ten inches. A
piece of mild steel stock was welded to the top of the drawer slide as part of the travel
stop, and also to provide a mounting point for a measurement device. A bolt was
fastened through the deck at a point six inches from the welded bar, so that travel would
be limited by contact between the bar and the bolt. These features provide a simple and
effective solution to the design requirements. Figure 5 highlights the canopy, tag-
line potentiometer, dolly travel and travel stop.
Figure 5. View of Prototype Deck and Dolly Travel Mechanism
Because the Pig is designed as a simple U-shaped channel, stamped from 16-
gauge mild steel sheetmetal, the final design consideration for the prototypes was the
development of the pig pin and the associated joint design for coupling the MBC and Pig
together. Since the joint would also have to incorporate a rotary potentiometer with a ¼”
diameter shaft, suitable flexible couplings were sought. However, precision flexible
couplings turned out to be a rather bulky and expensive option. Therefore, the resulting
solution would use ¼” inside diameter rubber fuel hose, small hose clamps and a
modified bolt. The Pig would be modified to include a close sliding-fit hole for the
modified bolt and the potentiometer mount.
Because the pig pin must sit on top of the dolly slide, the head of a 3/8”-16UNC
bolt was machined flat then drilled and tapped for a #10-32UNF screw. This would
allow the bolt to be fastened to both the deck and drawer slide without affecting the
operation of the drawer slide. The threaded end of the bolt was machined down to a ¼”
diameter to provide a pin-like area to insert into the fuel hose upon assembly, the
remaining thread would be used for loosely fastening the Pig and MBC together with a
teflon locknut.
The potentiometer mount was made from a piece of sheetmetal, stamped into a U-
shape and drilled to accept the potentiometer. The potentiometer mounts are fastened to
both ends of the Pig, making sure that the potentiometer is inline with the pig pin. This
design makes assembly of the joint quite simple. The pig pin is inserted into the
clearance hole on the Pig. The nylon locknut is tightened snugly, providing just a slight
bit of clearance for rotation. As the potentiometer is fastened to the mount, a piece of
fuel hose is slid down the shaft of the potentiometer and then over the machined section
of the pig pin. The hose clamps are tightened on the pig pin and the potentiometer shaft.
Figures 6 and 7 show the joint before and after the fuel hose is correctly attached.
2.3 Prototype Electronics
Before any electronic hardware could effectively be specified, it was essential to
identify the sensors, measurement devices, and tasks required for developing an
autonomously navigating prototype CHS. Since the system must gather and transmit
measurement data, a system based on microcontrollers will be used. Analog devices will
only be used as passive components in driver and digital circuitry. An educated estimate
on the types and number of sensors and the functions to perform is crucial to specifying
appropriate and upgradeable microcontrollers for the system. As the computing power
required for the project is still uncertain, using a PC for all computations is the most cost-
effective solution. As computational requirements are more fully understood, the future
test hardware could be modified accordingly.
Itemizing the requirements for the electronics system starts with the basic
function of the electronics system; controlling each of the dc motors needed for driving
an MBC. The initial plan does not include monitoring of the MBC velocity because a lot
of track slippage is expected in mining operations, making accurate measurements quite
difficult. However, if deemed necessary at a later date, the MBC must have the capacity
to measure each track velocity. There are three displacements per MBC that need to
be measured to determine the CHS configuration, the front and rear pig angles and the
dolly travel. All three will be measured using analog potentiometers, requiring that the
prototype electronics system possess analog-to-digital capabilities. For measuring the mine
walls, the prototype electronics must be capable of interfacing with either a SICK Optic
LMS 200 laser measurement device or the LVS (Laser-Video Scanner) being developed
by the VA Tech Team for the prototype. Interfacing with both of these sensors requires
adequate communications capabilities. Finally, the system must also be able to
communicate with a central laptop PC that will receive all measured data, perform the
data analysis and path planning before sending out command data for velocity control of
each MBC in the CHS. Although many requirements of the system have been identified,
only testing will determine if these requirements are sufficient. Because of this
uncertainty, it is important that the system have the ability to easily add new sensors and
functionality with minimal effort. Therefore, a multiple processor system is envisioned
as the best means for achieving a powerful and scalable prototype electronics system.
Networking multiple processors is especially important so that the project does
not become limited by hardware. Such a limitation might require a complete revision or
redesign of the system to add a new sensor or function. As more sensing is needed,
“smart sensors,” or sensors that have their own processors can be added to the network
with reasonable effort. Therefore, processors with built-in communications capabilities
are a must. Given all of these criteria, a suitable processor could be specified.
Because the Motorola 68HC11 microcontroller has on-board communications
capabilities and the author had prior experience with the chip, it was investigated as the
first choice. The HC11, as commonly referred, has two on-board serial communications
subsystems, a UART (Universal Asynchronous Receiver-Transmitter) and a SPI (Serial
Peripheral Interface). The UART supports many standards of asynchronous serial
communication between devices by using the proper driver. RS-232 and RS-485 are two
very common and inexpensive standards. The RS-232 standard, which is found on all
PCs, supports point-to-point communication between devices over relatively short
distances. The RS-485 standard provides the ability for multiple devices to communicate
on a single serial line over much greater distances than capable with RS-232. The SPI is
developed for synchronous serial communications between microprocessors and
peripherals. Peripherals are typically memory modules, device drivers, or other
microprocessors. Reviewing the specifications for the many standards for detailed
information is recommended for anyone interested in the subject.
A major distinction between the two protocols is that the SPI is a synchronous
receiver/transmitter; all processors connected via a SPI bus share a common clock signal.
Sharing a clock signal line creates problems when transmitting over long distances due to
noise and other effects. Another difference is that SPI uses a slave select line. When
operating in a master-slave layout, the master processor will drive the slave select line to
a low state (0 volts) notifying the slave processor to commence data transfer. Because the
SPI can be configured in a master-slave relationship between processors, it provides a
flexible means to continually upgrade the system to meet growing demand. These
differences are shown in Figure 9.
Figure 9. Connection Layout for SPI and SCI Devices
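For concreteness, a master-side byte exchange on the HC11 SPI reduces to a few lines.
The register addresses below are the standard 68HC11 locations (registers based at
1000h), but the C presentation itself is an illustrative sketch, since the actual controllers
were programmed in assembly:

/* Minimal HC11 master-side SPI byte exchange (illustrative sketch).
   Slave select is assumed to be handled separately via a port pin. */
#define SPSR (*(volatile unsigned char *)0x1029)  /* SPI status register */
#define SPDR (*(volatile unsigned char *)0x102A)  /* SPI data register   */
#define SPIF 0x80                                 /* transfer-complete   */

unsigned char spi_transfer(unsigned char out)
{
    SPDR = out;                /* writing the data register starts the clock */
    while (!(SPSR & SPIF))     /* wait for the transfer-complete flag        */
        ;
    return SPDR;               /* reading SPDR clears SPIF; returns the byte */
}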
In addition to the serial communication features, the HC11 has an 8 channel, 8 bit
ADC (Analog-to-Digital Converter) to measure the potentiometers. The HC11 has timer
functions including the OC (Output Compare) function, which can be used for PWM
(Pulse Width Modulation) signal generation for motor controls. With an optical encoder
attached to each track motor, the IC (Input Capture) feature can be used for velocity
measurement of the MBC by measuring the period between successive pulses from the
encoder output. Because of these features, the HC11 microcontroller was chosen as the
foundation of the prototype electronics system. Instead of developing a custom HC11
controller board, the Motorola MC68HC11EVBU [1] evaluation board was selected as
the platform for the HC11 microprocessor for both the master and slave controllers due to
its low cost and ease of expandability.
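As an illustration of the input-capture velocity measurement mentioned above, the
following sketch measures the period between two successive encoder edges on IC1;
the register addresses are the standard 68HC11 locations, but the routine itself is an
assumption added for illustration:

/* Measure the period (timer ticks) between successive encoder pulses on
   input capture 1; velocity then follows as a constant divided by the
   period.  Illustrative sketch, not production code. */
#define TIC1  (*(volatile unsigned int  *)0x1010)  /* IC1 capture register */
#define TFLG1 (*(volatile unsigned char *)0x1023)  /* timer flag register  */
#define IC1F  0x04                                 /* IC1 capture flag     */

unsigned int encoder_period_ticks(void)
{
    unsigned int first, second;

    TFLG1 = IC1F;               /* clear any stale capture (write 1)  */
    while (!(TFLG1 & IC1F)) ;   /* wait for the first encoder edge    */
    first = TIC1;
    TFLG1 = IC1F;
    while (!(TFLG1 & IC1F)) ;   /* wait for the next edge             */
    second = TIC1;

    return second - first;      /* free-running counter wraps safely  */
}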
Since the HC11 can function effectively in a master-slave configuration using
SPI, it seemed logical that a basic prototype electronics system would contain at least one
master and one slave HC11 controller. A slave HC11 controller would be tasked with
performing the signal generation for motor controls and if necessary, performing velocity
measurements for closed-loop feedback of the motors. The master HC11 controller
would be responsible for gathering and sending sensor data to the control PC and then
parsing the command data from the control PC to the slave HC11 controller. The HC11
is not expected to be powerful enough for computation of the control algorithms, relying
upon a PC for all computations. Figure 10 shows the expandable communications and
control hierarchy developed for the prototype.
Figure 10. Prototype Control Hierarchy of a Single MBC
In Figure 10, an LMS 200 laser measurement device is shown along with Laser-
Video Scanners. The two units will not be operated at the same time on a single MBC;
however, it is possible that future testing will use different sensors on different MBCs.
With the basic hierarchy developed, the electronics hardware could be designed.
The low-level motor controller using a single HC11 board was the first part developed.
The motors are controlled with the PWM signals generated using two OC pins. The OC
channels are TTL outputs and can source only 15 mA. Because the dc gearmotors selected
can draw about 1.5 amps under load, a driver was needed to amplify the PWM signals.
There are many options for providing reversible motor control, but a single chip H-
Bridge was desired. The LMD18200T H-Bridge from National Semiconductors is
Figure 11. MBC Master and Slave Controllers Mounted in MBC
2.4 Prototype Software
The low-level controller software forms the foundation of the prototype software
package. It is somewhat like the kernel in a computer operating system; it provides the
low-level interface for handling the various functions and subsystems of the prototype
electronics system. For example, the interface and control program will not directly
provide the velocity control. After analyzing the data, it will generate velocity
commands that are sent to the slave controller. The slave controller will then convert
these velocity commands into the actual PWM signals required to drive the motor.
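A sketch of that conversion is shown below; the LMD18200-style driver takes separate
PWM and direction inputs, and the command scaling and type names here are
assumptions for illustration:

/* Convert a signed velocity command (-100..+100 percent) into an 8-bit PWM
   duty cycle and a direction bit for an LMD18200-style H-bridge driver.
   The command range and structure are illustrative assumptions. */
typedef struct {
    unsigned char duty;   /* 0..255 duty for the output-compare PWM routine */
    unsigned char dir;    /* 1 = forward, 0 = reverse                       */
} motor_cmd_t;

motor_cmd_t velocity_to_pwm(signed char cmd_percent)
{
    motor_cmd_t out;
    int mag = (cmd_percent < 0) ? -cmd_percent : cmd_percent;

    if (mag > 100)
        mag = 100;                         /* clamp a malformed command */
    out.dir  = (cmd_percent >= 0);         /* sign drives the DIR pin   */
    out.duty = (unsigned char)(mag * 255 / 100);  /* scale to 8 bits    */
    return out;
}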
Since each prototype MBC has a master and slave based on the HC11 evaluation
board, all programming will be completed in assembly language [2,3,4,5]. Assembly
programming is processor specific, meaning that a program written for the HC11 will not
likely be compiled for another processor without major modifications. If the programs
are written in C, then they could be cross-compiled for other processors with reasonable
effort. However, since the prototype electronics system is different from what is expected
for production hardware (Long-Airdox PLC), such portability is not necessary. Any line-
finding algorithms that might be offloaded to an HC11 controller should be written in C,
as these algorithms can be easily ported to the PLC.
The prototype has two modes of operation, manual and automatic, which are
selected by the user with a toggle switch. In manual mode, the operator will use
two slide potentiometers as a joystick to control speed and direction of an MBC. Manual
mode is used when trying to navigate the prototype to a test area, or when manually
driving the first MBC in a CHS which has other units in automatic mode. The latter
scenario is commonly called “follow the leader” because this is how the autonomous
CHS is expected to operate; all MBCs will follow the front MBC that has a human
operator. When operating in automatic mode, the MBC slave controller will not scan the
joystick, instead it receives velocity and direction commands from the control PC via the
MBC master controller. While operating in automatic mode, the master controller acts as
a “traffic cop;” it is responsible for gathering sensor data, sending the sensor data to the
interface and control program, and finally parsing command data to the slave controller.
When operating in manual mode, the master controller waits for the operating mode to
switch back to automatic mode.
The prototype software has been in continual evolution to meet timelines for
testing. The initial software for testing is somewhat different from what is expected for
the final prototype. The original plan incorporated sensors and measurement devices
developed in-house for use on the prototype, due to the high cost of acquiring similar
technologies from commercial sources. Once the necessary control algorithms and
sensing were tested and verified on the prototype, development on the full-scale model
would commence. Due to the rapid development of the project, two SICK Optic LMS
200 units were purchased and delivered before any custom sensors were finished. As a
result, parallel development of hardware and software was necessary for both
configurations.
The first master-slave controllers developed were for use with the LMS 200 laser
measurement device. Figure 12 shows the flowchart of the MBC master controller.
Figure 13. Flowchart of MBC Slave Controller
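In outline, the slave loop summarized by the flowchart is only a few lines of code; all of
the helper functions in this sketch are hypothetical stand-ins for the assembly routines:

/* Main loop of the MBC slave controller, following Figure 13.  Every
   helper here is a hypothetical stand-in for the real assembly routine. */
extern int  manual_mode(void);                   /* read mode toggle switch */
extern void read_joystick(unsigned char *l, unsigned char *r);
extern void spi_receive_cmd(unsigned char *l, unsigned char *r);
extern void update_pwm(unsigned char left, unsigned char right);

void slave_main(void)
{
    unsigned char left, right;

    /* system initialization (I/O, SPI slave, PWM) omitted */
    for (;;) {
        if (manual_mode())
            read_joystick(&left, &right);    /* analog slide pots        */
        else
            spi_receive_cmd(&left, &right);  /* commands from the master */
        update_pwm(left, right);             /* refresh PWM/direction    */
    }
}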
The LMS 200s are directly connected to the control PC via RS-485 PCMCIA
cards, while the MBC master controllers are connected to the serial and parallel ports.
Because the LMS 200 measurement device is capable of recording large quantities of
data, it takes much less time to send data directly to the PC than if the data were sent
through the master controller first. The direct link also benefits the hardware of the MBC
master controller; extra RAM (Random Access Memory) would be needed to provide
suitable storage for all the LMS data. When using the parallel port, it is necessary to use
a parallel-to-serial converter to interface correctly with the serial port of the HC11.
A modification to the original MBC master controller software was needed to
interface with the Laser-Video Scanners being developed by Todd Upchurch, a member
of the VA Tech Team. Since each scanner has an HC11 controlling the scanner, the
MBC master controller software needed the addition of SPI communications between the
scanners. The master controller commands a scanner to perform a scan cycle and then
receives the data. The data is then sent to the control PC. No modifications were
2.5.1 Exploration of Software for Prototype Interface
A major decision for the prototype CHS involved the selection of a software
program that would be suitable for use as the main interface and control program. The
interface and control program must run on an IBM Thinkpad laptop PC using the
Windows 98 operating system. The interface and control program is responsible for
gathering all sensor data and analysis of the data for computation of the path planning
and control algorithms in order to issue velocity commands to each MBC in the prototype
CHS. Since the MBC processors need to communicate with the control PC via either an
RS-232 or RS-485 network, serial communications capabilities are a must. A flexible
software package that would allow rapid program development with the ability to
visually display data was desired. Early research efforts examined use of the popular
Visual Basic and C/C++ programs, and their respective capabilities. Both of these
programs are quite powerful and capable, but appear to require serious programming
efforts throughout all stages of project development. Using either of these high-level
software programs is seen more as an end solution to write very optimized code for the
automated full scale CHS.
During this software exploration phase, a new site license agreement between the
College of Engineering and National Instruments for use of their LabVIEW software was
completed. This agreement was of great interest because LabVIEW is used by many
departments throughout the university in both research and coursework to provide data
acquisition and analysis of experiments. Since the license had been paid for by the
College of Engineering and could be used for the project without cost, it was researched
as a potential interface program.
While other programming systems use text-based languages to create lines of
code, LabVIEW uses a graphical programming language called G to create programs in
block diagram form[6,7]. LabVIEW is a general-purpose programming system with
comprehensive libraries of functions and subroutines for most any programming task,
much like C or BASIC. Since extensive libraries for serial communications were found,
it appeared to be the flexible and powerful software program needed. Like using any new
software package, some time was spent learning how to program in G. After a modest
level of competency was achieved, some simple programs were successfully developed
that enabled data transfer between LabVIEW and an HC11 evaluation board.
In addition to the rapid development of programs, it was discovered that
LabVIEW has some added features that would make it extremely useful for interfacing
and commanding the prototype CHS. LabVIEW has the ability to call or run other
software codes using the CIN (Code Interface Node). This ability to call other software
programs from within LabVIEW has two main benefits. First, since the VA Tech Team
members tasked with writing the path planning and control algorithms are fluent in C, it
is of great benefit that learning a new language is not necessary. Secondly, since the
algorithms are written in C, they can be ported to the Long-Airdox PLC with reasonable
effort. This portability means reduced efforts when converting from prototype to
production software. Given all the benefits, LabVIEW was chosen for the interface and
control program.
2.5.2 LabVIEW Demonstration
Before proceeding with the development of the interface and control program, a
very brief overview of LabVIEW is presented. A LabVIEW program is called a VI, short
for Virtual Instrument. A VI has two “windows”; one is called the front panel and the
other is called the diagram. The front panel is where the controls and indicators are
displayed; it serves as the visual interface for the program. A control is how data and
logic is input to block diagram. An example is a numeric control, which allows the user
to change a particular numerical input. Other types of controls are boolean, string, arrays
and clusters. An indicator is a display showing the output of numeric, boolean or string
data from the program. Great flexibility in the appearance of indicators is available;
indicators can range from a simple numerical output to a liquid level display of a water
holding tank.
The core of LabVIEW programming is conducted on the diagram window. This
is where the block diagram is located. Programming in G appears similar to wiring up an
electronic circuit. Figure 16 shows the front panel and diagram panel of a demonstration
program that displays the speed of a fictitious automobile engine on a tachometer. A
random number generator is used to create random numbers ranging from 0-1. The output
of the random number generator is multiplied by a constant of 7000, which simulates a
maximum engine speed of 7000 revolutions per minute. The resulting engine speed is
displayed on a dial indicator.
Figure 16. LabVIEW Demonstration Program
Although the demonstration program is a very simple example, it should serve to
show the flexibility and power of the G programming language. The ability to rapidly
update the interface and control program as project development advances is quite a
luxury. As modifications and additional functions are necessary, the programmer simply
makes the necessary change and re-wires the affected portions of the block diagram. The
debugging and error checking features prevent a programmer from making many
mistakes while creating and modifying the block diagrams. Should a VI not produce the
desired results, very powerful debugging tools are available to expedite correction of the
program. These are all highly desirable traits for the prototype development because of
the dynamic nature of software and hardware needs. Only when competency in path
planning and control algorithms has been demonstrated should the efforts shift to writing
production hardware-specific software.
2.5.3 Prototype Interface Development
With a better understanding of LabVIEW, a discussion of the interface code is
appropriate. Since the LMS 200 laser measurement devices were the first sensors
available for measuring the mine walls, the interface and control program was created to
interface with the units. Figure 17 shows the flowchart of the interface and control VI
using the LMS 200 devices.
(Flowchart summary: check the state of SICK_Reset; if TRUE, issue a reset telegram;
otherwise initialize the com port, change to initialization mode (20h 00h), change the
LMS variant (3Bh) and change the monitor mode (20h 25h). The VI then loops: request
data; receive, format and display the data; call the path-planning and control algorithms;
and format command data and send it to the MBC controller.)

Figure 17. Flowchart of Interface Program Using LMS Devices
Chapter 3: SICK Optic LMS 200
3.1 Sensor Background
Although it has been assumed from the early stages of the project that measuring
the distance and orientation of an MBC with respect to the mine walls is essential to
computing a path plan, simulation and dynamic analysis performed by Aishwarya Varadhan
and Amnart Kanarat have validated this assumption. Their efforts have established a
control strategy requiring a line-finding algorithm capable of locating each MBC in the
CHS with respect to the mine walls. The evolving algorithm requires measurement
devices with the ability to sample mine walls with multiple data points in less than a
second. To accomplish this task, either an array of point measurement or swept
measurement devices can be used. Ultrasonic sensors and stationary laser devices are
typically used for point measurements. Laser measurement devices that can perform
swept measurements by deflecting the laser beam with rotating optics are also
available.
In trying to determine the most suitable technologies, several factors must be
examined. Although quite prevalent, ultrasonic sensors can require a lot of expertise to ensure
accurate and reliable operation. Acoustical reverberation from surrounding structures
and cross talk between sensors can be serious problems. Ultrasonic sensors are typically
quite cheap to use, so they have a strong economic benefit for the project budget. The
performance and benefit of a swept laser measurement device appear proportional to
its cost; such devices are typically quite expensive. As the line-finding algorithm needed
to become more intelligent, the need for a swept laser sensor increased. Instead
of requiring an array of point devices to measure the position of an MBC in the mine, a
single swept laser would be capable of measuring all objects within a 180° range of the
scanner. A decision on the technology to use was not made for some time, so efforts
focused on both developing ultrasonic sensors, developing a swept scanner and procuring
a commercial swept laser measurement device.
Even though testing had been done with the ultrasonic sensors, results from
continued simulation studies showed that a swept measurement device was ideal for the
line-finding and control algorithms. Therefore, the search for a suitable swept laser
measurement device progressed, as did continued development of a prototype swept
measurement device.
One of the pioneers in laser measurement equipment is SICK Optic Electronic.
They produce many different types of laser measurement equipment, with the LMS 200
appearing to be the best suited to project needs [8]. The LMS 200 device is capable of
producing a 180-degree radial scan with an angular resolution of ¼ degree and a range of
more than 30 feet. The unit is also exceptionally fast, capable of completing the 180-
degree scan in less than 30 milliseconds. Communications between an interface
computer and the LMS is accomplished by a serial link. Depending upon how the serial
cable is configured, the serial output of the LMS will conform to either the RS-232 or
RS-422 standard. Since the RS-485 standard is a superset of RS-422, a direct connection
between the RS-485 communications card in the control PC and the LMS is possible.
Figure 19 shows the LMS 200 laser measurement device.
Figure 19. SICK Optic LMS 200 Laser Measurement Device
As luck would have it, an LMS 200 laser measurement device is owned by the VA
Tech Autonomous Project Team for use on their autonomously navigating vehicles. In
order to benchmark the LMS unit, the Autonomous Team agreed to loan the device for
Table 1. Description of LMS Telegram

Designation   Data Width (Bits)   Comment
STX           8                   Start byte (02h)
ADR           8                   Address of LMS contacted; LMS adds 80h when
                                  responding to host computer
Length        16                  Number of following data bytes, excluding CRC
CMD           8                   Command byte sent to LMS
Data          N x 8               Optional; depends on previous command
Status        8                   Optional; LMS transmits its status message only
                                  when it transfers data to the host computer
CRC           16                  CRC checksum for the entire data package
In order to correctly configure and use the LMS 200, these telegrams must be
completely understood and manipulated. The following example is a configuration
telegram that sets the baud rate to the maximum speed of 500,000 baud. Note that the request
telegram and LMS response is given in hexadecimal notation.
Interface Program: 02h/00h/02h/00h/20h/48h/58h/08h
LMS Response: 06h/02h/80h/03h/00h/A0h/00h/10h/16h/0Ah
The request telegram is disassembled and listed in Table 2.
Table 2. Disassembly of Interface Program Request Telegram

STX         02h       Start character for initiation of transmission
ADR         00h       LMS address
LENL/LENH   02h/00h   Length = 2 (2 data bytes follow)
CMD         20h       Select or change operating mode
MODE        48h       Configuration to 500,000 baud
CRCL/CRCH   58h/08h   CRC 16 checksum
The LMS response telegram is disassembled and listed in Table 3.
Table 2. Disassembly of Interface Program Request Telegram
ACK 06h Acknowledge receipt of telegram
STX 02h Start character for initiation of transmission
ADR 80h Host address
LENL/LENH 03h/00h Length = 3 (3 data bytes follow)
BMACK_TGM A0h Response to change of operating mode
BMACK_TGM STATUS 00h Mode change successful
STATUS 10h Status byte
CRCL/CRCH 16h/0Ah CRC 16 Checksum
With a thorough understanding of the telegram structures, virtually any software
program with serial communications capabilities can be made to interface with the LMS.
Thus, development of the first LabVIEW VI was initiated. Because the LMS was factory
configured to send data only on request, the proper telegram to request data was needed.
Luckily, the manual listed the necessary telegram in a section discussing the telegram
structure. The first interface VI written sent a request for data to the LMS and then displayed the hexadecimal data on an indicator. The data was then converted into decimal and displayed on a polar plot. Within a very short time, an understanding
of LMS operation had been achieved and a simple interface program was written that
enabled more thorough testing of the device, with data acquisition enabling analysis of
the results. The only problem encountered was not being able to completely configure
and use the LMS due to incorrect calculation of the checksum.
3.3 Calculation of the CRC 16 Checksum
Calculating the correct CRC 16 Checksum is essential for correct processing of
interface program requests. The checksum has a unique value for any telegram and is
calculated by the LMS with an algorithm using a polynomial generator. When a telegram
is sent to the LMS unit, it is stored in a data buffer. The CRC 16 Checksum algorithm
computes a checksum based on the data in the buffer. If the resulting checksum matches
the checksum sent with the telegram, the data is valid and an ACK symbol (06h) is
returned to the interface program along with the results of the original request telegram.
However, if there is an error in the data or the original checksum was incorrect, the LMS
will return a NACK symbol (15h). By receiving either the ACK or NACK symbols, the
host computer can determine if rebroadcast of the original message is necessary.
Although somewhat difficult to follow, the checksum is essential to ensuring valid
communications and data transfer between the host and LMS.
Because each telegram has a unique checksum, reconfiguration of the LMS
required the proper checksum. Even if a telegram was otherwise correct, an incorrect checksum would cause the LMS to respond with a NACK.
The manual provided the checksum algorithm in ANSI C. Since the initial interface
program did not analyze the LMS response for a valid checksum, only the capability to
calculate a correct checksum for a given request telegram was needed. Therefore, the
checksum algorithm provided in the manual was modified to create an executable
program that would compute the checksum for a given telegram. A simple interface that
would accept the telegram string and output the checksum was created using LabVIEW.
The simple program provided the ability to correctly calculate the checksum for any
telegram, removing any remaining roadblocks to complete interfacing with LMS.
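For reference, a minimal C sketch of the checksum routine is given below. It is adapted from the CRC 16 algorithm published for the LMS telegram protocol (generator polynomial 8005h); the names here are illustrative, but the routine reproduces the checksum of the example telegram above (02h 00h 02h 00h 20h 48h yields CRCL/CRCH = 58h/08h).

    #include <stddef.h>

    #define CRC16_GEN_POL 0x8005  /* generator polynomial used by the LMS */

    /* Compute the CRC 16 checksum over all telegram bytes preceding the CRC. */
    unsigned short lms_crc16(const unsigned char *data, size_t len)
    {
        unsigned short crc = 0;
        unsigned char prev = 0, curr = 0;

        while (len--) {
            prev = curr;
            curr = *data++;
            if (crc & 0x8000) {            /* shift out the MSB ... */
                crc = (crc & 0x7FFF) << 1;
                crc ^= CRC16_GEN_POL;      /* ... and fold in the polynomial */
            } else {
                crc <<= 1;
            }
            crc ^= (unsigned short)((prev << 8) | curr);
        }
        return crc;  /* low byte (CRCL) is transmitted first, then CRCH */
    }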
3.4 Development of LabVIEW Interface for LMS 200
The first step in refining the operation of the LMS was to reduce the number of
data points received. Because the unit was currently configured for a 180° sweep with ½° increments, a total of 733 bytes of data would be sent. The LMS was reconfigured to reduce the resolution to 1° increments, yielding a decrease in the time required to update the polar plot because the quantity of data had been cut in half. Other measures to
increase the speed were investigated. Because the LMS unit is capable of completing a
full scan in less than 30 milliseconds, the time required for serial communications can
provide a major bottleneck. Therefore, the next performance upgrade was to change the
baud rate of the LMS to 19,200 baud. When compared to the initial VI, the results were
dramatic; a 4-fold decrease in update time was now attained, making the polar plot appear
to update almost instantaneously. However, some problems were encountered with the
interface after the VI was stopped. The power supply to the LMS had to be cycled in
order to restart the VI. Because the VI changed the baud rate from the default 9600 to
19,200 baud, the VI would communicate at 9600 baud when restarted. However, the
LMS would still be expecting communications at 19,200 baud. After cycling the power,
the LMS would reboot at 9600 baud. Looking through the telegram listings, a command
was found so the baud rate could be changed permanently. Thereafter, the LMS would
always reboot at the reconfigured baud rate. The same command could be used if the
baud rate needed to be changed back to the default of 9600. Once the LMS was
reconfigured to always boot at 19,200 baud the VI worked without problems. The next
phase in development of the interface with the LMS incorporated the line-finding
algorithms that were being developed.
Since the line-finding algorithms are written in C, the LMS interface program was
modified to call the algorithms using the code interface node. With the addition of the
line-finding and control algorithms, the new program became the interface and control
program for the prototype and full-scale development.
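The project's line-finding code itself is not reproduced in this chapter. As an illustration of the kind of C routine a code interface node can call, the sketch below converts one scan of range readings into Cartesian points and fits a wall line by least squares; the data format, units, and 1-degree spacing are assumptions:

    #include <math.h>

    #define DEG_TO_RAD (3.14159265358979323846 / 180.0)

    /* Fit y = slope*x + intercept to one scan of range readings taken at
       1-degree increments; returns 0 on success, -1 for a degenerate scan. */
    int fit_wall_line(const unsigned short *range_cm, int n,
                      double *slope, double *intercept)
    {
        double sx = 0, sy = 0, sxx = 0, sxy = 0, denom;
        int i;

        for (i = 0; i < n; i++) {
            double theta = i * DEG_TO_RAD;        /* scan angle */
            double x = range_cm[i] * cos(theta);  /* Cartesian conversion */
            double y = range_cm[i] * sin(theta);
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        denom = n * sxx - sx * sx;
        if (denom == 0.0) return -1;  /* all points share one x value */
        *slope = (n * sxy - sx * sy) / denom;
        *intercept = (sy - *slope * sx) / n;
        return 0;
    }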
Chapter 4: Full-Scale Continuous Haulage System
4.1 Full-Scale Introduction
Although the prototypes provide a suitable testbed for project development, Long-
Airdox is anxious to perform testing on their full-scale models. Such testing requires
redirection of efforts and solution of new problems, but is necessary to ensure that
development includes solution of any issues pertaining solely to full-scale equipment.
Long-Airdox support for testing currently pertains mainly to providing full-scale,
production MBCs as units become available. Much scheduling is needed so each phase
of testing on the prototypes is recreated on the production models. Since testing both
prototypes and production models is expected, minimizing effort to complete the
hardware and software for both prototypes and production models is extremely important.
Therefore, as much of the existing prototype hardware and software as possible will be used for full-scale testing, especially since the VA Tech team is responsible for providing a majority of the required hardware for the initial phases of testing. It is expected that future testing
of full-scale MBCs might include hardware that is intended for production use, and will
be provided and supported by Long-Airdox.
4.2 Full-Scale Electronics Development
Since a hardware and software interface had already been developed and tested
on the prototype, the major differences between the full-scale and prototype models had
to be investigated in order to determine how much software and hardware could be
shared. Since the full-scale TRAM LEFT and TRAM RIGHT functions are controlled by
manual operation of lever-actuated valves, a microcontroller-based interface was needed.
The Long-Airdox remote controlled MBC was a logical starting point, so the hardware
used for the conversion was investigated for possible use in the automation project. In
the remote controlled MBC, the manual valve controls are replaced with Apitech Pulsar
VS-series digital pressure control valves. These digital valves require a 33 Hz PWM
signal for actuation and have a fairly simple operation; increasing the PWM duty cycle
Analyzing the flowchart in Figure 20, it can be seen that the full-scale MBC
controller is a blend of both prototype master and slave controllers. Because the LMS
units have a direct link to the control PC, a master processor acting as a “traffic cop” is
not necessary. However, this configuration is designed for flexibility and can be readily
changed to meet needs.
Upon completion and testing of the full-scale MBC controller, the prototype
interface and control VI was modified. Requiring different command output to the HC11
controller is the major difference between the prototype and production MBC interface.
Instead of sending 18 bytes of ASCII like the prototype, the output now sent six bytes of
ASCII – four bytes for velocity and two bytes for direction. With the exception of the
different output to the HC11, the interface program runs identically to the prototype.
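The exact encoding of those six bytes is not spelled out here, so the following C sketch is only illustrative of the idea: a fixed-width ASCII command with four velocity characters followed by two direction characters.

    #include <stdio.h>

    /* Hypothetical encoding: velocity 0-9999 and direction code 0-99 packed
       into six ASCII bytes, e.g. velocity 350, direction 1 -> "035001". */
    void make_mbc_command(char out[7], int velocity, int direction)
    {
        snprintf(out, 7, "%04d%02d", velocity, direction);
    }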
Figure 22 shows the communications and control layout for testing of one MBC.
[Figure 22: a laptop control PC (PCMCIA RS-485 card and serial port) communicates with the HC11 MBC controller and its valve driver module and with two SICK Optic LMS 200 laser sensors; the valve driver module drives the TRAM LT REV, TRAM LT FWD, TRAM RT FWD, and TRAM RT REV valves.]
Figure 22. Full-Scale Communications and Control Layout
As with the prototype, adding a second MBC is relatively straightforward. With
one MBC and two LMS units, a PCMCIA RS-485 card and the serial port are used.
Chapter 5: Results and Conclusions
5.1 Results
A 1/10th-scale prototype continuous haulage system was designed and fabricated
to provide a test bed for developing path planning and control algorithms, and testing
sensor technologies. With the construction of the prototype units, efforts refocused on
developing an electronics system capable of providing low-level motor control and
communications with multiple processors and a control PC. As a result of the author's experience with developing an interface between the MBC controllers and LabVIEW, an interface with the SICK Optic LMS 200 laser measurement device was completed, though success was only achieved after developing a thorough understanding of how the LMS unit operates. As a result of the LabVIEW interface development, current interface and control programs have been based on the interfaces developed for the MBC controller
and the LMS unit. Development of full-scale CHS hardware and software was required
for performing above ground trials at a Long-Airdox facility. Much of the prototype
control hierarchy was carried to the full-scale design, but a new low-level driver was
required to properly interface with the digital control valves on the full-scale MBC.
Although a total of five MBCs and Pigs have been fabricated, testing to date has involved configurations using either one or two MBCs. As the path-planning and control algorithms advance, more units in the prototype CHS will be used. Preliminary path-planning and control algorithm tests were conducted on the first prototype MBC completed. Testing has
progressed from fairly crude initial runs with the LMS unit, power supply, laptop and
cables duct taped to the MBC. The hallway outside the VA Tech Team office was used
to test navigation of the overloaded MBC. However, testing was quickly moved to the
main hallway because the increased traction provided by the carpeting and the
significantly increased weight of the model from the extra sensors and hardware placed
too much stress on the plastic tracks. With tile floors, the main hallway provides a more realistic test medium because of increased track slippage similar to what is encountered in a
mine. After some successful navigation trials through the hallway, a second MBC
operating in manual mode was added. With the manual MBC leading the way, the
autonomous second MBC is currently being tested to develop and refine the path-
planning and control algorithms. With successful completion of the initial stages of
prototype testing, hardware and software modifications were made in order to recreate
these tests on the full-scale models.
Long-Airdox secured two full-scale MBCs for use before being shipped to their
customer. Behind their Pulaski facility, portions of a mine were laid out using hay bales
and black plastic strung between fence posts. Since space was limited, the mine layout
would permit the MBC to travel along the wall and turn in one direction. With
completion of the mine walls, replacement of manual valves with the digital valves and
power supplied to the MBC, testing commenced. Controlling the MBC in manual mode
with the joystick completed verification of correct wiring and driver. With the hardware
functioning properly, testing of automatic driving proceeded. During the first few runs,
the MBC would navigate the course successfully. However, after a short time of testing,
the MBC would behave erratically when turning corners. Since this type of behavior had
not been experienced with the prototype, there was concern that the algorithms were not
robust enough. To properly assess the situation, the VA Tech Team began
troubleshooting the system to identify the possible source of the problem. Some
additional indicators were added to the interface and control VI in order to observe the
command signals while the MBC was operating. As the MBC traversed the mine layout,
the added displays showed that the MBC was not reacting to the appropriate command
signals. As the MBC would negotiate a turn, the command VI would increase the outside
track speed while decreasing the inside track speed. When the algorithms determined
that it was necessary to resume straight-ahead travel, the MBC would not respond and
would continue to turn. After repeated observation of this behavior, the MBC controller
was switched to manual mode and driven in a manner that would attempt to recreate the
odd behavior. Recreating this behavior under manual control seemed to indicate that the
MBC hydraulics were not operating correctly. After some more debugging, it became
evident that there were problems with the hydraulics system. Further testing was
postponed until the system could be debugged and fixed.
With the pause in full-scale trials while Long-Airdox employees worked to fix the
hydraulics system, efforts resumed on the multiple unit prototype CHS. Because there
were some initial problems with weak power supplies and hasty wiring, some time was
spent cleanly wiring up new power supplies and putting power buses on each MBC to
reduce local wire lengths. With the wiring completed, testing the model resumed with
one manual and one automatic MBC. A second LMS device was added to the automatic
MBC. Current testing with the MBC continues to refine the path-planning and control
algorithms. The addition of closed-loop feedback on the prototype MBC motors has been
raised as a necessity and current developmental efforts are looking at the best way to
incorporate this motor feedback.
5.2 Conclusions
Although the prototype continuous haulage system seemed to be a long time in
the making from the perspective of the author, and probably the other VA Tech Team
members, it seems to have met the requirements quite competently. The flexibility and
benefits of using LabVIEW for the interface and control program were anticipated, though not to the extent to which it ultimately aided the rapid development of this project.
Thus far, the hardware has performed effectively, for both the prototype and full-scale
models. In fact, the full-scale MBC controller has operated rather robustly in an outdoor
environment and has been very reliable.
Being heavily involved with the microcontroller aspect of this project required
much review of various electronic products in the marketplace. As a result of this
exposure, the use of more powerful microcontrollers or single board computers (SBC)
might have been a better solution since the control algorithms are continually growing in
size and complexity. This is especially true because the SBCs could effectively serve as
a lower cost simulator of the PLC in development by Long-Airdox. However, it is
doubtful that a single SBC could be purchased for the price of the combined master and
slave controllers, making budget constraints a potential concern. Additionally, all of the
current hardware and software should continue to be very functional in its current
configuration or with added slave controllers. The motor drivers and Laser-Video
Figure D-6. Diagram Panel Snapshot 5 of 7
Frame 4 of 6: The front pig and rear pig angles, and the dolly travel distance are received
by the interface program using the ‘Serial Port Read.vi’ subvi. Each 8-bit
measurement is represented by two bytes of ASCII in order to use the
‘From Hexadecimal.vi’ to convert data into decimal. Both the front and
rear pig angles use two case structures in order to determine the angle. The
measured value is compared to determine if it is equal to 128. If equal to
128, the outer case structure is TRUE and the resulting angle is 0. If not
equal to 128, the angle is given by (ANGLE VALUE – 128) * .703125.
This is true of both front and rear pig angle measurements. The dolly travel
measurement is converted into decimal. A constant, equal to 6.00 in this
case, is subtracted in order to calibrate the dolly travel to fully closed
position. The resulting value is multiplied by the constant .044444 to
convert into inches. Both the calculated angles and dolly travel distance are
wired to sequence locals to transfer the data to the next frame.
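A short C sketch of the Frame 4 conversions described above (extraction of each value from its two ASCII hexadecimal bytes is omitted; the constants are those given in the text):

    /* Pig angle: 128 counts = 0 degrees; 0.703125 degrees per count (180/256). */
    double pig_angle_deg(int value)
    {
        return (value == 128) ? 0.0 : (value - 128) * 0.703125;
    }

    /* Dolly travel: subtract the fully-closed calibration offset, then
       convert counts to inches. */
    double dolly_travel_in(int value)
    {
        return (value - 6.00) * 0.044444;
    }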
Vita
Bruce J. Wells was born in 1973 in a small Mediterranean fishing village south of
Naples, Italy. After moving to Virginia, he spent his formative years fishing on the
Chesapeake Bay, restoring classic cars and participating in soccer, wrestling and football.
Attending VA Tech seemed a logical choice after graduation from high school because of
the breadth of academic majors found at VA Tech and its reputation as a top party school.
Although originally undecided, Bruce chose to enter the Department of Mechanical
Engineering as a means to gain greater knowledge of building racecars. As a result, he
became heavily involved with the VA Tech Formula SAE car project; a project in which
students design, construct, test and compete in an open-wheeled, ‘Formula-1’ style
racecar. After receiving a B.S.M.E. in May of 1995, Bruce went on to work as a rocket
scientist for almost two years before deciding to return to graduate school. Since
returning, he has become heavily involved with electronics and microprocessor-
controlled gizmos, in addition to strengthening his mechanical design abilities. However,
his dedication to graduate school had a large impact upon his fishing and outdoor
adventures. Thus, Bruce will be looking forward to working near a large river on which
he can kayak and fish to his heart's content.
Developing a Novel Ultrafine Coal Dewatering Process
Michael H Huylo
Abstract
Dewatering fine coal is needed in many applications but has remained a great challenge.
The hydrophobic-hydrophilic separation (HHS) method is a powerful technology to address this
problem. However, organic solvents in solvent-coal slurries produced during HHS must be
recovered for the method to be economically viable. Here, the experimental studies of recovering
solvents from pentane-coal and hexane-coal slurries by combining liquid-solid filtration and in-
situ vaporization and removing the solvent by a carrier gas (i.e., drying) are reported. The filtration
behaviors are studied under different solid mass loading and filtration pressure. It is shown that
using pressure filtration driven by 20 psig nitrogen, over 95% of solvents by mass in the slurries
can be recovered, and filtration cakes can be formed in 60 s. The drying behavior was studied
using nitrogen and steam at different temperatures and pressures. It is shown that residual solvents
in filtration cakes can be reduced below 1400 ppm within 10 s by 15 psig steam superheated to
150 °C, while other parameter combinations are far less effective in removing solvents. Physical
processes involved in drying and the structure of solvent-laden filtration cakes are analyzed in light
of these results. |
Developing a Novel Ultrafine Coal Dewatering Process
Michael H Huylo
General Audience Abstract
Coal particles below a certain size are discarded to waste tailing ponds as there is no
economically viable method for processing them. However, a new process called hydrophobic-
hydrophilic separation offers a solution to this problem. A hydrophobic solvent is used to displace
water from a coal-water slurry, and it is then easier and cheaper to filter and dry this new coal-
solvent slurry. In this work experimental studies of recovering solvents from pentane-coal and
hexane-coal slurries by combining filtration and drying are reported. The filtration behaviors are
studied under different solid mass loading and filtration pressures. It is shown that using pressure
filtration driven by 20 psig nitrogen, over 95% of solvents by mass in the slurry can be recovered,
and filtration cakes can be formed in 60 s. The drying behavior was studied using nitrogen and
steam at different temperatures and pressures to evaporate any remaining solvents. It is shown that
the remaining solvents in filtration cakes can be reduced below 1400 ppm within 10 s by using 15
psig steam superheated to 150 °C as a drying medium, while other parameter combinations are far
less effective in removing solvents. Physical processes involved in drying and the structure of
solvent-laden filtration cakes are analyzed in light of these results. |
Acknowledgement
I would like to thank my three co-advisors, Dr. Qiao, Dr. Yoon, and Dr. Noble, for all of
their guidance and support. I am especially grateful to Dr. Qiao for serving as my primary
advisor. I am thankful for the opportunity to have been hired as a graduate research assistant, and
to have been able to work on this project. I would also like to thank Dr. Liu for serving on my
committee.
Additionally, I appreciate all the assistance I received in performing my research from
Dr. Kaiwu Huang, Dr. Serhat Keles, Jim Reyher, Dr. Mehdi Ashraf-Khorassani, Glen Brock, and
Chad Sechrist.
Thank you to my friends and lab mates Dr. Hai Wu, Seokgyun Ham, David Moh,
Hongwei Zhang, Jacob Wilson, Xin Wang, and Mehran Islam for their support, collaboration,
and companionship.
Thank you to my mother, sister, and brother for their support and my father for his
guidance in the engineering profession.
The support of the United States Department of Energy, National Energy
Technology Laboratory through NETL-Penn State University Coalition for Fossil Energy
Research (UCFER, contract number DE-FE0026825) is gratefully acknowledged.
Chapter 1. Introduction
Coal has been a significant source of energy production in the United States for centuries.
Widely available and relatively cheap, it was the dominant domestic fuel source for electricity
production until being surpassed by natural gas in the last five years. Total coal consumption in
the U.S. has fallen from 1 billion short tons in 2010 to less than 500 million short tons in 2020 [1].
However, coal continues to produce more electricity in the U.S. than renewables or nuclear
sources. While electricity generated from coal is projected to decrease as a percentage of overall
energy generated, the rapid increase in world energy generation will allow overall coal use to
remain relatively consistent [1]. In addition to this, the market for high-quality metallurgical coal
is growing. Metallurgical coal is required for coke production and steelmaking processes.
Therefore, there continues to be demand for high-quality coal production, and the industry remains
worthy of technological investment as far as remediating waste and environmental hazards [1].
As coal mining developed from underground miners and carts to high-volume large machinery, the quality and particle size of the coal produced have decreased. This has resulted in
the need for more efficient processing and cleaning. Raw coal removed from the ground contains
many impurities that must be removed, and this removal process can be achieved in many ways
depending on particle size.
Larger particles can be separated from impurities based on differences in density. Medium size
to smaller particles can be separated using cyclones, bed separators, or spirals. The smallest and
most challenging to process particles require using flotation cells. This usually involves using air
bubbles injected into a water tank to carry coal particles to the surface where they can be extracted.
The waste particles are left behind at the bottom of the tank. The smallest particles, typically below
40 microns, are rejected to waste because there is no economically effective means of processing
them. As of 2002, 70-90 million tons of small and fine coal tailings were produced in the U.S. each
year but discarded due to the difficulties of dewatering them [3]. The discarded fine coal not only
causes a significant economic loss but also creates environmental pollution concerns. As of 2002,
it was believed that there might be up to 2 billion tons of ultra-fine slurry located in waste tailing
ponds [3]. While overall coal production has slowed since 2002, the total mass stored in tailing
ponds has only increased.
In addition to waste minerals and particles that must be removed, moisture is also
considered a contaminant. This becomes even more problematic when dealing with flotation-sized
particles because they are immersed in water in the flotation tank. The excess water then becomes
very difficult and expensive to filter and evaporate, as described later in this section.
Dewatering of particulate materials is an essential operation not only in coal, but nearly all
other mined commodities. Further, other diverse applications such as pharmaceutical manufacturing also require dewatering. Unfortunately, existing dewatering techniques often suffer
from high cost, low scalability, and low efficiency. Consequently, many industries are still
significantly hindered by the lack of effective dewatering technologies. For example, in coal
mining, coal particles less than 1 mm in size account for approximately 10% of the total product
but can contain more than one-third of the total moisture [5].
Presently, there are two main strategies to dewater fine coal. One is to thermally evaporate
water using fluidized beds, multi-louvered systems, or flash-type systems [6]. These methods are
often costly and can produce fugitive dust and toxic elements that can escape into the environment.
Indeed, it has become difficult to obtain permits for thermal dryers in the U.S. due to environmental
requirements [2]. While thermal dryers can provide a dry final product (moisture <10%), coking
properties of coal are negatively affected, and the energy and installation cost make the drying
process economically inviable except in rare circumstances. Usually, the dryers operate using
convection via hot combustion gases to dry wet fine coal products.
The other strategy is to use mechanical means such as filters and centrifuges. The mechanical
approach is inefficient due to the high pressures needed. According to Poiseuille's equation, which
governs fluid flow through a filter cake, a ten-fold decrease in pore size (and thus particle size) would require a 10⁴-times increase in pressure drop (ΔP) across coal filtration cakes to obtain the
same dewatering rate. In effect, mechanical dewatering has reached its limit, which is partly why
the industry continues to discard coal fines to impoundments. It is apparent that pressure filtration
is not viable for dewatering very small coal particles, and a new method is needed.
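As a sketch of this scaling argument, Poiseuille's law for the flow rate Q through a single capillary of radius r and length L gives

\[
Q = \frac{\pi r^4 \, \Delta P}{8 \mu L} \quad\Longrightarrow\quad \Delta P = \frac{8 \mu L Q}{\pi r^4} \propto r^{-4},
\]

so holding the flow rate fixed while shrinking the pore radius ten-fold multiplies the required pressure drop by 10⁴.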
Recently, a team in the Mining and Minerals Engineering department at Virginia Tech
developed a novel dewatering and cleaning process known as hydrophobic-hydrophilic separation
(HHS), which has no lower particle size limit in solid-solid separation and produces practically
dry coal in solid-liquid separation [7-9]. To begin the HHS process, a nonpolar solvent, typically
short-chain alkanes with low surface tension and boiling point (e.g., liquid hexane), is introduced
to a water-coal slurry. This causes the coal particles to move from an aqueous phase to a solvent
phase.
The transfer of hydrophobic particles from the water phase to the organic solvent phase is
thermodynamically spontaneous and depends on surface tensions, contact angles, etc. Mechanical
mixing can be introduced to accelerate the water-solvent phase change process. Calculations for
the adsorption of a hydrophobic particle to the oil-water interface are similar to the adsorption of
a hydrophobic particle to an air bubble, as used in froth flotation. However, the greater contact
angle provided by the oil-solid interface leads to HHS being more effective at coal cleaning than
air bubble froth flotation [2].
Some coal particles will transfer from the aqueous phase to the solvent phase as independent
particles. However, there will be remaining agglomerations that have trapped water droplets. This
phenomenon requires using a reactor to vibrate the mixture and free any trapped water droplets.
Once the particles are entirely in the solvent phase, the water and solvent are separated by gravity.
The replacement of the water with solvent is not dependent on particle size. Therefore, the cost of dewatering does not grow steeply as particle size decreases, a problem that has plagued existing technologies.
Next, solvents are recovered from the solvent-coal slurry. To recover the solvents, a filtration
step is used first (see Figure 1a). Here, the slurry is placed above a porous filter, and an inert gas
is used to drive solvents through the filter. As the gas displaces liquid solvents, a filtration cake
gradually forms on the filter. Due to the high gas pressure, liquid solvents are driven through the
cake continuously until the gas breaks through the filtration cake. The filtration rate is determined
by pressure drop, capillary pressure of solvents within the cake structure, and solvent viscosity
(see Eq. 2 in Section 3.1) [10]. In past HHS experiments, N₂ has been used to drive filtration [11,
12]. At the end of the filtration step, most of the solvent initially present in the solvent-coal slurry
can be recovered. However, some residual solvents remain trapped inside the filtration cake.
Because solvents are expensive, these residual solvents must be recovered for the HHS technology
to be economically viable.
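Eq. 2 itself appears later in the thesis; as a generic sketch of the dependence just described, a Darcy-type expression for the filtration rate is

\[
Q = \frac{k A}{\mu L}\left(\Delta P - p_c\right),
\]

where k is the cake permeability, A its cross-sectional area, L its thickness, μ the solvent viscosity, ΔP the applied pressure drop, and p_c the capillary pressure of the solvent within the cake structure.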
At present, HHS experiments have utilized thermal dryers with electrically powered screw
conveyors to vaporize and recover the residual solvents. These have proven to be very effective at
removing solvents from the filtration cake. However, they are not ideal when scaled up for pilot
plant or commercial use due to their high equipment and installation costs, as well as difficulties
in integrating with the equipment for filtration of solvent-coal slurries. It is necessary to develop a
more efficient means of evaporating spent solvent to commercialize the HHS process.
It would be ideal to develop an in-situ solvent recovery scheme that integrates the above liquid-
solid filtration step with a solvent vaporization and removal step in a single device. Following the
first filtration step shown in Figure 1a, a second step is introduced: a carrier gas is pumped through
the filtration cake to vaporize and remove the residual solvents (see Figure 1b). The second step is
hereafter referred to as the drying step. The envisioned scheme will allow significant equipment
savings and easy integration into existing filter systems. Potentially useful carrier gases include nitrogen, heated nitrogen, and superheated steam; these were chosen because they are inert, based on previous HHS work conducted by other students. The speed and effectiveness of the solvent
recovery in the second step depend critically on the distribution of liquid solvents inside the
filtration cake. After gas breakthrough, the distribution of nonpolar liquids in a porous cake with micron-sized, complex-shaped particles is not yet well understood. There are two possible limiting
scenarios (see Figure 1b). In the first scenario, the residual solvents form a continuous film
spanning across the surface of carrier gas pathways. In the second scenario, the residual solvents
exist as isolated clusters that are sparsely dispersed in the filtration cake. Solvent recovery is
expected to be facile in the first scenario but more difficult in the second scenario.
Figure 1: Two-step, in-situ recovery of nonpolar solvents from solvent-coal slurries. In Step I, an N₂ gas
drives solvents through a filter paper/cloth, forming a filtration cake (panel a). In Step II, which starts after
the gas breaks through the filtration cake, a carrier gas is pumped through the cake to vaporize and remove
the residual solvents (panel b). The schematics in (b) show two limiting scenarios that are possible for the
distribution of residual liquid solvents (colored in green) at the beginning of Step II.
Previous drying studies of porous materials display some characteristic behaviors. Typically,
drying occurs in two stages. First, there is a “Constant Rate Period”, where moisture content of a
sample decreases linearly with time. This stage ends when moisture reaches a point called the
“Critical Moisture Content”. Upon reaching this point, the drying rate changes to the second stage,
the “Falling Rate Period”. Here, the drying rate continues to decrease until reaching zero at the
point where sample moisture reaches an equilibrium state with the drying medium [13, 14]. The
behavior of solvent removal from a porous particle cake driven by a carrier gas has not been studied
previously, and it is unknown if these same characteristics occur.
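A minimal sketch of this classical two-stage behavior (the rate constant k and the moisture parameters below are illustrative, not values measured in this work) is

\[
\frac{dM}{dt} =
\begin{cases}
-k, & M > M_c \\[4pt]
-k \,\dfrac{M - M_e}{M_c - M_e}, & M_e \le M \le M_c
\end{cases}
\]

where M is the moisture content, M_c the critical moisture content, and M_e the equilibrium moisture content at which drying stops.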
It is the goal of this work to experimentally investigate the envisioned in-situ solvent recovery
scheme. The operation of this scheme involves many parameters such as filtration pressure, solid
loading in slurry, and type, pressure, and temperature of the carrier gas. These parameters will be
explored to determine solvent recovery efficacy. Because the envisioned scheme is intended for
Chapter 2. Materials, Experiment Setup, and Methods
In the in-situ solvent recovery scheme shown in Figure 1, most of the solvent is recovered from
the slurry during the filtration stage. Only a small remaining portion is removed from the filtration
cake via vaporization during the drying stage. To simulate the in-situ solvent recovery process in
laboratory experiments, first, a slurry was prepared and poured into a pressure cylinder. Next, a
filtration cake was formed by pressure filtration, during which liquid solvent was recovered. Last,
any residual solvent in the filtration cake was vaporized and carried out of the filtration cake by a
drying gas stream. Experiments were performed for both the filtration and drying stages, and the
experimental details are provided in this section.
2.1 Materials
The coal was originally prepared from a screen bowl effluent in a commercial plant. The as-
received material was processed through a pilot-scale HHS plant housed in the VT Mining and
Minerals Engineering research facility. The particle size distribution of the clean coal sample was
found using a Microtrac particle characterization analyzer. A small sample of several grams was loaded into the machine, and the percentiles were determined by light scattering.
The coal sample has a D₈₀ particle size of 43.25 μm. The overall particle size distribution for the
sample is displayed in Table 1. Additionally, Figure 2 shows a cumulative line plot of the particle
size distribution.
Hexane was chosen as the primary solvent to be evaluated for this work. Pentane was also
evaluated to serve as a reference. Relevant properties of these solvents are shown in Table 2.
Table 2: Properties of solvents used in this work at 20 °C and 1 atm.
Solvent    Density (g/cm³)    Viscosity (mPa·s)    Boiling point (°C)    Vapor pressure (kPa)
Pentane    0.626              0.250                36.0                  57.3
Hexane     0.659              0.310                69.0                  16.0
2.2 Filtration tests
2.2.1 Experimental Setup and Methods
Filtration tests were performed to determine a reasonable filtration pressure. In these tests,
nitrogen (N ) gas was used because our solvents are combustible. In most tests, pressure filtration
2
was conducted using pressurized N to displace liquid solvents from the coal slurries. It is desirable
2
to use low applied pressure as the energy cost of filtration rises sharply with increasing pressure.
However, reducing the applied pressure increases filtration time and decreases the throughput in
practical operations. Table 3 summarizes the filtration pressure and other filtration parameters used
in the filtration tests.
Table 3: Conditions for the pressure filtration kinetics experiments.
Solvent Pressure (psig) % Solids by weight Coal Mass (g)
Pentane 20, 40, 60 10, 15 25
Hexane 20, 40, 60 10, 15 25
Before the filtration experiments, a coal slurry was prepared. 25 g of coal are weighed on
a mass balance. Next, the necessary solvent volume is measured in a graduated cylinder based on
the target solid weight fraction of the slurry (10% and 15% were adopted in this study). As
required by the HHS process, the solvent is typically a hydrophobic hydrocarbon. To be viable for
pilot plant use and later commercial use, the solvent must have a relatively low viscosity so that
filtration can be performed rapidly while not being overly volatile due to safety considerations.
Next, the coal was placed into a glass beaker with a magnetic stir bar and the solvent was poured
over it. The solution was mixed on a magnetic stir plate for 8 minutes before use in the filtration
tests.
Figure 3 shows the schematic of the apparatus assembled for measuring solvent filtration
kinetics. The apparatus consists of a pressure filtration cylinder, a filter cloth and paper, a solvent
collection beaker, and a mass balance. The cylinder measures 8 in. high, and has a 2.5-in. ID and
3.5-in. OD. At the top of the cylinder there is a 0.25-in. inlet where the nitrogen was injected. At
the base of the cylinder were a filter cloth and a 5-micrometer filter paper used to form a particle
cake. Downstream of the filter cloth/paper was the bottom cover, and it has a 0.25-in. plastic outlet
hose, which directed the filtered solvent into a glass Erlenmeyer flask resting on a mass balance.
The mass balance was connected to a computer with data logging software, which recorded the
solvent collected every 0.2 seconds. The resulting data was used to develop filtration plots.
Figure 4: A photo of the filtration testing setup.
In Figure 4 the mixing beaker is shown on the left side of the photo. It is located on top of
the magnetic stir plate, and a stir bar is inside of the slurry in the beaker. The pressure filter is
located on the right side of the photo in the fume hood. The top cap is removed and located on the
left. Yellow coiled nitrogen gas tubing is to the right of the cylinder, and is connected to a nitrogen
reservoir tank located outside of the photo. In front of the hood the solvent collection beaker is
placed on top of a balance. A cable runs from the balance to a laptop that is not shown in the photo.
The laptop utilizes software to record the amount of solvent collected versus time. Figure 5 is a
photo of a coal cake after it was removed from the cylinder at the completion of filtration.
Figure 5: A coal cake removed from the pressure filtration cylinder after the completion of an experiment.
In addition to the pressure filtration presented above, one vacuum filtration test was also
conducted. For large-scale, commercial applications, vacuum filtration is often more economical
than pressure filtration. It is thus useful to determine whether a filtration cake formed by high-
pressure nitrogen filtration dries differently than that of a cake formed by vacuum filtration. To
accommodate vacuum testing, the tube at the discharge of the cylinder was connected to a vacuum
pump inlet. A second tube was run from the vacuum pump outlet to the collection beaker. Below
is the detailed step-by-step procedure for conducting the filtration testing.
2.2.2 Experimental Procedure
1. Prepare the coal-solvent slurry by weighing out the coal sample (ten or fifteen percent of the slurry by mass);
2. Pour the ninety or eighty-five percent by mass solvent into the mixing vessel with the coal sample;
3. Place the mixing vessel on the mixing plate for eight minutes and cover with a glass dish to prevent splashing and unwanted evaporation;
4. Place the collection beaker onto the digital balance;
5. Prepare the mass balance analysis software on the connected laptop;
6. Place a new single use filter paper onto the reusable filter cloth and install both on the
bottom screw cap of the pressure filter;
7. Connect the nitrogen hose to the top of the pressure filter;
8. Set the desired nitrogen delivery pressure on the pressure gauge;
9. Wait until the eight-minute mixing time has been reached;
10. Turn off the mixing plate;
11. Remove the mixing vessel from the plate and pour the slurry into the pressure vessel;
12. Place the top screw cap onto the cylinder and tighten;
13. Start the mass balance software on the laptop;
14. Open the nitrogen shutoff valve;
15. Inject nitrogen into the pressure cylinder until the gas has broken through the cake and
there is no more solvent flow into the collection beaker;
16. Turn off the nitrogen shutoff valve;
17. Turn off the mass balance software;
18. Save the mass balance file with the appropriate pressure, solvent, and solids percentage;
19. Remove the bottom screw cap;
20. Dispose of the cake sample and paper filter;
21. Remove the top screw cap;
22. Clean the inside of the cylinder with water and disposable towel;
23. Wait until the solvent and cylinder have returned to room temperature before performing
another trial;
2.3 Solvent vaporization and removal tests
As seen in Figure 1 in the introduction, the solvent trapped in micro-capillaries is difficult to
remove by filtration. Instead, the residual solvent in the particle cake formed via filtration must be
vaporized and then flushed out of the cake by a carrier gas. A key parameter of this process is the
choice of carrier gas. Options for solvent vaporization media in HHS are limited to those
preventing combustion. Two potentially feasible options are nitrogen and superheated steam, both
of which have relative advantages and disadvantages.
Nitrogen is easy to work with and requires fewer design considerations than superheated steam.
A nitrogen reservoir tank along with a pressure gauge/controller and some plastic tubing are all
that is needed to deliver nitrogen gas for drying. However, it has a lower heat capacity and may
require very high delivery temperatures, or very high gas flow rates which lead to high required
pressures for drying.
Steam has a higher heat capacity, and at similar pressures and temperature, may provide faster
drying. When drying with superheated steam, several physical processes are involved. First, the
coal cake to be dried will have its temperature raised to the saturation temperature of the steam.
As this happens, some of the steam will condense on the coal and the surfaces of the drying vessel.
Meanwhile, the solvent in the cake is vaporized. Once the material has reached the steam saturation
temperature, the condensed water evaporates back into the superheated steam. The condensation
from the superheated steam and re-evaporation of condensed water allows this method to control
both the solvent content and water content in the dried coal. This is advantageous because if the
coal is too dry, it poses a safety hazard during transportation. The disadvantages of superheated
steam are that it requires a more complicated production process and added considerations for
dealing with condensate, among other issues. Both nitrogen and superheated steam will be tested
as part of this work.
Superheated steam-based drying is a well-studied, developed technology used in other drying
industries. It is frequently utilized in drying food, grains, and minerals [16-25]. Existing steam
drying works have several differences from this work. Most use superheated steam to dry products
containing water, but not organic solvent. Additionally, most works focused on drying using
fluidized beds, impingements jets, rotary drums, or belt systems as opposed to a pressure cylinder
[17, 20-23, 25, 26]. The process studied here is unique in that vaporization is step two of a two-
part solvent recovery method. Therefore, drying is conducted in the same vessel used for filtration
in step one; superheated steam flows through the material to be dried (coal cake), and the drying
product remains in contact with the pressure vessel enclosure.
2.3.1 Experimental Apparatus Preliminary Design
The apparatus shown in Figure 3 was modified so that it can be used to perform both filtration
and solvent vaporization and removal. A preliminary design for the modification of the apparatus
is shown in Figure 6.
Figure 6: A preliminary design schematic of modifications to the filtration equipment to accommodate
nitrogen filtration, nitrogen drying, heated nitrogen drying, and superheated steam drying.
Initially the preliminary design shown in Figure 6 was given to several contractors and
equipment suppliers to be competitively bid. After receiving bids, it was determined that some
changes would need to be made to this design to be within the project’s financial budget. Careful
decisions were made to allow cost reduction without drastically reducing the quality and accuracy
of the proposed experiments. Originally, it was desired to purchase a factory-made heat exchanger
with a factory provided heating source and controls as shown in number thirteen and number
eleven in Figure 6. This item alone was worth twice as much as the steam boiler itself and a
decision was made to remove it from the proposed system. In its place, a plate heat exchanger, and
electrical heating tape with a thermostat were purchased separately. This reduced the cost by a
factor of twenty and still allowed accurate temperature control. Thermocouples were added
upstream, downstream, and within the plate itself to provide both temperature control and
monitoring. Later in this section, calculations are provided to select both the heat exchanger length,
and the output power of the electrical heating tapes.
The boiler feedwater design proved to be an additional cost issue. Connecting to the
building water supply piping required the work of a licensed plumber and purchasing additional
backflow safety valving to satisfy local building code requirements. It was determined that it would
be cheaper and more effective to use a small water reservoir to provide feed water to the boiler. It
was anticipated from previous data that drying should not take longer than sixty seconds. Given
that the boiler reservoir can be set to eighty psig, and is then regulated to a lower pressure
downstream, many trials can be run before the feed pump needs to turn on and refill the boiler.
Later when this system is scaled up to a pilot plant, or commercial use, a constant water supply
from a building source will be required, but that is not the case for bench scale experiments. Last,
before finalizing this change, it was verified that the boiler feedwater pump’s net positive suction
head was below that of the head provided by atmospheric pressure.
Two options are presented when purchasing a steam boiler. The boiler heating source can
be electrically powered, or it can generate steam through the burning of natural gas or some other
fuel. For large scale commercial use the energy savings would necessitate the use of a gas boiler,
but at the bench scale an electric boiler offers several conveniences. First, there are no products of
combustion that require venting with an electric boiler. Due to the location of the lab where these
experiments will be performed, venting products of combustion to the outdoors could cost upwards
of ten thousand dollars. Second, an electric boiler does not require installing natural gas valving,
piping, and burner controls. It is far cheaper and more convenient to purchase an electric boiler
and use a twenty-foot flexible power cable. These advantages led to the decision to choose the
electric boiler option.
Due to the relatively small flow rate of steam required to dry such a small sample, it was a
safe assumption that the smallest available boiler in the product line would have an adequate
capacity. However, a precautionary flow rate test was performed to provide verification. A gas
flow rate measuring device was acquired, and two tests were run to determine the flow rate of
nitrogen through a completely formed particle cake. At 15 psig the flow rate was 15,000 sccm
(standard cubic centimeters per minute). Knowing the viscosities of nitrogen and steam, it is possible to roughly predict the flow rate of steam at 15 psig; the result is 13,700 sccm. Multiplying this flow rate by the density of 15 psig superheated steam, it is found that no more than 1 kg/hr of steam should be required for drying the experimental coal cake. The smallest available electric boiler produces approximately 4 kg/hr and therefore will be adequate. The electric steam boiler that was
purchased for the experiments is shown in Figure 7. It was delivered on a wood pallet and included
with a boiler water feed pump.
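As a rough check of that mass-flow estimate (taking ideal-gas density at 15 psig ≈ 2 atm absolute and a steam temperature near 120 °C; the property values here are assumptions):

\[
\rho \approx \frac{P M}{R T} = \frac{(2.0\times10^{5}\,\mathrm{Pa})(0.018\,\mathrm{kg/mol})}{(8.314\,\mathrm{J/(mol\,K)})(393\,\mathrm{K})} \approx 1.1\ \mathrm{kg/m^3},
\]

\[
\dot m \approx (13{,}700\ \mathrm{sccm})\,(60\times10^{-6}\ \mathrm{m^3/hr\ per\ sccm})\,(1.1\ \mathrm{kg/m^3}) \approx 0.9\ \mathrm{kg/hr},
\]

consistent with the estimate of no more than 1 kg/hr above.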
Figure 7: The electric steam boiler purchased for use in the drying experiments.
The purchased steam boiler was delivered with a 120-volt single phase boiler feed pump
and a pressure control system ranging from 0 to 100 psig. The boiler feed pump can be plugged
into a traditional wall outlet. The electric heating source itself is 480-volt 3 phase and will need to
be wired by an electrician. After the steam boiler was purchased, it was necessary to determine the
heat exchanger length required for superheating the steam and heating the nitrogen gas.
Given that an estimated flow range was determined from experimental measurement, this
value was used to calculate the required length of the heat exchanger used to superheat the steam
and heat the nitrogen. The heat exchanger chosen was a copper tube that makes several passes
through a steel plate. The plate itself is heated to a required temperature by an electric heating tape
with a thermostat control. Assuming the plate will have a uniform temperature, and that the copper
tube inside the plate has a uniform wall temperature, a calculation can be performed to determine
the length required to raise the temperature of the fluid flowing through the exchanger to near the
temperature of the pipe wall. This calculation was performed using Eq. (1) [27] which models fully
developed flow of a fluid through a round pipe with constant wall temperature.
\[
T_0 - T_s(x) = (T_0 - T_1)\,\exp\!\left(-\frac{\alpha\,\mathrm{Nu}}{r_0^2\,U}\,(x - x_1)\right) \tag{1}
\]
In Eq. (1), T₀ is the constant temperature of the heat exchanger's pipe wall, T₁ is the temperature of the fluid entering the exchanger, and Tₛ(x) is the fluid temperature a distance x along the tube. For the purposes of these experiments, no more than 50 degrees of superheat will be necessary; the temperature limits of the system and its components are a limiting factor, and 50 degrees of superheat on top of the saturation temperature of steam at the highest test pressure brings the system close to those limits. The inlet temperature difference (T₀ − T₁) is therefore roughly 50 degrees. If the temperature of the fluid leaving the heat exchanger comes within 2 degrees of the pipe wall, this will be a satisfactory result, so the left-hand side of the equation is set equal to 2 degrees at the exchanger outlet. The mean velocity of the fluid through the tube (U) has been determined from the experimentally measured flow rate. The pipe radius (r₀), gas thermal diffusivity (α), and kinematic viscosity (ν) are known. Solving Eq. (1) using these terms results in a distance (x − x₁) of
approximately 18 inches. The next size up after 18” is the 24” model. The 24” heat exchanger was
chosen for purchase and will provide sufficient heat exchange surface and a factor of safety. Next,
the electric heating source for the heat exchanger can be chosen.
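A short C sketch of this sizing calculation follows. The Nusselt number Nu = 3.66 is the standard constant-wall-temperature value for laminar pipe flow; the remaining parameter values are illustrative assumptions chosen only to show the calculation, not the exact inputs used in this work.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double Nu     = 3.66;     /* laminar flow, constant wall temperature */
        double alpha  = 1.2e-5;   /* steam thermal diffusivity, m^2/s (assumed) */
        double r0     = 0.00475;  /* tube inner radius, m (9.5-mm tubing) */
        double U      = 0.28;     /* mean gas velocity in the tube, m/s (assumed) */
        double dT_in  = 50.0;     /* wall-to-gas temperature difference at inlet, K */
        double dT_out = 2.0;      /* target difference at the outlet, K */

        /* Invert Eq. (1): L = -(r0^2 * U) / (alpha * Nu) * ln(dT_out / dT_in) */
        double L = -(r0 * r0 * U) / (alpha * Nu) * log(dT_out / dT_in);
        printf("Required heated length: %.2f m (%.1f in)\n", L, L / 0.0254);
        return 0;
    }

With these inputs the program prints a length of roughly 0.46 m (about 18 in), the same order as the result quoted above.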
The easiest electric heating source to be used for the heat exchanger is a flexible electric
heating tape with a built-in thermostat. Various lengths, wattages, thicknesses, and voltages are
available. Knowing the plate volume, density, and material specific heat, the heat transfer equation
can be used to compare the required heating times of several heating tape options. All three of the
options would be capable of heating the heat exchanger to the required temperature but will vary
in heating times. To perform the experiments in a reasonable time frame, it is desirable that the
heating take no more than 10 minutes. Three potential options are listed in Table 4.
Table 4: Potential options for heating supply to the heat sink.
Option # Thickness Length Wattage Time required
1 0.5” 6 ft 216 17 min
2 1” 4 ft 144 25 min
3 1” 6 ft 432 8.5 min
Option 3 was determined to be the most practical to reduce wait times between experiments, and
was purchased and installed with the heat exchanger.
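As a sketch of that comparison for Option 3 (the plate mass and temperature rise below are estimates from the plate dimensions, not values quoted in the text):

\[
t \approx \frac{m\,c_p\,\Delta T}{P} \approx \frac{(1.5\ \mathrm{kg})(900\ \mathrm{J/(kg\,K)})(170\ \mathrm{K})}{432\ \mathrm{W}} \approx 530\ \mathrm{s} \approx 9\ \mathrm{min},
\]

close to the 8.5 minutes listed in Table 4.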
Last, it would be helpful to perform a calculation to determine the maximum pressure
experienced by the stainless-steel cylinder in a situation where the entrained cake solvent is
evaporated instantly. The results of this calculation will determine the worst-case high-pressure
scenario. If the resulting pressure is within the allowable range of the pressure cylinder, then there
will not be any danger of the cylinder bursting during testing. This calculation is performed strictly
as a safety concern. Assuming the vaporized solvent adheres to the ideal gas law, and knowing the
amount of liquid solvent volume in a saturated filter cake, one can calculate the pressure inside the
cylinder if that volume of entrained solvent were instantly vaporized. It was calculated that the
highest possible pressure experienced by the pressure cylinder is 12 psig above the filtration/drying
pressure. This is far below the allowable pressure of 200 psig for the cylinder. The possibility of
the cylinder bursting due to rapid solvent evaporation is confirmed to not be a safety concern.
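As a sketch of this safety check (the 3 mL of residual solvent and the cylinder free volume are assumed values, not the exact inputs used in this work): for hexane,

\[
n = \frac{\rho_\ell V_\ell}{M} = \frac{(0.659\ \mathrm{g/cm^3})(3\ \mathrm{cm^3})}{86.2\ \mathrm{g/mol}} \approx 0.023\ \mathrm{mol},
\qquad
\Delta P = \frac{n R T}{V} \approx \frac{(0.023)(8.314)(298)}{6.4\times10^{-4}\ \mathrm{m^3}} \approx 8.9\times10^{4}\ \mathrm{Pa} \approx 13\ \mathrm{psi},
\]

the same order as the 12 psig figure above.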
2.3.2 Experimental Apparatus Fabrication
After having performed all necessary calculations, the required materials to construct the
steam system were purchased. An electric steam boiler (Sussman MBA series 3 kW model) is used
to provide the steam. It can produce 9 lbs. per hour of saturated steam. It uses a 120-volt single-
phase boiler feed pump and a pressure control system ranging from 0 to 100 psig. The electric
heating source is a 480-volt 3 phase electric resistance heater, and the heat exchanger is ASME
rated up to 100 psig. The pressure regulating valve has a 3 to 25 psig setting range and a temperature
rating up to 205°C. It is a McMaster-Carr #4674K63 cast iron model with 0.5-in. inlet and outlet.
It has a bronze diaphragm with a PTFE seal and includes an internal strainer.
Downstream of the pressure regulating valve is a shutoff valve arrangement and a 4-pass
high contact cold plate (AAVID Thermalloy). The plate is constructed of extruded aluminum with
9.5-mm. copper tubing. It is 5-in. wide by 12-in. long and 0.55-in. deep. The plate is preheated to
the desired temperature and functions as a heat sink during the drying experiment. Its mass is large
enough so that the gas temperature at its exit is maintained at the desired temperature setting during
the short drying cycle. The gas temperature at the exit is within 2 °C of the plate itself. Hot system
piping/hosing and the cold plate were wrapped in an electrical heat trace heating tape. The heating
tape (model HSTAT101006, BriskHeat) has a thermostat control and provides electric heat as
required to raise the temperature at the cold plate, and to maintain temperatures and prevent
condensation in piping/hosing.
In addition to the above major components, Nickel-plated brass-bodied check valves with
stainless steel springs were installed to protect the nitrogen and steam reservoirs. The check valves
are 0.375-in. piston type. System piping includes both 0.5-in. type K copper and 0.5-in. cast iron.
All hot parts of the system except for the cylinder are insulated with 2-in. rigid fiberglass insulation
with vinyl facing suitable for temperatures up to 230°C.
Omega type-K plug thermocouples were used to monitor temperature at various points in the system. They are suitable for temperatures up to 650°C and have a 6-foot cable insulated with fiberglass and covered with a stainless-steel sheath. The thermocouples are wired to a MadgeTech TCTempXLCD datalogger to record temperatures throughout the experiment. The datalogger has four channels, ranges from -270 to 1370°C for type K thermocouples, is accurate to within ±0.5°C, and can record temperatures every 0.1 s.
Steam and nitrogen pressures in the cylinder are measured with an Ashcroft commercial pressure gauge suitable for steam, rated for use up to 100 psig, and accurate to within ±3% of span.
Figure 8: A fabrication design schematic of the modifications to the filtration equipment to accommodate
nitrogen filtration, nitrogen drying, heated nitrogen drying, and superheated steam drying. This design will
be used for preliminary system testing and may require changes during troubleshooting.
In Figure 8, the combined steam and nitrogen drying system is shown. Starting at middle
left, feedwater is stored in a reservoir open to the atmosphere. From here, water is pumped by the feedwater pump, through a check valve, and into the boiler. On the front of the boiler system there is a pressure gauge, a pressure setting, and an overpressure setting. The water in the boiler is heated and changes phase to steam. The steam exits the top of the boiler and enters a pressure regulating
valve. This valve is adjusted to set the desired steam delivery pressure for each experiment.
Downstream of the regulating valve is a shutoff valve that is open during steam operation
and closed during nitrogen operation. Below this valve, there are three other valves that can be opened or closed depending on whether the system is using heated nitrogen for drying or room-temperature nitrogen for filtration. Past this piping tee, there is a thermocouple. This is one of the five
thermocouples in the system that are used to monitor temperature at various points. These
thermocouples are wired into the datalogger to record temperature data versus time. Downstream
of the first thermocouple is the plate-and-tube 4-pass heat exchanger. The heat exchanger is
wrapped in an electrical heat trace heating tape. The heating tape has a thermostat control, and the
temperature of the heat exchanger is also monitored by thermocouple.
After exiting the heat exchanger there is another tee with a shutoff valve in each direction. These valves can be opened or closed either to route the gas to the cylinder for drying or to purge the system through a waste outlet directed into the hood. When the system is operating in drying mode, the gas passes another thermocouple and then enters a flexible hose that makes the final connection to the cylinder. The cylinder itself has two thermocouples to monitor temperature in different locations, as well as a pressure gauge. After the steam passes through the particle cake at the bottom of the cylinder, it exits through a plastic tube and is directed into the fume hood. All
the “hot” components of the system are insulated. The insulation is shown by the green dashed
line. This is the system layout that was used for preliminary testing.
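To make the plumbing walkthrough easier to follow, the valve lineups for the main operating modes can be summarized as a small configuration sketch. The valve names and the exact lineups below are an illustrative reading of Figure 8, not tags or an operating checklist from the actual apparatus.

```python
# Illustrative valve lineups for the operating modes described above.
# Valve names are hypothetical labels for the valves shown in Figure 8.

MODES = {
    "steam_drying": {
        "steam_shutoff": "open",      # steam from the regulating valve
        "nitrogen_supply": "closed",
        "drying_outlet": "open",      # tee branch toward the cylinder
        "purge_outlet": "closed",     # tee branch toward the waste outlet
    },
    "nitrogen_filtration": {
        "steam_shutoff": "closed",
        "nitrogen_supply": "open",    # room-temperature nitrogen path
        "drying_outlet": "open",
        "purge_outlet": "closed",
    },
    "purge_to_hood": {
        "steam_shutoff": "open",
        "nitrogen_supply": "closed",
        "drying_outlet": "closed",
        "purge_outlet": "open",       # vents to the hood during purging
    },
}

def print_lineup(mode: str) -> None:
    """Print the open/closed state of each valve for the given mode."""
    for valve, state in MODES[mode].items():
        print(f"{valve:>16s}: {state}")

print_lineup("steam_drying")
```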
The next step was to fabricate the system as shown in the schematic. Two steps in the fabrication required assistance from a mining department team member with electrical and plumbing experience. First, the 480-volt boiler needed to be wired and connected to high-voltage power. Second, two connections at the heat exchanger plate required soldering; a lead-free solder suitable for temperatures above 150°C was chosen, and the piping was connected by that team member. The rest of the fabrication consisted of connecting threaded pipes, installing insulation, and other tasks that did not require professional assistance. System fabrication is shown in Figure 9, Figure 10, Figure 11, and Figure 12.
Figure 9: Piping components prior to being insulated. Piping is shown leaving the boiler and passing
through the pressure regulating valve, shutoff valves, heat sink plate, and finally the flexible hose.