Figure 4: User Input Form for SimuFloat. Input values are entered on the left
hand side, output is displayed on the right hand side.
Text input is limited to numerical values, including negative numbers. Input error
checking has been implemented to ensure that all required inputs have been entered, and that
values are within reasonable ranges for real-world flotation conditions, as model accuracy may
deteriorate with extreme values (e.g., particle specific gravity must be greater than that of water; otherwise, particles naturally float).
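A minimal sketch of this kind of range checking is shown below; the limits and parameter names are illustrative assumptions, not the checks actually coded in SimuFloat.

def check_inputs(specific_gravity, contact_angle_deg, gas_rate_cm_s):
    # Collect human-readable error messages for out-of-range inputs.
    errors = []
    if specific_gravity <= 1.0:            # must exceed that of water
        errors.append("Particle specific gravity must be greater than 1.0.")
    if not 0.0 < contact_angle_deg < 180.0:
        errors.append("Contact angle must lie between 0 and 180 degrees.")
    if gas_rate_cm_s <= 0.0:
        errors.append("Superficial gas rate must be positive.")
    return errors

print(check_inputs(0.9, 60.0, 2.0))   # flags the specific gravity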
SimuFloat determines recovery curves based on the model that is briefly described in
Section 1.3. The overall recovery for each size class of particles is determined and plotted both linearly and logarithmically. The linear plot is included to illustrate the difficulty in floating coarse particles that is experienced in industry. When utilizing the particle size distribution feature, the program calculates the recovery for each size class and sums them up to obtain the total recovery. The user input form for particle size distribution is shown in Figure 5.
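A minimal sketch of that summation is shown below; the function and variable names are hypothetical and the snippet is not SimuFloat source code.

def total_recovery(class_weight_pct, class_recovery):
    """class_weight_pct: weight percent of feed in each size class.
    class_recovery: fractional recovery predicted for each size class."""
    weights = [w / 100.0 for w in class_weight_pct]
    # Total recovery is the weight-fraction-weighted sum of per-class recoveries.
    return sum(w * r for w, r in zip(weights, class_recovery))

# Example with three hypothetical size classes:
print(total_recovery([20.0, 50.0, 30.0], [0.40, 0.85, 0.95]))   # 0.79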
Figure 5: Particle Size Distribution Input Form. Particle sizes are displayed in both mesh and microns. Input is entered on a percent passing basis.
Particle sizes are shown in both mesh and micron sizes, and should be entered as the weight percent passing the given size. The particle size distribution must be input if values for recovery are desired; otherwise, SimuFloat will simply output the recovery curve.
The results are output in the form of plots for overall recovery, grade, froth phase recovery, flotation rate constant, and probabilities of collision, attachment, and detachment. The independent variable is particle diameter in microns for all of the graphs. Graphs are viewed through tab selection on the main form. In addition to graphical output, cell volume and calculated surface tension are output as text. Additionally, when using multiple feed components, feed grade, product grade, and water, mass, product, mineral, middlings, and
gangue recoveries are output as text.
Single component input is straightforward; the user must enter all the values on the main
form. SimuFloat also allows for the input of a distribution of contact angles for a single
component feed based on particle size. This function can be used to simulate variance in
liberation characteristics due to particle size. As particle size increases, the number of locked
particles tends to increase. It has been shown that this tendency can have a significant effect on
Figure 10: Chalcopyrite Recovery, Decreased Specific Power. Input Parameters: Power = 1.5 kW/m3, Gas Rate = 2 cm/s, S.G. = 4.1, θ = 60°, Frother = MIBC, Frother Concentration = 192 mg/m3, ζ-potential = -15 mV, 4 cells, Retention time = 3 min, Froth Height = 10 cm. The blue line represents flotation at standard conditions and the orange line represents flotation with the specific power input decreased to 0.7 kW/m3. A reduction in specific power aids coarse particle flotation, but harms fine particle flotation.
When the power input is reduced, the kinetic energy of attachment, Eq. [17], is reduced, lowering the probability of fine particles attaching to bubbles, Eq. [10]. Fines recovery deteriorates because the small particles no longer have the kinetic energy to rupture the wetting film of a bubble, causing them to bounce off the bubble when a collision occurs. Conversely, coarse particle recovery improves because the kinetic energy of detachment, Eq. [23], is reduced at a lower power input. This results in a lower probability of detachment, Eq. [21].
The effect of increasing froth height is shown in Figure 11:
Figure 11: Chalcopyrite Recovery, Increased Froth Height. Input Parameters: Power = 1.5 kW/m3, Gas Rate = 2 cm/s, S.G. = 4.1, θ = 60°, Frother = MIBC, Frother Concentration = 192 mg/m3, ζ-potential = -15 mV, 4 cells, Retention time = 3 min, Froth Height = 10 cm. The blue line represents standard conditions and the orange line represents recovery with the froth height increased to 20 cm. Increasing froth height harms coarse particle recoveries.
As froth height increases, bubble size in the froth becomes larger. This reduces the total surface
area of the bubbles, which reduces their particle carrying capacity. Thus, the larger and less
hydrophobic particles fall back to the pulp. This improves flotation selectivity, but lowers the
recovery, especially that of coarse particles. This effect on the overall recovery from the
flotation cell is not caused by any mechanism in the pulp phase; it is caused exclusively by the
froth phase.
The froth recovery curve for flotation at standard conditions, as well as at a froth height of 20 cm, is displayed in Figure 12.
Figure 13: Chalcopyrite Recovery, Bubble Size Distribution. Input Parameters: Power = 1.5 kW/m3, Gas Rate = 2 cm/s, S.G. = 4.1, θ = 60°, Frother = MIBC, Frother Concentration = 192 mg/m3, ζ-potential = -15 mV, 4 cells, Retention time = 3 min, Froth Height = 10 cm. The blue line represents flotation at standard conditions, while the orange line represents flotation using 6 bubble sizes. Coarse particle recovery improves while fine and medium particle recovery deteriorates.
The bubble size calculated under standard conditions is 1.7 mm. The distributed bubble sizes
range from 1.25 to 2.5 mm in increments of 0.25 mm. These results are in line with expectations
from the findings of Schubert: larger bubbles are able to float coarser particles due to a higher buoyant force (Schubert, 1999). The inverse is also observed in simulations and in practice;
small bubbles are better for flotation of fine particles.
Figure 14 shows the effect of increasing the rate of air addition to the flotation cell.
Figure 15: Chalcopyrite Recovery, Increased Frother Concentration. Input Parameters: Power = 1.5 kW/m3, Gas Rate = 2 cm/s, S.G. = 4.1, θ = 60°, Frother = MIBC, Frother Concentration = 192 mg/m3, ζ-potential = -15 mV, 4 cells, Retention time = 3 min, Froth Height = 10 cm. The blue line represents flotation at standard conditions, and the orange line represents flotation at a frother concentration of 5000 mg/m3. The addition of too much surfactant is detrimental to flotation performance.
An increase in surfactant proves to be detrimental to flotation; the overall recovery for the flotation bank drops by 10%. Galvin, Nicol, and Waters (1992) found that the addition of too much surfactant becomes harmful to flotation, whereas a moderate concentration often aids flotation. The negative effect on coarse particle flotation is seen because an increase in surfactant concentration lowers the surface tension of the medium. This lowers the work of adhesion, Eq. [22], thus increasing the probability of detachment, Eq. [21]. An increase in fine particle recovery is observed because lowering the surface tension enables the creation of smaller bubbles, which aid in the flotation of fines.
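Eq. [22] itself is not reproduced in this excerpt. For context only, Yoon-Mao-type flotation models commonly express the work of adhesion in terms of the surface tension and the contact angle, which makes the dependence on surfactant dosage explicit; this form is an assumption drawn from that general class of models, not necessarily the exact expression used in SimuFloat:

W_a = \gamma_{lv} \, \pi r_p^{2} \left(1 - \cos\theta\right)^{2}

where \gamma_{lv} is the liquid-vapor surface tension and r_p is the particle radius; lowering \gamma_{lv} therefore lowers W_a directly.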
The advent of QEM*SEM enabled an easier determination of particle liberation characteristics for laboratory flotation feeds. A QEM*SEM liberation data set given by Sutherland is summarized in Table 4 and Table 5.
Table 4: Liberation Data for a Batch Flotation Feed (Sutherland, 1989)

Size (microns)    Wt. %      Wt.% Cu by QEM*SEM
+425                2.126      1.00
-425 +300           0.768      0.43
-300 +212           5.144      0.45
-212 +150          12.807      0.90
-150 +106          16.592      1.20
-106 +75           10.49       1.67
-75 +53             9.151      2.03
-53 +38             5.937      8.46
-38 +24             2.963      3.30
-24 +17             5.398      2.13
-17                28.623      2.06
Total             100.0

Note: Size fractions less than 38 microns were separated using a Cyclosizer. The sizes indicated here represent free chalcopyrite.
After flotation of the feed shown above, Sutherland found that the percentage of liberated (90-100% liberation) chalcopyrite particles in the concentrate was as shown in Table 5:
Table 5: Liberation Data for Flotation Concentrate (Sutherland, 1989)

Size (microns)    % Liberated Particles    Average DOL %
+150               54                        70
-150 +106          67                        78
-106 +75           76                        84
-75 +53            87                        89
-53 +38            90                        91
-38 +24            95                        93
-24 +17            96                        93
-17 +12            97                        94
-12                97                        94
The average DOLs were used to determine the average contact angle for each size class. These
contact angles were then used to simulate flotation. Figure 16 shows the recovery of
chalcopyrite when using a distribution of contact angles.
Figure 17: Chalcopyrite Recovery, Effect of Liberation. Input Parameters: 15% θ = 25°, 45% θ = 33°, 75% θ = 40°, 95% θ = 60°, Power = 2.5 kW/m3, Gas rate = 2 cm/s, Froth height = 10 cm, Frother concentration = 192 mg/m3, ζ-potential = -15 mV, Cells = 4, and Retention time = 3 min. The left-hand plot shows Sutherland’s results and the plot on the right displays results from SimuFloat. The same general trend is seen in both plots.
The operating parameters were not given for Sutherland’s plot, shown in Figure 17, left (Sutherland, 1989). Simulations, the results of which are shown in Figure 17, right, were run to approximate Sutherland’s results as closely as possible. As the degree of liberation increases, the contact angle should increase; therefore, the only variable between the “liberation” classes in
these simulations is the contact angle.
Next, simulations were performed to predict the performance of coal flotation under the
standard conditions. The overall recovery curve is shown in Figure 18.
particles almost always attach to a bubble, but they have a very low probability of remaining attached to the bubble.
As shown in Figure 25, the particle ζ-potential has an effect on fine particle flotation.
Figure 25: Phosphate Recovery, Effect of ζ-Potential. Input Parameters: Power = 1.5 kW/m3, Gas Rate = 2 cm/s, S.G. = 1.3, θ = 60°, Frother = MIBC, Frother Concentration = 192 mg/m3, 4 cells, Retention time = 3 min, Froth Height = 10 cm. Fine particle flotation benefits as the negative ζ-potential approaches zero.
The ζ-potentials of the plots, from left to right, are -0.009, -0.011, -0.013, and -0.015 V. As the ζ-potential becomes more negative, the energy barrier, Eq. [11], increases, decreasing the probability of attachment for small particles.
3.3 Multiple Component Feed
Input of multiple component feeds requires the use of the Feed Grade form, previously
shown in Figure 7. The input parameters used in this simulation for multiple component feeds are shown in Table 6. The particle size distribution used roughly approximates a Gaudin-
Schuhmann distribution.
Table 6: Multiple Component Feed Parameters

                       Mineral                    Middlings                  Gangue
Ore             SG   % Feed  % Grade  θ     SG   % Feed  % Grade  θ     SG   % Feed  % Grade  θ
Chalcopyrite    4.1     4      34     60    3.2     2      10     15    2.7    94       0     5
Coal            1.3    50     100     55    2.0    30      50     15    2.7    20       0     5
Phosphate       2.3    15     100     55    2.5    30      30     15    2.7    55       0     5
The values given for chalcopyrite simulate a flotation feed grade of 1.56% copper. The coal feed
is 35% ash and the phosphate feed is 24% grade.
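For reference, the 1.56% figure follows directly from the weighted component grades in Table 6: 0.04 × 34% + 0.02 × 10% + 0.94 × 0% = 1.36% + 0.20% = 1.56% copper.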
The plots for multiple component feeds become difficult to read if more than one
simulation is shown on each. For this reason, each plot in this section will show a single
simulation. Figure 26 shows the recovery curves for chalcopyrite using standard conditions and the component parameters from Table 6.
Figure 26: Chalcopyrite Recovery, 3 Component Feed. Input parameters shown
in Table 6. Each line represents a different component of the feed.
Recoveries vary due to differences in the contact angle and specific
gravity.
The red line represents the mineral, the blue line represents the middlings, and the tan line
represents gangue. Overall mass recovery to the product is 10.6%, copper recovery is 86.7%, and the product grade is 12.8%. SimuFloat also reports a mineral recovery of 93.6%, a middlings recovery of 39.7%, and a gangue recovery of 6.4%.
It is widely reported that flotation selectivity is improved by increasing the froth height.
Figure 27 shows the effects of changing the froth height on the three feed components.
Figure 27: Chalcopyrite, 3 Component Feed, Increased Froth Height. Input
parameters shown in Table 6. The increase in froth height from 10
cm to 20 cm lowered the overall copper recovery, but increased the
grade of the product.
Each of the colors represents the same feed stream as those in the previous figure. An increase
in the froth height causes each curve to shift down and to the left on the coarse end. Initially, at a
froth height of 10 cm, the product grade is 15.6% copper. After increasing the froth height to 20
cm, the product grade increases to 19.6%. The increased froth height reduces recovery by
entrainment [30]. This trend is supported by many researchers who found that increasing the
froth height provided better drainage of entrained particles and of less hydrophobic coarse
particles (Ekmekci, Bradshaw, Allison, & Harris, 2003; Hanumanth & Williams, 1990).
While no model for cleaning stages in flotation yet exists in SimuFloat, cleaning may be
simulated by substituting the results back into the simulation. This facilitates the generation of
grade-recovery curves for the flotation bank. The recovery versus product grade is plotted for the values given in Table 6, as well as for contact angles reduced by 1/3, in Figure 28.
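A minimal sketch of this re-substitution idea is shown below; simulate_bank is a hypothetical stand-in for a SimuFloat run, and the field names are assumptions for illustration only.

def simulate_cleaning(feed, n_stages, simulate_bank):
    """feed: description of the feed stream (grade, size distribution, ...).
    simulate_bank: callable returning (concentrate, grade, recovery) for one pass."""
    curve = []
    for _ in range(n_stages):
        concentrate, grade, recovery = simulate_bank(feed)
        curve.append((grade, recovery))
        feed = concentrate          # the concentrate becomes the next stage's feed
    return curve                    # points along a grade-recovery curve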
[Figure 28 plot: chalcopyrite flotation recovery (%) versus grade (%); curves labeled No Mids θ: 60, No Mids θ: 40, Low Mids θ: 60, Int Mids θ: 60, and High Mids θ: 60.]
Figure 28: Chalcopyrite Recovery vs. Grade. The solid and dashed lines represent feeds with no middlings at 60° and 40° contact angles, respectively; all other simulations have a 60° contact angle. The dash-dot line represents a low middlings feed, the dash-dot-dot line represents an intermediate middlings feed, and the dotted line represents a high middlings feed. As shown by the two no-middlings feeds, flotation at the lower contact angle produces a slightly higher grade product at the expense of copper recovery. Flotation performance deteriorates as the concentration of non-liberated particles increases.
Two simulations were run with the same feed characteristics, but at different contact angles.
Three simulations were run with increasing concentrations of middlings in the feed. All five simulated feeds contained 1.36% copper by weight. As expected, the simulation with a lower contact angle produces a steeper grade-recovery curve. After the first flotation stage, copper recovery for the 40° simulation is nearly 10% lower than the 60° simulation, but it produces a
[Figure 33 plot: recovery (%) versus time (min); curves labeled S (simulated) and E (experimental) at specific power inputs of 0.488, 0.266, and 0.080 kW/m3.]
Figure 33: Experimental vs. Simulated Silica Flotation Recovery. Input parameters shown in Table 7. Black lines and markers represent a specific power input of 0.488 kW/m3, red lines and markers represent a specific power
input of 0.266 kW/m3, and blue lines and markers represent a specific
power input of 0.080 kW/m3. For the simulation of silica flotation,
recoveries matched well with experimental results.
The markers represent experimental results, the lines represent simulated results, and the quantities in the legend correspond to the specific power input. This figure makes it clear that SimuFloat has predictive capabilities. The shapes of the simulated curves closely approximate those for the flotation recoveries obtained in the lab.
3.5 Conclusion
Froth flotation simulations have been performed using a predictive model derived from first
principles. Unlike many of the current flotation models, the model used in SimuFloat does not
require the input of a floatability constant determined from plant data. This gives the simulator
predictive capabilities without the need for extensive in-plant flotation studies. Detailed
simulations were run for chalcopyrite, coal, and phosphate. These simulations show the effect of
4 Conclusion
4.1 General Conclusion
Modeling of flotation is a vital task for improving the flotation process. It allows the researcher to learn much about the mechanics of a flotation cell without the cost or time requirements of lab or pilot scale testing. The flotation simulator developed in the present work is a useful tool for simulating flotation, while accounting for both hydrodynamics and surface chemistry. SimuFloat was found to be relatively accurate at predicting flotation under a variety
of conditions, and has been validated through comparison with experimental data.
4.2 Recommendations for Future Work
While SimuFloat marks a step forward in the process of developing a comprehensive
flotation simulator, it is not complete. The following are areas of the simulator that could be
improved through further research.
1. Introduce user-defined, integrated flow sheets that may be solved using mass balances.
2. Include a relationship between concentration and contact angle for collectors used in flotation. This would make SimuFloat more industry friendly, as contact angle is commonly not measured in flotation practice.
3. Incorporate a relationship between ζ-potential and pH. The pH is not the only factor that affects ζ-potential. Like the contact angle, ζ-potential is not generally measured in the field. Replacement of ζ-potential with pH as a simulation input would make the program more industry friendly.
4. Small interface tweaks, such as the ability to input parameters in different units, and the
ability to retain input values upon closing the program would make SimuFloat more
user friendly.
5. Account for the effects of hydrophobic coagulation. This will improve the observed
recovery of fine particles and bring the simulation predictions more in line with results
observed in flotation practice.
6. Develop an equation relating air holdup, bubble size, and gas rate. In actual flotation
systems, these three variables are interdependent.
7. Include a model to calculate contact angle based on liberation class.
Analytical and Numerical Techniques for the Optimal Design of Mineral
Separation Circuits
Christopher Aaron Noble
(ABSTRACT)
The design of mineral processing circuits is a complex, open-ended process. While
several tools and methodologies are available, extensive data collection accompanied with
trial-and-error simulation are often the predominant technical measures utilized throughout
the process. Unfortunately, this approach often produces sub-optimal solutions, while squan-
dering time and financial resources. This work proposes several new and refined method-
ologies intended to assist during all stages of circuit design. First, an algorithm has been
developed to automatically determine circuit analytical solutions from a user-defined circuit
configuration. This analytical solution may then be used to rank circuits by traditional
derivative-based linear circuit analysis or one of several newly proposed objective functions,
including a yield indicator (the yield score) or a value-based indicator (the moment of iner-
tia). Second, this work presents a four-reactor flotation model which considers both process
kinetics and machine carrying capacity. The simulator is suitable for scaling laboratory data
to predict full-scale performance. By first using circuit analysis to reduce the number of
design alternatives, experimental and simulation efforts may be focused to those configu-
rations which have the best likelihood of enhanced performance while meeting secondary
process objectives. Finally, this work verifies the circuit analysis methodology through a vir-
tual experimental analysis of 17 circuit configurations. A hypothetical electrostatic separator
was implemented into a dynamic physics-based discrete element modeling environment. The
virtual experiment was used to quantify the selectivity of each circuit configuration, and the
final results validate the initial circuit analysis projections.
Parts of this work received financial support from FLSmidth Minerals. Unless other-
wise indicated, all examples presented in this document are fictitious and only intended for
demonstration. Any resemblance to real operations is purely coincidental.
Acknowledgments
The preparation of this dissertation has been an immensely rewarding undertaking. I
would like to first thank the Lord for the many blessings I have experienced.
I want to acknowledge my research advisor, Dr. Jerry Luttrell. I know I could not
have started this work without his teachings, and I know I could not have finished this work
without his persistence. Though I have occasionally heard “you can’t beat physics” in my
sleep, I slept knowing that someone besides me wanted to see this work to completion. Jerry
has been a constant friend and mentor throughout my time at Virginia Tech.
I cannot overstate the role of my other committee members and mentors in motivating me to conduct this research. I owe my original interest in flotation to Dr. Roe-Hoan Yoon.
His unquenchable thirst for understanding is both a silent and, at times, vocal motivator for
continued success. Dr. Greg Adel has provided solidarity and direction, while Dr. Emily
Sarver has consistently offered pragmatic suggestions and advice on many professional levels.
Finally, I thank Dr. Serhat Keles for the original genesis of much of this work. I am not
sure if the graphical interface would have ever been attempted had he not invested countless
hours in the beginning.
I also express gratitude to FLSmidth for the continued funding throughout parts of this
project. I especially thank Asa Weber for his role in facilitating and testing my ideas. His
suggestions have taught me a lot about flotation as well as practicality and leadership.
I want to thank my current, former, and future students. They all motivate me every
day to dig deeper, work harder, and discover more. I have learned a lot of patience serving
them, and I hope I have repaid a fraction of the enlightenment and enjoyment that they
have brought me.
I could not have completed this work without the constant love and support from my
friends and family. I thank them for bearing with me throughout this process. Finally, I
thank my yeojachingu, Alice Lee. She is a constant source of love, hope, and joy.
Chapter 1
Introduction
1.1 Preface
Mineral processing is largely the science of particulate separation as it applies to the
beneficiation of mining products. Run-of-mine material consists of one or more valuable com-
ponents, designated as ore minerals, mixed with a significant portion of waste components,
designated gangue minerals. The relatively low quality of the run-of-mine material often
necessitates downstream processing to enhance the marketable value of the raw material.
While the general quality of the final product may be defined by several indicators (average
particle size, moisture content, bulk mechanical properties), the compositional purity (i.e.,
grade) of the final product often drives the economic unit value. Consequently, the most
important objective of mineral processing is to physically separate the mineral constituents,
so that the valuable portions may be retained for marketing or further processing, while the
gangue may be properly disposed.
Mineral beneficiation is a costly portion of the raw material production chain, given
the large overall throughputs required to recover material from low-grade deposits. As
a result, the separation processes must increase the value of the final product to a level
which justifies the cost of beneficiation. Since single unit operations are often incapable of
producing sufficient separation, multiple cleaning stages are typically arranged in a circuit
to produce synergistic efficiencies. The simple serial arrangement and interconnections of
the circuit have the capacity to drastically alter the single-stage separation efficiency. Well-
designed circuits can overcome various unit inadequacies, while poor configurations can
actually degrade performance below that of a single unit.
In theory, a perfect separation can ultimately be achieved by a well-designed circuit
of imperfect units, regardless of the magnitude of deficiency in the single unit. In practice,
these ideal circuits are never fully pursued, since the cost of the required resources would
greatly overcome the value of the pure separation products. Nevertheless, the optimal design
of separation circuits is critical to maximizing beneficiation value, while minimizing required
capital resources and processing costs.
This work is largely concerned with the identification of optimal circuit designs. While
no single circuit is universally suitable in all instances, analytical and numerical tools can
be used to guide the decisions of circuit designers as the site-specific, ore-specific, and time-
specific conditions dictate. The resulting techniques are empowered by fundamental insight
and streamline the otherwise haphazard and costly circuit design process.
1.2 History
Over the last 100 years, mineral processing has advanced from a crude, labor-intensive
processes to a highly sophisticated scientific endeavor. While much of the progress has been
spearheaded by the invention and development of froth flotation, other processing methods
have also benefited from the more scientific outlook on mineral separation (Wills & Atkinson,
1991). One of the first exhaustive analyses of the design and operation of mineral processing
plants was presented by Taggart (1927). This classic text marked the beginnings of the
burgeoning scientific discipline.
Throughout the remainder of the century, the science of mineral processing grew to en-
compass numerous sub-disciplines, including surface chemistry, analytical chemistry, physical
chemistry, mathematical modeling, data analysis, scientific computing, simulation, engineer-
ing economics, process control, fluid mechanics, machine design, and extractive metallurgy.
Both the ever-increasing consumer demand for minerals as well as the heightened productiv-
ity of various separation processes are evident when considering the rapidly increasing global
mineral production. Figure 1.1 shows global production statistics for 47 major mineral com-
modities (Kelly et al., 2010). While the production of some commodities has stagnated in
recent years (e.g. lead, mercury, and tin), others have continually experienced long-term
exponential growth since the beginning of the century (e.g. aluminum, copper, and rare
earths).
Froth flotation is the most common and versatile separation methodology used in the
mineral processing industry today.

Figure 1.1: World production (shown in metric tonnes) for various major minerals from 1900 to 2009. Data after (Kelly et al., 2010).

Since its development in the early 1900's (Sulman, Picard,
& Ballot, 1905), most of the advancements in mineral processing science have been driven
by the dominance of the flotation process. Up until 1905, most base-metals and porphyry
copper deposits were processed via simple gravity separation. Around this time, the poor
separation performance and the ever-increasing ore complexity led to substantial milling
deficiencies and lost revenue (Lynch, Watt, Finch, & Harbort, 2007). A large-capacity,
highly selective industrial process was needed to ensure the economic stability of the world-
wide base metal industry. Shortly after its inception, the froth flotation process fulfilled
this role and quickly grew into one of the most crucial metallurgical processes. With the
development of selective reagents in the 1930’s, processing plants were beginning to use
froth flotation as the sole separation process (Wills & Atkinson, 1991). While observing
those industrial advancements, many academic researchers and engineers became curious
on how to optimize flotation performance through fundamental understanding and rigorous
laboratory experimentation. This initial growth period witnessed the prominence of authors
such as Sutherland (1948), Gaudin (1957), and Harris (1976) whose work still withstands
scrutiny today.
Given its prominence in the economic production of base metals, froth flotation has been described by several authors as one of the most significant technological innovations of
the 20th century (Klassen & Mokrousov, 1963; Napier-Munn, 1997; Fuerstenau, 1999; Lynch
et al., 2007). Even outside of the minerals industry, flotation has alternatively been used as
a separation process in waste water treatment (Wang, Fahey, & Wu, 2005), algae harvesting
(Phoochinda, White, & Briscoe, 2004; Lynch et al., 2007), and paper recycling (Bloom
& Heindel, 1997; Kemper, 1999; Gomez, Watson, & Finch, 1995). The ever-increasing
importance of froth flotation as an industry-leading separation technique is evident in the
exponential growth of the size of flotation units (Figure 1.2). Since the commercialization of
the process in the early 1900’s, the size of “large” flotation cells as reported in the literature
has consistently followed an exponential curve, doubling in size every nine years.
Other separation methods have witnessed comparatively modest gains in prominence
throughout the last century. These methods are often relegated to the few mineral industries
where their simplicity and utility overcome their lack of robustness. For example, modern
coalpreparationplantslargelyemploygravityseparation, givenitseffectivenessinseparating
simple coal and rock systems, especially in the larger size fractions. At least one author
has claimed that gravity separation is realizing a small revival in base metal plants, where
conditions favor their simple process control strategies (Wills & Napier-Munn, 2006, p.
225). Other separations, such as electromagnetic or magnetic are selected when the physical
properties of the mineral systems allow their usage. Nevertheless, the gains in process
knowledge originally driven by flotation have effectively benefited these industries, especially
in the areas of process control, modeling and simulation, and circuit design.
1.3 Unit Operations
Mineral processing consists of two fundamental unit operations: comminution and sep-
aration. Comminution processes reduce the size of run-of-mine material prior to downstream
separation processes. This size reduction step is usually required to liberate locked mineral
particles; although, comminution may also perform a number of auxiliary functions, includ-
ing enhancing mineral handleability, creating fresh surfaces, increasing surface area, and
managing particle size.
From a purist perspective, the comminution process begins during the mining phase
(i.e., drilling and blasting) and continues throughout the beneficiation phases. Size reduction
in the comminution stage-proper is often achieved by both crushing and grinding, which
may include dry and wet methods; however, unintentional size reduction and attrition may
result from other downstream materials handling operations, such as pumping, tank mixing,
and ore storage. Lynch and Rowland (2005) have provided a narrative on the historical
influences of contemporary grinding methods. Other authors have provided technical reviews
and critical analyses of comminution theory, modeling, and equipment design (Bond, 1952;
Lynch & Bush, 1977; Veasey & Wills, 1991).
After the comminution stage, the liberated material is concentrated via one or more
physical separation processes until the mineral component meets the required product quality
specifications. Separation processes are often broadly classified by the chemical phase of the
constituent products. Under this taxonomy, the following designations are given:
• Solid-Solid Separation: Processes which separate minerals of two different composi-
tions, namely mineral and gangue components. These operations can be conducted
wet or dry, depending on the specific application. Common examples include froth
flotation, gravity separators, and magnetic separators.
• Solid-Liquid Separation: Drying processes which concentrate the solid phase of the
mineral slurry or reduce the moisture of the product. These unit operations are required
when the final product moisture is of concern, such as in coal preparation. Common
examples of solid-liquid separation include thickeners, centrifuges, and thermal dryers.
• Size-Size Separation: Processes which classify minerals based on particle size. These
unit operations are typically used to ensure that appropriate size reduction has been
achieved in the comminution stage. Additionally, size-size separation may be utilized
when seeking to exploit the size dependency of many solid separation processes. Two
common examples include screens and cyclones.
Separations involving two liquid phases or gaseous phases are not typically considered in the mineral processing discipline. These processes are more common to chemical engineering,
particularly the studies of solvent extraction, adsorption, and distillation. A pragmatic
review of various separation methods, including gas-gas separation, gas-solid separation,
gas-liquid separation, and liquid-liquid separation has been presented by Schweitzer et al.
(1979). Despite the differences in unit operations, many of the performance indicators and
circuit design strategies for these techniques are founded in similar fundamental theory.
The focal point of most mineral separation plants is the solid-solid separation stage.
These unit operations are solely responsible for producing a final product free of contaminants
and of sufficient marketable concentration. As a result, the economic gains and losses of the
entire plant are highly sensitive to the efficiency of these processes. The selection of the
appropriate solid-solid separator is driven by contrasts in the physical or chemical properties
of the mineral and gangue constituents. The following designations subdivide these processes by the property on which the separation is based (modified after Wills & Napier-Munn, 2006,
pp. 8 - 11):
• Gravity-Based Separation: Processes which separate minerals on basis of particle den-
sity. Feed particles are typically fluidized by air, water, or a heavy medium. The
application of a centrifugal force is used to enhance the rate of separation. Common
examples include cyclones, spirals, and dense-media vessels.
• Surface-Based Separation: Process which exploit contrasts in surface properties, such
as hydrophobicity. Froth flotation is the most prominent example.
• Conductivity/Magnetic Separation: Processes which exploit the degree of a particle’s
conductivity or magnetic susceptibility. Common examples include high-intensity and
low-intensity magnetic separators, high-tension separators, and matrix magnets.
• Optical and Other Novel Separation: Processes which can exploit any other property
disparate between the valuable mineral and gangue material. One such example is a
diamond ore sorter which uses X-ray diffraction to distinguish liberated diamonds from
the host rock.
The efficiency of most separation processes is strongly influenced by the particle size of
the feed material. Every unit operation performs optimally within a critical size range, and
many processes cannot feasibly distinguish particles of extreme sizes. In most cases, these
performance limitations are driven by the physical subprocesses that define the individual
unit operations. For example, many gravity separators exploit the differences in the settling
velocity of particles suspended in water. This settling velocity is a function of both density
and particle size. As particles settle, those in a similar size range may be distinguished by
differences in density; however, as the size range expands, the separation is influenced by
both density and size. As a result, many gravity separations cannot distinguish a small,
dense particle from a large, light particle. By expanding this example to include other
separation methods, an effective particle size range may be determined for various separators
by recognizing the mechanism by which particles are distinguished. Figure 1.3 shows various
unit operations and their range of applicable particle sizes.
In addition to particle size and other physical limitations, all particulate separation
processes are inherently probabilistic and subject to unavoidable imperfection. To overcome
these inefficiencies, mineral processing plants typically include staged separation arrange-
ments, where the products of a single unit may be further processed by other units or
reintroduced at other points in the plant. The resulting structure which includes all of the
specific unit operations and the flow patterns of the units’ products is defined as the sepa-
ration circuit. Over time, the mineral processing industry has trended toward a few basic
circuit configurations which are adapted to account for site-specific considerations.
The fundamental element of a separation circuit is a unit. In a binary system, a sep-
aration unit (Figure 1.4a) is capable of accepting a single feed stream while producing two
product streams, namely a concentrate and a tailings. A junction unit (Figure 1.4b) is ca-
pable of accepting two feed streams while producing a single product stream. Practically, a
separation unit may be a single flotation cell or any other unit operation, while a junction
may be a sump or mixing tank (Meloy, 1983). A bank of units (Figure 1.4c) consists of two
or more individual units which are serially staged such that the tailings product passes from
one unit to the next. The concentrate product of each unit in the bank is typically combined
to produce a single bank concentrate. Banks of flotation cells are common, as the recovery
from a single unit is not sufficient to justify standalone cells. Flotation banks range from
5 to 12 units, depending on the unit volume and the process requirements (Malghan, 1986).
Industrial trends have recently favored larger cells with fewer cells in a bank, though the
metallurgical and economic performance of such trending is debated (Harris, 1976; Abu-Ali
& Sabour, 2003). Banks of individual units may then be configured to produce the overall
circuit.
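A minimal sketch of how these elements (separation units, junctions, and banks) might be represented as data structures is given below; the class and field names are illustrative assumptions, not taken from any of the software described in this work.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SeparationUnit:
    name: str
    concentrate_to: str = ""    # destination stream for the concentrate product
    tailings_to: str = ""       # destination stream for the tailings product

@dataclass
class Junction:
    name: str
    feeds: List[str] = field(default_factory=list)   # two incoming streams
    product_to: str = ""                              # single outgoing stream

@dataclass
class Bank:
    name: str
    units: List[SeparationUnit] = field(default_factory=list)
    # tailings pass serially from unit to unit; concentrates are combined

rougher = Bank("rougher", [SeparationUnit(f"cell_{i}") for i in range(1, 6)])
print(len(rougher.units))   # a hypothetical 5-unit rougher bank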
Separation circuits are classified as open or recycle, depending on the presence of cir-
culating loads (open circuits do not incorporate recirculating loads). The relative location
of the bank within the circuit provides a means of designation (Williams & Meloy, 1989).
Rougher banks are the initial separation which receives the plant feed. The rougher con-
centrate product is advanced to the cleaner bank which further upgrades the product until
the final quality specifications are met. Finally, the rougher tailings product is sent to the
scavenger to ensure that no valuable material has bypassed the rougher stage (Malghan,
1986). These definitions are illustrated in Figure 1.5.
In this work, circuit design encompasses all of the design decisions associated with the
steady-state operation of separation circuits, whether the circuit under consideration is a
greenfield design or a modification to an existing plant. Within this definition, the circuit
designer must address several questions:
1. The selection of the appropriate separation process(es). While this selection is fairly
definitive for a given mineral system, some flexibility may be warranted in novel sepa-
ration systems or where the economics support non-traditional processes, such as the
choice to include or omit flotation as part of a fine coal cleaning circuit. Furthermore,
specific equipment types should be considered in this decision, such as column versus
conventional flotation cells.
2. The selection of the number and size of each unit in a bank. Especially in the case of
rate separators (See Section 2.2.3), the separation performance of each unit is depen-
dent on the mean residence time of particles in the vessel. Consequently, units must
be sized to ensure sufficient residence time.
3. The optimization of the operational parameters unique to each unit. The steady-state
performance of all separation units can be influenced by specific operational parame-
ters. While dynamic control systems can alter these values to adapt to changing feed
conditions, an ideal steady-state value should be determined by the circuit designer.
This optimization may include reagent dosages for flotation plants or dense-media
concentration values in dense-media circuits.
4. The configuration of the flows between individual units and banks. This decision
includes the required number of scavenger and cleaner banks, open or recycle circuit
designation, and the point of reentry for recirculating loads.
While these design considerations have been presented sequentially, the actual circuit
design process must consider all of these factors simultaneously while incorporating knowl-
edge gleaned from laboratory and pilot-scale experiments, process models, dynamics and
control systems, common sense limitations, empirical insight, and operator preferences and
biases. Given the complexity and interdependence of this knowledge base, circuit design
unfortunately remains cumbersome and unsystematic. Numerous methods and engineering
tools have been developed to assist the circuit designer; however, no comprehensive design
methodologies have gained substantial usage in an industrial setting (Lucay, Mellado, Cis-
ternas, & Galvez, 2012).
1.4 Objectives
The singular goal of this research is to develop and validate a methodology for separation
circuit design based on new, existing, or refined analytical and numerical techniques. This
methodology should streamline the circuit design process, by assimilating diverse process
knowledge and fundamental scientific observations. The resultant tool-set should foster
optimal design strategies throughout the entire circuit design process, from the initial concept
generation to the final performance guarantee.
In summary, the itemized objectives of this study are to:
• Conduct a critical review of the recent developments in separation circuit design.
• Develop and assess a software platform for froth flotation circuit simulation. This
simulator may be later used in the corroboration of novel circuit design methodologies.
• Develop an analytical methodology of circuit evaluation which relies on fundamental
separation principles.
• Implement that methodology into a design software package which can streamline
preliminary analysis and alternative selection for proposed circuit configurations.
• Experimentally validate the circuit design methodology with a known or novel separa-
tion process.
1.5 Organization
The body of this dissertation is organized into nine chapters, with the primary works
presented individually as standalone papers describing a separate phase or objective of the
work. These primary phases constitute the seven informative chapters, while an introductory
and a concluding chapter complete the dissertation. References are listed individually for
each chapter.
Chapter 1 includes a description of the historical context of separation circuit design,
general definitions, and an overview of the work completed as a part of this study.
Chapter 2 provides a comprehensive review of the state-of-the-art in engineering data
analysis as it applies to mineral processing, process modeling, circuit simulation, and circuit
optimization. This chapter shows the historic trends and recent developments in circuit
design strategies. This review is largely reflective and descriptive; however, some meta-
analysis is used to critically evaluate prior claims and methods.
Chapter 3 describes the development of a robust, graphically based simulator for froth
flotation circuits, FLoatSim. A four-reactor flotation model, which is based on standard
first-order rate equations, is described along with details of the simulation approach and
software interface. Finally, this chapter presents a case study which utilizes the software in
a coal flotation scale-up problem.
Chapter 4 presents a critical evaluation of rate-based simulation from the perspective
of discretization detail. This chapter shows the derivation of “rate compositing formulas”
unique for each reactor type. These formulas are used to calculate a single “apparent rate”
value which produces the same recovery as a series of distributed rates at a given residence
time. The utilization of these formulas is demonstrated by the error propagation which
resultsfromtruncatingtheratedistribution. Samplecalculationsandexamplesarepresented
in this chapter.
Chapter 5 introduces the use of analytical circuit solutions in the design of optimal sep-
aration circuits. This chapter describes Meloy’s (1983) algebraic method of analytical circuit
solution determination, while noting the drawbacks and inefficiencies of the method. In light
of the deficiencies, a new method for analytical circuit solution determination is introduced.
The final algorithm is described and applied to evaluate several circuit configurations found
in the literature.
Chapter 6 extends the utility of analytical circuit solutions, by describing the resul-
tant optimization software: the Circuit Analysis Reduction Tool (CART™). The program
uses analytical circuit solutions and the circuit partition sharpness to evaluate circuit con-
figurations. The software also contains a custom algorithm which determines the optimal
location in the circuit for an additional unit based on the greatest increase in the sharpness
parameter. These tools and other applications of the software are presented.
Chapter 2
Literature Review
(ABSTRACT)
Today, the process of designing circuits is largely driven by computer simulation. Simu-
lations require extensive data defining the feed and unit operations, as well as process models
which can relate these parameters to the separation performance. The circuit designer is
then tasked with the selection of the separation units and their interconnections in a way
that pursues a technical or economic objective. The task of optimizing these circuits has
grown with the use of simulation. Several modern circuit optimization routines incorpo-
rate sophisticated nonlinear integer programming and genetic algorithms. Unfortunately,
most industrial circuit designs do not use these methods, instead relying on trial-and-error
simulation. This approach incorporates empirically-based heuristics and ultimately leads to
non-ideal configurations requiring perpetual modification and redesign. This paper reviews
the engineering tools, modeling paradigms, and optimization routines which encompass the
circuit design problem.
2.1 Data Analysis
Data utilization and simulation are the most prominent engineering tools available to the circuit designer. Both greenfield designs and plant modifications typically begin by gathering
laboratory or plant data in order to develop benchmarks for current performance as well
as prediction for the anticipated results. This data may also be used to build models or
estimate the processing requirements for a given ore. While the modeling and simulation
stages are of paramount importance in this approach (see Section 2.2), the role of data acquisition, parameter estimation, and performance measurement cannot be overstated. Not only are many process models limited by the veracity of the data used to build them, but routine plant evaluation relies on sound sampling and analytical principles (Wills & Napier-Munn, 2006). Errors at this stage may mask true performance levels and propagate misinformation throughout the entire circuit design process. As a result, standard procedures for material sampling, laboratory testing, and performance evaluation have been developed and are presented in this section.
2.1.1 Performance Indicators
Several common and widely accepted metallurgical performance indicators are used to evaluate the separation capacity of individual unit operations and entire circuits. While these calculations are well known, the definitions are included here for both completeness and precision. Certain performance indicators are more common to specific mineral industries, and colloquial terms may be used in place of (or in distortion of) the precise terms listed here. Table 2.1 details several common metallurgical performance indicators.

Table 2.1: Summary of Common Metallurgical Performance Indicators

Name                      Symbol                       Explanation
Mass Flow                 F, C, T                      Amount of total mass in a given stream
Grade                     f, c, t                      Quality of given stream; mass of designated material component (%)
Yield                     Y = C/F                      Total amount of material which was produced as concentrate (%)
Recovery                  R = Cc/Ff                    Amount of desired material which was produced as concentrate (%)
Rejection                 J = Tt/Ff                    Amount of desired material which was produced as tailings (%)
Separation Efficiency     SE = R_valuable − R_waste    Amount of material that experienced ideal separation (%)

Note: F, C, and T refer to the feed, concentrate, and tailings streams, respectively.
Some interdependence exists between the performance indicators listed in Table 2.1.
For example, real processes experience a trade-off between grade and recovery. Evaluation
of mineral separation systems is usually conducted by comparing the grade-recovery curves
for different process designs. Many researchers have attempted to produce a single indicator
which combines grade and recovery. The most accepted single index is the separation ef-
ficiency (SE) which theoretically indicates the percentage of feed which passes through an
ideal separation (Schulz, 1970).
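As a worked illustration of the indicators in Table 2.1, the sketch below assumes a binary feed and the standard two-product formula; the numerical values are hypothetical.

def two_product_indicators(f, c, t):
    """f, c, t: feed, concentrate, and tailings grades (%).
    Returns yield, recovery, rejection, and separation efficiency (%)."""
    Y = 100.0 * (f - t) / (c - t)              # yield from the two-product formula
    R = Y * c / f                              # recovery of the valuable component
    J = (100.0 - Y) * t / f                    # rejection (valuable component to tailings)
    R_waste = Y * (100.0 - c) / (100.0 - f)    # recovery of waste to the concentrate
    SE = R - R_waste                           # separation efficiency
    return Y, R, J, SE

print(two_product_indicators(f=2.0, c=25.0, t=0.5))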
While separation efficiency and other standards indicate the metallurgical performance,
they do not reveal any information on the economic performance. Conversely, the most
common economic measure in the metal industry is the Net Smelter Return (NSR). This
value is found by subtracting the smelter charges, penalties, and transport costs from the
payment for the delivered metal. This value fluctuates with concentrate grade, though an
optimum value is usually obtainable within the technical limitations of the system (Sosa-
Blanco, Hodouin, Bazin, Lara-Valenzuela, & Salazar, 2000; Wills & Napier-Munn, 2006)
Given the complexity of most mineral separation plants, the generic terms given in Table 2.1 usually provide a sufficient starting point for the evaluation of metallurgical performance. Alternatively, coal preparation researchers have developed a number of plant-wide separation
efficiency indicators, largely driven by the standard modes of laboratory evaluation in coal
washing. Throughout the coal preparation plant, gravity techniques are predominantly used
to separate the binary coal-ash mixtures. A washability (or float-sink) test is a standard
laboratory procedure used to identify the relative density fractions of the feed coal (Osborne,
1988a; Leonard, 1991). This test effectively identifies the ideal separation potential at various
density cut-points. By comparing the actual separation performance of the plant to the ideal
separation determined from washability, several practical performance indicators may be
determined. For example, the organic efficiency is defined as the percentage ratio between the plant yield and the theoretical yield determined at the actual ash content (with the theoretical
values determined from washability testing). Similarly, the ash error is the percentage ratio
between the ash content of the actual clean coal product and the theoretical ash content
at the same yield. The International Standards Organization suggests that any statement
describing the performance of a coal preparation plant should include these two indicators
along with percentage of misplaced material in various size fractions and the total percentage
of correctly placed material (Osborne, 1988b; Leonard, 1991).
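Restating those two prose definitions symbolically (the symbols Y and A are introduced here for illustration only and are not part of the cited standards):

\text{Organic efficiency} = 100 \times \frac{Y_{\mathrm{plant}}}{Y_{\mathrm{theoretical\ at\ the\ same\ ash}}},
\qquad
\text{Ash error} = 100 \times \frac{A_{\mathrm{clean\ coal}}}{A_{\mathrm{theoretical\ at\ the\ same\ yield}}}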
Since washability analysis only applies to gravity separators and since flotation has
become prominent in many modern preparation plants, researchers have attempted to derive
testing methods which identify the ideal flotation partition. The release analysis is one such
method which utilizes successive batch flotation tests where the concentrate is re-floated
multiple times. This procedure attempts to minimize entrained particles while forming an
ideal grade-recovery curve (Dell, 1964). Modifications to the original testing procedure have
been introduced in order to minimize operator bias and increase testing ease (Honaker, 1996;
Pratten, Bensley, & Nicol, 1989). While the release analysis has gained substantial backing
in the flotation industry (especially in coal preparation), some criticism has undermined the
theoretical backing of the technique (Meloy, Whaley, & Williams, 1998). Here, authors argue
that the grade-recovery boundary is not unique to a given mineral system but is dependent on
various operational characteristics of the release analysis (i.e. the type of cell, the operator’s
experience, and the levels of the analysis). The authors support these claims through an
analytical evaluation of the possible outcomes of different test methods.
2.1.2 Material Sampling and Data Reconciliation
Most metallurgical decisions rely on the ability to gather mineral samples which are
later subjected to further analysis. The downstream uses of these samples rarely consider
the means in which they were retrieved, and non-representative samples often lead to errant
decisions and wasted resources. The challenges of performing unbiased sampling of hetero-
geneous mineral systems has been well studied (Gy, 1979, 1992). While the mathematical
approach of Gy is quite involved, the author provides practical, yet theoretically-supported,
standards for material sampling. Most of the work is based on the probabilistic quantifica-
tion of sampling errors and ways to minimize these errors during sampling processes. One
general rule is that sampling should be probabilistic rather than deterministic: all particles
in a given lot should have an equal probability of being sampled. In flowing streams, this
rule is usually satisfied incrementally: either a portion of the stream is sampled for a long
time or all of the stream is sampled for a short time. In general, the latter approach pro-
duces more reliable samples since mineral processing streams are often subjected to particle
classification (e.g. heavy solids settle to the bottom of a horizontal pipe).
Data collected from experimental studies can be somewhat unreliable, even when proper
sampling procedures are followed. Given the stochastic nature of mineral feed streams and
separation processes, individual samples are subject to marginal discrepancies. When redun-
dant data is collected, the assays must be reconciled prior to further analysis. One common
example of “redundant data” collection is fulfilled by sampling the feed and products for a
given unit. Since the feed assay can be back-calculated from the products, the feed assay, in
this case, is said to be redundant. While ignoring, omitting, or avoiding redundant data is
common, such actions represent poor uses of the collected information. Instead, a standard
data reconciliation method must be instituted to ensure that the final data set adheres to
the conservation of mass principle. By definition, a steady-state process does not experience
accumulation, and thus, the component mass of the products must equal the component
mass of the feed. In the mineral processing discipline, the adjustment of data to meet this
principle is deemed mass balancing (Luttrell, 1996).

Table 2.2: List of Error Distribution Functions used in Data Reconciliation. After (Özyurt & Pike, 2004)

Name          Equation                                                          Sensitivity to Gross Errors
Gaussian      Σ e²                                                              High
Fair          c²[|e|/c − log(1 + |e|/c)]                                        Moderate
Lorentzian    1/(1 + e²/2)                                                      Very low
Tjoa-Biegler  −log[(1 − η)·exp(−e²/2) + (η/b)·exp(−e²/(2b))] + log(√(2π)·σ)     Low

Legend: e = (measured − adjusted)/σ; σ = tolerance; η = probability of gross error;
c = tuning parameter, between 10 and 20; b = ratio of the large variance of gross error with respect to normal error
One common way to mass balance data is to minimize the difference between the experi-
mental data and the adjusted data while constraining the adjusted data to the mass-balanced
condition (Reklaitis & Schneider, 1983; Luttrell, 1996; Wills & Napier-Munn, 2006). This lin-
ear optimization problem may be solved by one of several optimization routines (See Section
2.1.4). The objective function, representing the error between the adjusted and measured
points, can be determined by one of several means, depending on the desired influence of gross
error. Four common error distribution functions are shown in Table 2.2 (Tjoa & Biegler,
1991; Özyurt & Pike, 2004). With the exception of the Lorentzian function, all others are
minimized during the optimization process. Given the form of the Lorentzian, the value is
maximized during optimization. Table 2.2 also shows the relative effect of gross error on the
reconciliation. This designation indicates the influence of a single erroneous data point on
the entire function. A highly sensitive method, such as the Gaussian, will allow gross error
to influence the adjustments throughout the circuit. Alternatively, low sensitivity methods,
such as the Lorentzian and Tjoa-Biegler, will localize the adjustments to the value which is
expected to be in gross error.
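
The practical effect of these objective functions is easiest to see in a small numerical sketch. The example below adjusts a redundant set of two-product assays (feed grade, concentrate grade, tailings grade, and mass yield; all values and tolerances are invented) by minimizing the Gaussian criterion of Table 2.2 subject to the two-product mass balance constraint. Swapping the objective for the Fair or Lorentzian forms would change only the error function.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical measured values for a two-product separator
measured = np.array([3.10, 21.5, 0.95, 0.105])   # feed %, conc %, tails %, mass yield to conc
sigma    = np.array([0.10, 0.50, 0.05, 0.010])   # assumed measurement tolerances

def gaussian_objective(adjusted):
    """Sum of squared, tolerance-weighted adjustments (Gaussian criterion, Table 2.2)."""
    e = (measured - adjusted) / sigma
    return np.sum(e**2)

def mass_balance(adjusted):
    """Two-product balance: feed grade = yield*conc grade + (1 - yield)*tails grade."""
    f, c, t, y = adjusted
    return f - (y * c + (1.0 - y) * t)

result = minimize(gaussian_objective, x0=measured,
                  constraints=[{"type": "eq", "fun": mass_balance}])

print("Adjusted assays:", np.round(result.x, 4))
print("Balance residual:", mass_balance(result.x))
```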
2.1.3 Curve Fitting and Interpolation
Regression, curve fitting, and interpolation are common engineering tools crucial to
the appropriate evaluation of mineral processing data. Within the mineral processing dis-
cipline, regression analysis has a marked influence on equipment comparison, evaluation of
performance indicators, empirical modeling, and simulation.
Curve fitting is an application of linear optimization. Curve fitting problems arise
when a set of experimental data is to be approximated by a model of known functional form.
In the linear case, analytical equations are readily available which can optimize the function
parameters (i.e., the slope and intercept for linear functions) via least squares regression
(Faires & Burden, 2003, p. 343). If the proposed model can be linearized, modified regression
equations can be derived to calculate the linearized parameters. Unfortunately, many process
models cannot be easily linearized, and more involved curve fitting must be conducted.
The generic curve fitting process begins by proposing a functional form with one or
more unknown parameters. More parameters entail a better fit to the experimental data,
while fewer parameters typically provide more physical meaning and understanding. Initial
values for the parameters are selected, the proposed model is calculated over the range of
the experimental data, and finally, the modeled values are compared to the experimental
values. An error function is defined which quantifies this difference between the modeled
parameters and the experimental parameters. For many curve fitting problems, some version
of the squared error may be used. The mean squared error (MSE) represents the average
error of each data point and is calculated by:
MSE = \frac{1}{n} \sum_{i=1}^{n} (x_i - y_i)^2

where x_i is the value of the experimental points, y_i is the value of the modeled points, and n is
the number of data points. Other error quantification methods may normalize the squared
value by the absolute magnitude of the value or allow user-defined weightings.
The optimization routine progresses by minimizing the error function by changing the
value of the model parameters. Various optimization strategies are presented in Section
2.1.4. Once the error function is minimized, the calculated model parameters represent the
best fit to the experimental data (Faires & Burden, 2003). This process may also be termed
parameter optimization to better reflect the mechanics of the calculations.
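
As a concrete illustration of this parameter-optimization loop, the sketch below fits a hypothetical first-order recovery model, R(t) = R_max(1 - e^{-kt}), to invented batch flotation data using a standard nonlinear least-squares routine; the functional form, starting guesses, and data values are assumptions chosen only to show the mechanics.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery_model(t, r_max, k):
    """Proposed functional form: first-order recovery with an ultimate recovery r_max."""
    return r_max * (1.0 - np.exp(-k * t))

# Hypothetical batch flotation data: time (min) and cumulative recovery (fraction)
t_data = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
r_data = np.array([0.31, 0.48, 0.67, 0.80, 0.86])

# Initial parameter guesses, then least-squares (MSE-type) minimization
popt, pcov = curve_fit(recovery_model, t_data, r_data, p0=[0.9, 0.5])
r_max_fit, k_fit = popt

residuals = r_data - recovery_model(t_data, *popt)
mse = np.mean(residuals**2)
print(f"R_max = {r_max_fit:.3f}, k = {k_fit:.3f} 1/min, MSE = {mse:.5f}")
```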
Depending on the knowledge of the appropriate functional forms and the veracity of
the experimental data, a simple curve fit may not be appropriate. For example, if the
Figure 2.1: Example of undesired oscillation as a result of a high-order (sixth-order) polynomial fit.
experimental data was gathered from a high precision land survey, a curve fit that does
not pass through every point is not valid for interpolation. As an alternative curve fitting
strategy, polynomial interpolation is often capable of producing much better approximations
when compared to other simple functions. By definition, a polynomial of degree n can
precisely represent a data set containing n+1 members (i.e. a linear function can precisely
fit two points, a quadratic function can precisely fit three points, etc.). While standard
algebraic functions are available to calculate the parameters of polynomial fits, higher order
polynomials are known to exhibit an unrealistic and undesired oscillation, as shown in Figure
2.1 (Faires & Burden, 2003). Furthermore, since higher-order polynomials require numerous
fitting parameters, the actual parameters entail less physical meaning.
Alternatively, another method of exact interpolation is by splines. Physically, splines are
graphical relics of hand plotting techniques which utilized French curves. Mathematically, a
spline fit uses piece-wise polynomial approximation to precisely estimate a set of experimen-
tal data. A spline fit provides a unique polynomial for each consecutive pair of points. A
first-order or linear spline is constructed by simply connecting the data point-to-point with
straight lines. The disadvantage with linear splines is that the resulting piecewise function
may have sharp corners and thus a discontinuous derivative. Instead, the most common
spline is the cubic spline which connects pairs of points with cubic polynomials. This ap-
proach provides a continuous first and second derivative along the data range, producing a
smooth, non-sharp curve.
To determine the cubic spline, four parameters must be solved for each pair of points
(a cubic polynomial interpolation requires four parameters). The challenge in constructing
splines is that while the interior polynomials have sufficient data to be fully constrained,
information on the slope at the boundary conditions is lost. Consequently, the boundary
slopes must be estimated. Many methods are available (Faires & Burden, 2003), though three
are common: (1) the end cubics approach linearity, (2) the end cubics approach parabolas,
and (3) the final slopes are a linear extrapolation from the neighboring points. A pragmatic
guide to spline construction has been presented by Gerald and Wheatley (1994, p. 200).
Figure 2.2 shows an arbitrary data set that has been estimated using various regression,
curve fitting, and spline approximation techniques.
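
A cubic spline of the kind described above can be constructed with a standard library routine. In the sketch below the data points are invented, and the 'natural' boundary condition (end cubics forced toward linearity) is selected as one of the three common end treatments.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical survey-type data that must be honored exactly
x = np.array([0.0, 1.0, 2.5, 4.0, 6.0])
y = np.array([1.2, 2.0, 1.6, 3.1, 2.8])

# 'natural' boundary condition: second derivative forced to zero at the ends,
# i.e. the end cubics approach straight lines
spline = CubicSpline(x, y, bc_type="natural")

x_new = np.linspace(x.min(), x.max(), 7)
print(np.round(spline(x_new), 3))        # interpolated values
print(np.round(spline(x_new, 1), 3))     # continuous first derivative
```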
2.1.4 Optimization
Numerical optimization is a branch of engineering mathematics and computational re-
search which is concerned with identifying the extrema of functions. Classical optimization
problems are formulated by three mathematically defined parts: (1) the design vector, (2)
the objective function, and (3) the constraint vector. The design vector contains all of the
parameters which can be controlled by the designer. Often, a starting guess is required to
initialize the design vector. The objective function defines the value which is to be mini-
mized or maximized. This function is defined in terms of the elements of the design vector.
Finally, optimization problems may be constrained or unconstrained, depending if physical
or other limitations must be applied to various elements of the design vector. If the problem
is constrained, these constraints are formulated in vector form as a function of the elements
of the design vector.
A solution which meets the entire constraint set is said to be a feasible solution (Foulds,
1981). Optimization problems must be stated in terms of a single objective function. If
more than one objective is desired (e.g. maximize profits while minimizing investor
risk), a weighting factor may be used to combine both goals into a single objective function.
Alternatively, the most important criterion may be set by the objective function, while simply
imposing constraints on the secondary objectives (Bhatti, 2000).
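
The three-part formulation maps directly onto most numerical optimization libraries. The toy problem below, with an invented quadratic objective, a starting design vector, simple bounds, and one inequality constraint, shows how the pieces fit together.

```python
import numpy as np
from scipy.optimize import minimize

# Design vector: x = [x1, x2], initialized with a starting guess
x0 = np.array([2.0, 2.0])

def objective(x):
    """Value to be minimized, defined over the design vector."""
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

constraints = [
    # Feasibility requires x1 + x2 <= 3 (written as g(x) >= 0 for SciPy)
    {"type": "ineq", "fun": lambda x: 3.0 - (x[0] + x[1])},
]
bounds = [(0.0, None), (0.0, None)]   # physical limits on each design variable

result = minimize(objective, x0, bounds=bounds, constraints=constraints)
print("Feasible optimum:", np.round(result.x, 4), "objective:", round(result.fun, 4))
```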
Most contemporary optimization techniques may be classified as either enumerative,
random, or calculus-based (Foulds, 1981; Goldberg & Holland, 1988). Enumerative, or
direct-search, techniques are the most straightforward. The solution space of the design
vector is partitioned as a grid, and every possible combination of parameters is tested and
compared to determine the optimum configuration. Purely random techniques (i.e. random
walks) institute a similar methodology, but the solution space is randomly sampled in an
attempt to hasten the calculation time. Nevertheless, both direct-search and random op-
timization techniques are grossly inefficient and require substantial computation resources
when considering even modest problems (Goldberg & Holland, 1988).
Calculus-based methods, such as linear programming and the simplex method, rely on
the gradient of the objective function to establish the search direction and step size. In prac-
tice, these search methods are akin to hill-climbing: the crest is determined by traversing in
the direction of the steepest slope until one begins to descend. These and other calculus-based
methods generally rely on known or estimated derivative and second derivative information
in order to establish the slope gradients. As a result, the derivatives must generally be con-
tinuous and defined over the anticipated design vector range. With the additional auxiliary
information, calculus-based methods are substantially more efficient than enumerative and
random techniques; however, the added complexity results in a loss of robustness. Many
calculus-based methods tend to isolate local, rather than global extrema, especially if the
technique is ill-suited for the problem type (Bhatti, 2000). Furthermore, when the objec-
tive function is nonlinear, quadratic programming or other classical optimization methods
(such as Newton’s method) must be applied. Further subclasses of calculus-based optimiza-
tion techniques are available for integer or binary-constrained design vector values (Foulds,
1981).
Since many conventional optimization techniques are limited by computation ineffi-
ciency, lack of robustness, and solution divergence, research has attempted to redefine the
optimization paradigm by abandoning the calculus-based influences on which most tradi-
tional optimization theory is based. Holland (1975) created genetic algorithms to optimize
functions in a manner similar to the evolutionary processes found in nature. Genetic al-
gorithms utilize stochastic processes to “evolve” a design vector until an optimal solution
is reached. Unlike calculus-based methods, genetic algorithms do not require any auxiliary
information, and thus, even the existence of a first derivative is not necessary to efficiently
obtain a solution. Genetic algorithms operate analogously to natural selection and biological
evolution (Goldberg & Holland, 1988; Holland, 1992).
Genetic algorithms denote a substantial increase in solution robustness, especially in
nonlinear and otherwise complex search spaces. In this regard, genetic algorithms differ
from other searches in that they: initiate from a population rather than a single point, rely
simply on the value of the objective function rather than auxiliary information, and they
utilize stochastic rather than deterministic operations (Goldberg & Holland, 1988).
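
A minimal genetic-algorithm loop capturing these differences is sketched below: a population of design vectors is evaluated on the objective value alone, and selection, crossover, and mutation (all settings invented) evolve the population toward the optimum of a simple test function. No derivative information is used at any point.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def objective(x):
    """Test function to minimize (no derivatives are ever evaluated)."""
    return np.sum((x - 3.0)**2, axis=-1)

pop_size, n_genes, n_generations = 40, 2, 60
population = rng.uniform(-10.0, 10.0, size=(pop_size, n_genes))

for _ in range(n_generations):
    fitness = objective(population)
    # Selection: keep the better half of the population
    parents = population[np.argsort(fitness)[: pop_size // 2]]
    # Crossover: children are averages of randomly paired parents
    idx_a = rng.integers(0, len(parents), pop_size)
    idx_b = rng.integers(0, len(parents), pop_size)
    children = 0.5 * (parents[idx_a] + parents[idx_b])
    # Mutation: small stochastic perturbation of every child
    population = children + rng.normal(0.0, 0.2, size=children.shape)

best = population[np.argmin(objective(population))]
print("Best design vector found:", np.round(best, 3))
```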
2.2 Circuit Modeling and Simulation
2.2.1 Modeling of Process Unit Operations
Over the last 40 years, modeling and simulation of unit operations has advanced as one
of the primary research areas in the discipline of mineral processing. In general, modeling
refers to the process of describing a physical phenomenon in terms of mathematical equations,
while simulation refers to the solving of those equations to predict potential outcomes. In
the case of mineral processing, a process model is used to predict the concentrate and tailings
product from a given unit operation when provided descriptions of the feed and operational
parameters.
Most process models are classified in terms of the model fidelity, earning the distinction
of either an empirical, phenomenological, or theoretical model. For much of the last century,
empirical models have found the most widespread usage and availability (Wills & Napier-
Munn, 2006). From a mathematical perspective, an empirical model does not actually con-
sider the physical subprocesses of the separation system but is simply a curve-fit which seeks
to consolidate experimental data. Despite their simplicity, empirical models are especially
useful, since they are relatively easy to construct and apply (Napier-Munn & Lynch, 1992;
Wills & Napier-Munn, 2006). Furthermore, the functional form of the resulting curve fit may
indicate the ultimate form of a more theoretical model. The only requirements for empirical
models are ample experimental data and curve-fitting or regression software. Unfortunately,
empirical models are prone to catastrophic failure if simulation seeks to extrapolate beyond
the range of experimental data used to build the model. A common example of this failure
is given by the extrapolation of power versus mill load data in a ball mill grinding system
(Figure 2.3).
Extrapolation fallacies, such as the one presented in Figure 2.3, illustrate the lack of
predictivecapacityinherenttodata-drivenmodels. Ontheotherendofthefidelityspectrum,
purely theoretical models (or transport phenomena models) require no initial experimental
data and are entirely predictive when based on sound fundamental knowledge (Napier-Munn
& Lynch, 1992). Unfortunately, the unit operations in the mineral processing industry are
vastly complex and incorporate numerous physical and chemical subprocesses. Additionally,
comprehensive theoretical models must know or be able to predict the entire liberation state
of each particle in the system, since most separation principles are largely dependent on
liberation. Consequently, the development of comprehensive theoretical models has been
deterred for most mineral processing systems; however, recent attempts have been made to
model the flotation system from first principles (See Section 2.2.3).
In order to balance the benefits and detriments to either modeling paradigm, recent
effort has been placed in phenomenological modeling. Generally, the phenomenological ap-
proach considers the various physical subprocesses to an extent in identifying the functional
forms; however, experimentation, rather than fundamental science, is used to finalize the
model parameters. Since these models are in part based on scientific principles, they are
much less sensitive to catastrophic failure than empirical models. As a result, these models
have found widespread integration in process scale-up and circuit simulation (King, 2001;
Wills & Napier-Munn, 2006). The most common phenomenological approach, the popula-
tion balance approach, essentially tracks the transport of individual particles throughout a
separation system (Himmelblau & Bischoff, 1968). This modeling approach can be conve-
niently applied to dynamic or steady-state systems and provide fundamental insight when
the model is well developed and vetted.
The most fundamental form of the population balance model states that the accumula-
tion of particles is equal to the input minus the output plus net generation. For population
balance models, this general articulation accounts for both the transport in physical space, as
particles move throughout a system, as well as property space, as the characteristic property
of individual particles changes within a process unit. Mathematically, the general micro-
scopic population balance model is given by:

\frac{d\psi}{dt} + \frac{d}{dx}(v_x \psi) + \frac{d}{dy}(v_y \psi) + \frac{d}{dz}(v_z \psi) + \sum_{j=1}^{J} \frac{d}{d\varsigma_j}(v_j \psi) + \dot{D} - \dot{A} = 0

where ψ is the number concentration of particles; x, y, and z are directions in physical space; ς
is the direction in property space; Ḋ is the rate of particle disappearance; and Ȧ is the rate of
particle appearance. From this nomenclature, the first term (dψ/dt) represents accumulation;
the second, third, and fourth terms (d/dx(v_x ψ), d/dy(v_y ψ), and d/dz(v_z ψ)) represent the
physical transport terms; the fifth term (d/dς_j(v_j ψ)) represents continuous changes in property
space; and the final two terms (Ḋ and Ȧ) represent discrete changes in property or physical
space. King (2001) has provided an extensive review of population balance models for various
mineral processing unit operations, including size classification, comminution, dewatering,
gravity separation, magnetic separation, and flotation.
Several commercial simulation packages are available which utilize various process mod-
els and data fitting routines. The most widely used software today include JKSimMet
(Cameron & Morrison, 1991; Richardson, 2002), Modsim (King, 2001), and Limn (Nageswararao,
Wiseman, & Napier-Munn, 2004; Hand & Wiseman, 2010). Each of these simulation pack-
ages is currently undergoing continuous research and development.
2.2.2 Modeling Partition Separators
One basic method of empirically modeling a separation system is by a partition curve.
Partition curves were first developed by Tromp (1937) to evaluate the efficiency of various
coal cleaning methods. A basic reduced partition curve is shown in Figure 2.4.
The reduced partition curve shows the probability of reporting to the concentrate prod-
uct as a function of a dimensionless property. The property depicted on the horizontal axis
is typically the property on which the separation is based (e.g. gravity, size, magnetic sus-
ceptibility) or the particle composition. The characteristic “S” shape of the curve indicates
that the separation probability is normally distributed about a single value of the separation
property. The true value of this central property is known as the “cut-point” since particles
of this property have equal probability of reporting to either product. To normalize the
horizontal axis in the reduced curve, all values of the property are divided by the cut-point,
so that the 50% probability refers to the cut-point value of one. The ideal partition curve
(also shown in Figure 2.4), has a probability of zero up to the cut-point and a value of one
for all values greater than the cut-point. The area between the real curve and the ideal curve
is sometimes distinguished as the “error area” (Wills & Napier-Munn, 2006).
Another significant characteristic of the partition curve is the slope of the curve at the
50% probability. This value is generally termed the “separation sharpness”, though several
precise mathematical interpretations or fitting parameters are found in the literature (E_p,
I, λ, α) (Osborne, 1988a; Leonard, 1991; King, 2001; Wills & Napier-Munn, 2006). Of
particular interest in dense-media separation is the probable error of separation or the Ecart
probable (E_p) and the imperfection (I). These are given by:

E_p = \frac{d_{75} - d_{25}}{2}

I = \frac{E_p}{d_{50} - 1}

where d_{25}, d_{50}, and d_{75} represent the property value at 25%, 50%, and 75% recovery, respectively.
The two remaining characteristics of the partition curve are the high and low bypass
values. These values are generally represented as the probabilities where the curve closes at
the high and low extremes of the property values.
Though partition curves were originally developed to evaluate equipment performance,
they may also be used for empirical simulation. The mathematical parameters of the parti-
tion curve are often independent of the feed composition and unique to specific separation
units. Experimental testing can be used to identify the curve parameters at standard opera-
tionalconditions, andfurthertestingcanderiveempiricalrelationshipstorelatethepartition
function parameters to specific operational and equipment variables. Researchers have iden-
tified four qualities preferred in all proposed partition functions: (1) the existence of natural
asymptotes, (2) the ability to express asymmetry about the cut-point, (3) mathematical
continuousness, and (4) parameters which can be easily estimated by accessible methods
(Stratford & Napier-Munn, 1986; Wills & Napier-Munn, 2006).
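
As a numerical illustration, the sketch below assumes a simple symmetric logistic partition function (which satisfies the asymptote and continuity criteria, though not asymmetry) for a dense-media separation, then recovers the E_p and I indicators from the d_25, d_50, and d_75 points. The cut-point and sharpness values are invented.

```python
import numpy as np

def partition(rho, d50=1.55, sharpness=22.0):
    """Hypothetical logistic partition curve: probability of reporting to concentrate."""
    return 1.0 / (1.0 + np.exp(-sharpness * (rho - d50)))

def property_at(target, lo=1.2, hi=2.0):
    """Invert the partition curve numerically to find the density at a given probability."""
    rho = np.linspace(lo, hi, 20001)
    return rho[np.argmin(np.abs(partition(rho) - target))]

d25, d50, d75 = (property_at(p) for p in (0.25, 0.50, 0.75))
ep = (d75 - d25) / 2.0                 # Ecart probable
imperfection = ep / (d50 - 1.0)        # Imperfection

print(f"d50 = {d50:.3f}, Ep = {ep:.4f}, I = {imperfection:.4f}")
```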
2.2.3 Kinetic Modeling of Flotation
Considerable effort has been placed in developing predictive models for flotation per-
formance. The cause of this interest is likely a result of its dominance in the mineral pro-
cessing industry as a separation process as well as the incredible complexity, plurality, and
interdependence of the relevant subprocesses. To date, comprehensive and purely theoretical
flotation models remain immature, though several recent authors have provided a foundation
for this work (Sherrell, 2004; Do, 2010; Kelley, Noble, Luttrell, & Yoon, 2012). Nevertheless,
empirical and partially phenomenological models have been well vetted and used extensively
for many industrial simulation purposes. From a microscopic perspective, the complex me-
chanics of froth flotation may be described by several transport mechanisms. The most
recent studies include the rate of pulp to froth transport by bubble attachment, the rate
of material drop-back from the froth, the rate of water drainage from the froth, and the
rate of entrainment. Most modeling approaches attempt to quantify the specific rates and
interaction of these mechanisms.
Many researchers have empirically witnessed the kinetic behavior of bulk flotation re-
covery as a function of time. This evidence has prompted many to model flotation as a
first-order rate process analogous to a chemical reaction (Sutherland, 1948; Tomlinson &
Fleming, 1965; Fichera & Chudacek, 1992). Other order rate models have been postulated,
but few have gained as much widespread applicability as the first-order model. The first-
orderratemodeldefinesaconstantproportionalitybetweenthedepletionofmineralparticles
(dN/dt) and the number of particles in the system (N):
dN/dt = kN (2.1)
where k is a proportionality or rate constant.
From the first-order assumption, Equation 2.1 may be solved at various boundary con-
ditions to determine the recovery (R) as a function of flotation time (τ) for both a plug-flow
reactor (Equation 2.2) and a perfectly-mixed reactor (Equation 2.3), depending on the res-
idence time distribution (Levenspiel, 1999). These equations have been used to model the
flotation process in scaling from a laboratory to an industrial flotation unit:

R_{Plug} = 1 - e^{-k\tau}    (2.2)

R_{Mixed} = \frac{k\tau}{1 + k\tau}    (2.3)
Several modifications to these models have been proposed to incorporate a theoretical
maximum recovery and a flotation delay time (Dowling, Klimpel, & Aplan, 1985; Gorain,
Franzidis, Manlapig, Ward, & Johnson, 2000; Sripriya, Rao, & Choudhury, 2003). Ad-
ditionally, some researchers have suggested that industrial cells (especially column cells)
substantially deviate from the perfectly-mixed assumption (Dobby & Finch, 1988; Luttrell
& Yoon, 1991). Coinciding with the aforementioned chemical reaction analogy, these au-
thors have suggested the axially-dispersed reactor model (ADR) which defines recovery as a
function of the degree of axial mixing, via the Peclet number (Pe) (Levenspiel, 1999):
R_{ADR} = 1 - \frac{4A \exp\{Pe/2\}}{(1+A)^2 \exp\{(A/2)Pe\} - (1-A)^2 \exp\{(-A/2)Pe\}}

A = \sqrt{1 + 4k\tau/Pe}.
For extreme values of the Peclet number, the behavior of the ADR model approaches
that of the perfectly-mixed and plug-flow models (Equations 2.2 and 2.3). For high Peclet
numbers (> 99), plug-flow behavior is experienced, while low Peclet numbers (< 0.001)
produce perfectly-mixed results. Figure 2.5 compares these three rate recovery models. The
ADR model is shown for two different Peclet numbers.
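
The three rate-recovery expressions are easy to compare numerically. The sketch below evaluates Equations 2.2, 2.3, and the ADR model for an assumed rate constant, residence time, and a few Peclet numbers; all values are illustrative only.

```python
import numpy as np

def recovery_plug(k, tau):
    return 1.0 - np.exp(-k * tau)

def recovery_mixed(k, tau):
    return k * tau / (1.0 + k * tau)

def recovery_adr(k, tau, pe):
    """Axially-dispersed reactor model (Levenspiel, 1999)."""
    a = np.sqrt(1.0 + 4.0 * k * tau / pe)
    num = 4.0 * a * np.exp(pe / 2.0)
    den = (1.0 + a)**2 * np.exp(a * pe / 2.0) - (1.0 - a)**2 * np.exp(-a * pe / 2.0)
    return 1.0 - num / den

k, tau = 0.8, 4.0   # assumed rate constant (1/min) and residence time (min)
print(f"Plug flow:       R = {recovery_plug(k, tau):.3f}")
print(f"Perfectly mixed: R = {recovery_mixed(k, tau):.3f}")
for pe in (0.5, 5.0, 100.0):
    print(f"ADR (Pe = {pe:>5}): R = {recovery_adr(k, tau, pe):.3f}")
```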
While the general rate-based approach to flotation modeling has substantial empiri-
cal justification, researchers and practitioners have realized that not all particles of a given
mineral in a flotation system exhibit the same kinetics. This observation has led to the
development of distributed parameter rate models (Fichera & Chudacek, 1992). Various
researchers have identified properties to justify the distribution, with one of the more preva-
lent parameters being particle size. Gaudin, Schuhmann Jr, and Schlechten (1942) first
experimentally measured the dependence of flotation rate on particle size, noting the sub-
stantial degradation in flotation rate for large particles. This observation was later given a
more thorough theoretical consideration which investigated the streamline hydrodynamics
for given bubble and particle sizes (Sutherland, 1948).
A more general approach to model parameterization was conducted by Imaizumi and
Inoue (1965). This modeling approach considers distributed floatability classes which lump
together the combined effects of particle size, shape, and other surface properties. Most
contemporary flotation models include some form of distributed flotation classes, often in the
form of a double distributed model which includes size and floatability (Fichera & Chudacek,
1992).
Further attempts to add fundamental insight to the empirical first-order observation
have led many to propose analytical expressions for the flotation rate constant. These
expressions generally suggest a strong dependence of the flotation rate on gas dispersion. One
such model suggests that the rate constant is proportional to the bubble surface area flux (S_b)
and a generic probability or collection efficiency term (P) (Jameson, Nam, & Young, 1977;
Yoon & Mao, 1996; Gorain, Franzidis, & Manlapig, 1997; Gorain, Napier-Munn, Franzidis,
& Manlapig, 1998):

k = 0.25 P S_b.

Here, S_b is a derived term which defines the degree of aeration present in the cell (Finch &
Dobby, 1990; Gorain et al., 1997; Gorain, Napier-Munn, et al., 1998). S_b mathematically
balances the superficial gas velocity (J_g) and the mean bubble size (d_b):

S_b = \frac{6 J_g}{d_b}.
This model has been very successful at normalizing flotation performance when the
gas dispersion variables are known. The linear k-S_b relationship has been experimen-
tally verified for various minerals and at various scales (Gorain, Napier-Munn, et al., 1998;
Hernandez-Aguilar, Rao, & Finch, 2005; Noble, 2012). The overall acceptance of this model
has led to several comprehensive studies in characterizing and quantifying gas dispersion
in flotation cells (Finch, Xiao, Hardie, & Gomez, 2000; Tavera, Escudero, & Finch, 2001;
Kracht, Vallebuona, & Casali, 2005; Schwarz & Alexander, 2006; Miskovic, 2011).
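
Used predictively, the relationship allows the rate constant to be estimated directly from gas dispersion measurements. The short snippet below computes S_b from an assumed superficial gas velocity and bubble size and then k for an assumed collection efficiency; every numerical value is hypothetical.

```python
# Assumed gas dispersion measurements
j_g = 1.0       # superficial gas velocity, cm/s
d_b = 0.12      # mean bubble diameter, cm
p   = 0.001     # assumed collection efficiency (dimensionless)

s_b = 6.0 * j_g / d_b      # bubble surface area flux, 1/s
k = 0.25 * p * s_b         # flotation rate constant, 1/s

print(f"S_b = {s_b:.1f} 1/s, k = {k:.4f} 1/s")
```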
Other models have proposed a purely theoretical expression for k, based on surface
chemistry and hydrodynamic variables (Luttrell & Yoon, 1992, 1991; Mao & Yoon, 1997;
Sherrell, 2004; Do, 2010). These models were originally applicable for predicting rate con-
stants under quiescent conditions, such as in column cells. More recently, the fundamental
models have addressed the turbulent hydrodynamic conditions found in conventional cells.
Additionally, these approaches have added fundamental or semi-empirical models to describe
material drop-back and fluid drainage from the froth. All of these fundamental models are
based on a compartment paradigm which independently defines the flotation rate constant
as a combination of probabilities of collision (P_c), attachment (P_a), and detachment (P_d):

k = P S_b = (P_c P_a (1 - P_d)) S_b.
In these models, the probability terms have been analytically defined using fundamental
hydrodynamic variables (such as turbulent kinetic energy) and surface energies calculated
from the Van Der Waals, electrostatic, and hydrophobic force components. The extended
DLVO theory is invoked to define the composite interaction of these forces (Yoon & Wang,
2007; Kelley et al., 2012). Ultimately these fundamental models predict flotation perfor-
mance as a function of intensive mineral properties and machine characteristics which are
either well known or do not change with scale (Kelley et al., 2012).
In addition to the aggregate recovery models, other recent studies have focused on the
inclusion of other transport mechanisms, such as froth recovery and entrainment. Such
models consider flotation to be a two stage process, while modeling the pulp and the froth as
independent reactors. Most of the pure pulp recovery models invoke analytical forms similar
to the rate models presented above with some empirical correction to negate the ever-present
froth effects (Gorain, Harris, Franzidis, & Manlapig, 1998; Vera et al., 2002).
Similar to pulp recovery, froth drop-back has been identified as a rate process which
can be modeled as a plug-flow reactor considering the interaction of a rate constant and
residence time (Equation 2.2) (Gorain, Harris, et al., 1998). When the independent froth
(R_f) and pulp (R_p) recoveries are known, the overall recovery may be calculated by (Finch
& Dobby, 1990):

R = \frac{R_f R_p}{1 - (1 - R_f) R_p}.
Since the identification of the two compartment flotation modeling approach and the
kinetics of froth drop-back, researchers have attempted to gain further fundamental insight, espe-
cially with regard to froth residence time (Vera et al., 2002). Most simply, froth residence
time can be determined by dividing the froth height by the superficial gas rate for the cell
(τ_f = h/J_g) (Mathe, Harris, O’Connor, & Franzidis, 1998). Since this calculation does not
accommodate for different cell geometries and froth travel distances, many have proposed
revisions to the initial calculation, while retaining the kinetic plug-flow model. Gorain, Har-
ris, et al. (1998) suggest the inclusion of the distance from the center of the flotation cell
to the launder, while Lynch, Johnson, Manlapig, and Thorne (1981) base the calculation on
the volumetric slurry flow through the froth.
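
These pulp and froth expressions combine into a compact two-compartment calculation. The sketch below estimates pulp recovery from the perfectly-mixed rate model, froth recovery as an exponential decay over the simple froth residence time τ_f = h/J_g (one interpretation of the plug-flow drop-back kinetics), and then the overall recovery; the rate constants and cell values are assumptions.

```python
import numpy as np

# Assumed pulp-zone parameters
k_pulp, tau_pulp = 1.2, 3.0          # 1/min, min
r_pulp = k_pulp * tau_pulp / (1.0 + k_pulp * tau_pulp)   # perfectly-mixed pulp (Eq. 2.3)

# Assumed froth-zone parameters
h_froth = 10.0                          # froth height, cm
j_g = 60.0                              # superficial gas rate, cm/min
tau_froth = h_froth / j_g               # simple froth residence time, min
k_froth = 2.0                           # assumed froth drop-back rate constant, 1/min
r_froth = np.exp(-k_froth * tau_froth)  # fraction surviving drop-back in the froth

# Overall recovery from independent pulp and froth recoveries (Finch & Dobby, 1990)
r_overall = (r_froth * r_pulp) / (1.0 - (1.0 - r_froth) * r_pulp)
print(f"R_pulp = {r_pulp:.3f}, R_froth = {r_froth:.3f}, R_overall = {r_overall:.3f}")
```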
2.3 Circuit Analysis and Optimization
2.3.1 Design Principles
Since staged separation is often necessary to meet final product requirements, circuit
designers must designate the flow configuration between various process units. This set of
decisions, constituting circuit design, may involve the selection of different unit operations,
different equipment models or sizes, different operational parameters, and different unit in-
terconnections. To assist circuit designers, researchers have attempted to establish standard
design methodologies which involve various analytical techniques and tools. These tools are
typically guided by some optimization strategy and a generic process model applicable for
the given separation.
Circuit design analysis and optimization methods can be described on a continuum
scale depicting the level of direct mathematical involvement and intensity (Figure 2.6). On
the lower portion of the scale are purely heuristic methods. These circuit analysis techniques
utilize rules and guidelines which may or may not be based on sophisticated mathematical
integration. Conversely, purely numerical optimization routines define the higher portion
of the scale. These methods have incorporated various optimization algorithms, including
linear programming, non-linear programming, gradient-based optimization, and genetic opti-
mization. Both extremes of this scale introduce numerous advantages and disadvantages.
Recent trends are seemingly favoring high-tech numerical algorithms to accommodate the
nonlinear, discontinuous design parameters associated with separation circuits; however,
contemporary industrial practice still favors more heuristic solutions. Consequently, several
active research projects are developing strategies at all points along the continuum. This sec-
tion will review the state-of-the art in these optimization strategies while noting the merits
and drawbacks to the various methods.
2.3.2 Classic Heuristic Methods
In general, the term heuristic refers to a learned behavior derived from a set of loosely-
defined rules. With reference to separation circuit design, a heuristic approach refers to
the use of established operator practices, accepted “rules-of-thumb”, or quantitative design
regulations when generating preliminary alternatives (Wills & Napier-Munn, 2006). In this
review, the pure heuristic approaches presented in the literature have been classified into
two groups: (1) those that simply impose design principles from empirical observation and
(2) those which derive the heuristics from process models. While the heuristic methods
appear less scientifically-sound than high level analytical and numerical approaches, their
lack of sophistication is offset by their ability to accommodate operator experience and
common-sense design constraints. Additionally, when well-formulated and valid, heuristics
are the most easy to implement, since no analytical or computational resources are required.
Unfortunately, many reported heuristics are dependent on the process model validity, or they
are only applicable in the specified site conditions. Furthermore, model-based heuristics may
provide conflicting solutions, if all of the rules cannot be satisfied simultaneously.
Much of preliminary circuit design is driven by trial-and-error and accepted industry
practices (Lauder & McKee, 1986; Wills & Napier-Munn, 2006; Lucay, Mellado, Cisternas,
& Galvez, 2012). This approach has driven the industry for much of the known past and
continues to be the method of choice for many circuit designers. Malghan (1986) notes that
regional bias may also influence the general paradigm or approach to circuit design. At the
time of his publication, poly-metallic sulfides and porphyry copper deposits were primarily
processed by bulk flotation in the Americas, sequential copper-lead-zinc flotation in Aus-
tralia, and low-throughput, complex circuits in Scandinavia. Furthermore, Malghan claims
that open-circuits (those lacking recycle streams) were becoming increasingly common. The
author also suggests simple design principles loosely based on a kinetic model of flotation.
For example, high-grade material is claimed to float quicker than lower-grade middling ma-
terial. In the instances where the rougher concentrate from the first cell meets product
specifications, the floated material may be immediately directed to the final concentrate.
The author also describes other common flotation practices including:
• The sizing of units based on the residence time required for desired recovery;
• The regrind of middling material produced as scavenger concentrate;
• The inclusion of sufficient units in a bank to prevent short-circuiting;
• The addition of conditioning or agitation tanks to accommodate circuit flexibility;
• The selection of the type of flotation cells, perhaps considering columns for cleaner
flotation;
• Common flowsheets for copper flotation, copper-lead-zinc flotation, molybdenite flota-
tion, nickel flotation, feldspar flotation, and phosphate flotation.
These principles are simply presented as the state of the industry at the time of publication.
The author makes no claim that the rules and design principles are applicable in all circum-
stances or that they represent optimal solutions (Malghan, 1986). Despite the age of this
study, many of these principles are still in use today.
At the same time, Lauder and McKee (1986) presented a more data-driven, empirical
critique of circuit design, focusing on the parameter of circulating loads in flotation plants.
Earlier theory had suggested that improved separation performance is achieved by increas-
ing the circulating load if the plant had the available capacity (Loveday & Marchant, 1972).
In the present study, two circuits were tested in parallel to definitively validate this claim.
Both circuits were operated identically, with the only variation being the rougher volume.
By altering the rougher volume between the two circuits, the amount of rougher concentrate
was controlled, and subsequently, varying circulating loads were produced in downstream
operations. The parallel arrangement of the circuits ensured similar chemistry and mineral-
ogy; therefore, the measured performance differences were solely attributed to the variations
in circuit design. The authors conclude that increased circulating load (and thus circuit
configuration, in general) is capable of increasing both grade and recovery simultaneously.
While other operational changes move the performance along the same grade-recovery curve,
the circuit arrangement is capable of moving the values to a new curve. Despite the plurality
of available literature on modeling and circuit design at the time (e.g., D. Sutherland, 1981;
Meloy, 1983b, 1983a; M. Williams & Meloy, 1983; M. Williams, Fuerstenau, & Meloy, 1986;
Chan & Prince, 1986), the authors argue that the lack of fundamental insight on circulating
loads and the lack of a widely accepted flotation model contribute to the overwhelmingly
empirical circuit design process. Furthermore, the introduction of either of these tools would
be beneficial in balancing the metallurgical gain of increased circulating loads with the loss
of processing resources. Their oversight of the available scientific literature does not sug-
gest deliberate neglect, but rather, the lapse is likely an indicator of the lack of technology
transfer between industry and academia prevalent at the time.
As a transitional point between the empirical and model-based heuristic methodolo-
gies, Cameron and Morrison (1991) describe approaches to both steady-state and dynamic
optimization using the technologies developed at the Julius Kruttschnitt Mineral Research
Centre(JKMRC).First, thetermoptimum isgivencontextualmeaning. Theauthorsconfide
that optimum may have different meanings depending on the given operation and corporate culture.
Typically, plant personnel suffer from compartmentalized optimization which may focus on
limited factors without considering downstream effects. In summary, the authors state that
unless an optimum is related to specific parameters (i.e. “optimize quarterly profits”), the
term is essentially meaningless. As a result, they show how JKSimMet and other model-
based simulation software have been used to increase performance at various operations. No
attempt is made to generalize the optimization strategies, rather the authors simply state
how their software can be adapted and applied at various sites.
Conversely, one decade prior to Cameron and Morrison, D. Sutherland (1981) provided a
general strategy to optimize resource allocation in rougher-scavenger-cleaner flotation plants
using simulations derived from simple kinetic flotation models. In this analysis, Sutherland
assumed that the flotation process can be effectively described by a first-order rate constant
which does not change between various stages of flotation, and each individual cell was
modeled as a perfectly-mixed reactor (Equation 2.3, given in Section 2.2.3). Finally, to
simplify the calculations, Sutherland assumed a constant solids hold up throughout the
circuit. Since a generic circuit configuration was selected (rougher-scavenger-cleaner with
recycle), the simulations were conducted to assess how residence time should be split between
the three units to yield the best separation performance.
In the study, Sutherland hypothetically established four flotation/grade classes: fast
floating mineral, slow floating mineral, fast floating gangue, and slow floating gangue. Rea-
sonable values were selected for the flotation rates and grades of these classes. Next, sim-
ulations were performed for various residence times in the rougher, scavenger, and cleaner.
To constrain the system of equations to a single independent variable, fixed values were se-
lected for the total plant size and the desired plant recovery. Hence, the size of one unit was
selected independently, and the other two were calculated from the equations describing the
full plant recovery and the total plant size. By varying the size of the cleaner bank, the final
product grade was determined as a function of the number of cleaner cells for a fixed plant
recovery and plant size. This result was plotted as product grade versus the ratio of resi-
dence times in the cleaner and the rougher. The simulations indicate that the highest grade
(and thus best separation efficiency) is achieved when the residence times in the rougher and
cleaner are nearly equal. However, in the examples shown by Sutherland, the final product
grade was highly insensitive to changes in resource allocation for most normal operational
cases. The data showed that significant benefits were only witnessed when the plant was being
pushed for high recovery or when a gross imbalance existed between the stages. As a result,
Sutherland stresses that selectivity in the individual stages is much more crucial to plant
performance than simple resource allocation. Thus, optimization efforts should focus on the
study of chemical and operational parameters.
A similar model-based optimization strategy was proposed by Loveday and Brouckaert
(1995). Here, the authors based the optimization on maximizing the partition separation
sharpness (see Section 2.2.2). For case of flotation, Loveday and Brouckaert define the
separation sharpness as the slope of the recovery versus rate plot where the recovery equals
50%. A higher slope at this point indicates an increased ability to distinguish middling
material. The authors show that in the case of single stage flotation, the separation sharpness
is very poor. Therefore, multiple stages and increased recirculating loads are necessary to
produce acceptable separation performance in a flotation plant. The authors postulate that
the optimum recycle is achieved when the maximum slope of the recovery-rate plot is at the
R = 50% point. As shown in the paper, the maximum slope starts at R = 0% for no recycle
and increases exponentially as the amount of recirculation increases. The authors then
show the calculation steps needed to determine the appropriate recycle to achieve this goal
and the cell volumes required. The initial calculation was shown for a single-rougher cleaner
circuit, but the calculation is then repeated for several counter-current circuit configurations.
The conclusions of their paper highlight the need for extensive batch and pilot testing to
characterize the flotation kinetics and rate distribution of the ore.
2.3.3 Linear Circuit Analysis and Analytical Heuristics
The concept of linear circuit analysis (LCA) was first derived by Meloy (1983a) in order
to provide a method of optimizing multi-unit separation circuit configurations. This original
paper eventually developed into a series of publications examining various aspects and ap-
plications of the methodology. The impact of these papers in the literature spanned nearly
two decades with much of the original developments occurring in the early 1980’s. In the
groundbreaking work, a series of circuit design principles were generated from fundamental
observations on the algebra concerning binary separation units. First, a separation unit’s
yield of a particular particle type is defined by a transfer function (or probability, P). The
mass of material in the concentrate stream is simply the product of the yield and the feed
mass (PF), while the transfer function to the tailings stream constitutes the remaining ma-
terial (1−P). By extending this algebra over many units, the recovery for the entire circuit
may be analytically defined in terms of each unit’s recovery. Figure 2.7 shows examples of
this algebra applied to common circuit configurations. The power of all LCA applications is
then derived from the analytical solution.
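
The stream-by-stream algebra is straightforward to carry out numerically. The sketch below (unit transfer functions and the circuit wiring are illustrative) computes the overall transfer function of a rougher-cleaner circuit with the cleaner tailings recycled to the rougher feed by iterating the steady-state stream balance, and compares single-unit and circuit recoveries for a 'mineral' and a 'gangue' particle type.

```python
def circuit_recovery(p, max_iter=200, tol=1e-12):
    """
    Overall transfer function of a rougher-cleaner circuit in which the cleaner
    tailings are recycled to the rougher feed. `p` is the single-unit transfer
    function (probability that a particle of a given type reports to concentrate),
    assumed identical for both units here for simplicity (linearity assumption).
    """
    rougher_feed = 1.0
    for _ in range(max_iter):
        rougher_conc = p * rougher_feed                 # advances to the cleaner
        cleaner_tail = (1.0 - p) * rougher_conc         # recycled to rougher feed
        new_feed = 1.0 + cleaner_tail
        if abs(new_feed - rougher_feed) < tol:
            break
        rougher_feed = new_feed
    return p * p * rougher_feed                         # cleaner concentrate per unit of fresh feed

# Transfer functions for a "mineral" and a "gangue" particle type (hypothetical values)
for label, p in [("mineral", 0.80), ("gangue", 0.20)]:
    print(f"{label}: single unit = {p:.3f}, rougher-cleaner circuit = {circuit_recovery(p):.3f}")
```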
The LCA methodology is constrained by linearity assumptions. Meloy (1983a) presents
a formal definition of these restrictions, but in summary, linearity states that a unit’s par-
tition curve is not influenced by feed composition or feed rate. While this assumption is
not wholly valid for operating units, Meloy states that during the design phase, a larger or
smaller unit may be selected to accommodate the required tonnages. Thus, this approach
is valid for new circuit designs. Furthermore, the same author has suggested that literature
contains support for linearly operated process units and that experimental investigations
have confirmed linearity in some cases (Harris & Cuadros-Paz, 1978; M. Williams & Meloy,
1983; M. Williams et al., 1986).
In the original LCA paper, the analytical solution is used to determine the relative
separation sharpness of a circuit to a single unit (Meloy, 1983a). The slope of the partition
curve is used as a general indicator of separation capability, and Meloy shows that this
slope can be determined for the full circuit by calculating the derivative of the circuit’s
analytical recovery at a value where the circuit recovery equals 50%. From this method,
the incorporation of circulating loads are shown to increase separation sharpness; however,
staged units may affect the cut-point of partition-based separators, even if all units are
operating similarly. Finally, Meloy presents a means of analyzing unit bypass, such as the
entrainment phenomenon witnessed in flotation (King, 2001; Wills & Napier-Munn, 2006).
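
The sharpness argument can be reproduced numerically. For a rougher-cleaner circuit with the cleaner tailings recycled to the rougher feed, the transfer function reduces to C(P) = P^2/(1 - P(1 - P)); the sketch below locates the single-unit probability at which the circuit recovers 50% and evaluates the slope there, which exceeds the single unit's slope of one. The circuit choice and values are illustrative only.

```python
import numpy as np

def circuit(p):
    """Rougher-cleaner circuit with cleaner tailings recycled to the rougher feed."""
    return p**2 / (1.0 - p * (1.0 - p))

p = np.linspace(0.001, 0.999, 100001)
r = circuit(p)

# Single-unit partition value at which the circuit recovers 50% of a particle type
i50 = np.argmin(np.abs(r - 0.5))
slope_circuit = np.gradient(r, p)[i50]

print(f"Circuit recovery = 50% at single-unit P = {p[i50]:.3f}")
print(f"Circuit slope at that point = {slope_circuit:.2f} (single unit slope = 1)")
```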
Meloy (1983b) later expanded upon the analysis procedure to define a methodology for
circuit optimization. In this paper, four functions fundamental to separation processes are
described mathematically: feed, selectivity, composition, and criteria. The former three func-
tions are defined by three variable types, particle property, operational, and compositional,
though not all functions are defined by all variables. Finally, the criteria function defines the
value to be optimized, typically grade or recovery. The optimization then proceeds by (1)
defining the criteria function in terms of the three other functions; (2) differentiating with
respect to the operational variables; (3) setting the resulting derivative equal to zero; and
(4) solving for the operational derivatives. If more than one process variable exists, the pro-
cedures may be expanded by taking partial derivatives of the criteria function with respect
to each operational variable. This array of equations is then set equal to zero and solved
simultaneously. Meloy states that the required data are easily determined by assays or other
experimental studies. Furthermore, the process may be applied to various mineral processing
unit operations, including flotation, gravity separation, magnetic and electrostatic circuits.
As a final contribution, Meloy notes that the optimum grade and the optimum recovery
never occur at the same operational point.
The principles of LCA were also used to analyze dynamic flotation cell models (M. Williams
& Meloy, 1983), multi-feed multistage separators (M. Williams et al., 1986), and the effect
of density variations in heavy media circuits (Meloy, Clark, & Glista, 1986). First, the ana-
lytical circuit solutions derived from LCA were coupled with a dynamic, rate-based lumped
parameter flotation model to analyze the dynamic response of flotation circuits to sinusoidal
feed variations (M. Williams & Meloy, 1983). The authors compared the dynamic behavior
of counter-current and co-current circuits, concluding that co-current circuits are better in
all applications. This result was based on the deficiencies of counter-current circuits, includ-
ing larger required volumes and longer dynamic response times. Finally, co-current flotation
banks were shown to be non-oscillatory, while counter-current circuits exhibit oscillation
frequencies that increase with flotation rate and retention time.
Another paper in the LCA series addresses the optimization of a rougher-scavenger-
cleaner dense-media coal cleaning circuit (Meloy et al., 1986). Here, the authors seek to
address whether the media density in multistage coal cleaning circuits can be optimized
to improve overall performance. The authors note that rougher-scavenger-cleaner circuits
are not common in coal preparation, especially in gravity separation circuits. This design
principle is likely supported by the relatively high separation efficiencies naturally found in
dense-media vessels (Osborne, 1988a, p. 259; Wills & Napier-Munn, 2006, p. 260). Never-
theless, the authors conduct the optimization exercise utilizing a standard partition function
for the selection function of the dense-media separator. This partition function is dependent
on the separation sharpness and the dense-media cut-point. The LCA methodology is used
to determine the product function for the entire rougher-scavenger-cleaner circuit, and an
incremental approach (by taking the second derivative of the analytical expression) is used
to determine the effect of the gravity set point in each unit on the final recovery, grade,
concentrate, and circulating load. This analysis is repeated and the results are plotted as
a function of the units’ original sharpness value. The results show that the best benefit
occurs at relatively low sharpness values. Furthermore, additional benefits can be experi-
enced by increasing the scavenger gravity and decreasing the cleaner gravity. This result
is expected, since such modifications will increase the circulating load to the rougher and
increased circulating loads are known to enhance separation performance.
Collectively, the mathematical approach of LCA is used to derive a common set of
principles which guide separation circuit design. These principles have been summarized,
most recently by McKeon and Luttrell (2005, 2012):
• Only circuit configurations involving recycle to prior units are capable of increasing
the separation sharpness;
• Perfect separation is obtainable as the number of units down the scavenger and cleaner
branch approach infinity;
• Products generated after the first separator should not cross between the scavenger
and cleaner branches of the circuit without first being recycled through the initial
separator;
• Units positioned off of the main scavenger and cleaner legs do not increase separation
sharpness.
These authors go on to show the application of circuit analysis in evaluating and recon-
figuring a heavy mineral sands spiral separation circuit. By adapting and implementing these
principles the authors were able to simplify the plant configuration by reducing the number
of spirals from 686 to 542. Furthermore, the new circuit was able to produce a higher grade
material at an increased recovery. Previously, concentrate material was reprocessed seven
times in order to produce the specified grade at a 93.0% recovery. After the modification,
the circuit was able to obtain a 94.7% recovery at the desired grade in only a single pass
(McKeon & Luttrell, 2012). In other instances similar performance gains have been obtained
by implementing circuit analysis principles to coal spiral separators (Luttrell, Kohmuench,
Stanley, & Trump, 1998) and flotation columns (Tao, Luttrell, & Yoon, 2000).
Despite this evidence for circuit analysis and the value of well configured recycle streams,
some authors have ignored these considerations in their circuit designs. In particular, Poulter
(1993) has described the overhaul of the zinc circuit at the Rosebery concentrator. Among
other advancements involving process mineralogy and feed characterization, the author de-
scribed a “circuit simplification” process which occurred during 1992. The prior flotation
circuit, shown schematically in Figure 2.8a involved three cleaner stages and counter current
flow, recycling each tailings product to the feed of the prior unit. Poulter indicates several
deficiencies inherent to this circuit, including: complicated process control, high circulating
loads, inhibited performance of fast floating material, and little perceived benefit from the
latter cleaner states.
After 1993, the operators installed modifications to the circuit, including split condi-
tioning for the feed and regrind product, froth booster plates, and a revised flowsheet (shown
schematically in Figure 2.8b). Worth noting, when evaluated by the LCA methodology, the
modified circuit represents a much weaker configuration. According to Meloy (1983a), the
modified circuit should have witnessed inhibited separation capability. Nevertheless, after describ-
ing these modifications, the author states that the new circuit design has increased opera-
tional ease and metallurgical performance. The data presented by Poulter (Figure 2.9) shows
increased grade in the latter months of the study; however, further meta-analysis shows that
the new circuit experienced no significant increase in actual separation efficiency (Figure
2.10). While the author has noted the achievement of several auxiliary goals (i.e. increased
Figure 2.8: Schematic circuit configurations for Rosebery flotation plant, circa 1992-1993: (a) original circuit (Roughers 1-2, Scavenger, Cleaners 1-3); (b) modified circuit (Roughers 1-2, Scavengers 1-2, Cleaner 1). Flowsheet after (Poulter, 1993).
process control, reduced uncertainty, reduced flowsheet complexity), increased metallurgical
performance should not be included among them. The gains in equipment retrofitting were seemingly
canceled by the reduction in circuit strength. Despite the errant conclusion, Poulter does
raise the concern that auxiliary process goals (e.g. flowsheet complexity) sometimes trump
simple separation capacity. Currently, LCA does not include a methodology for addressing
these alternative goals.
To supplement their core work in LCA, M. Williams and Meloy later suggested two
alternative approaches to circuit configuration design. Both methods were derived from
theories similar to LCA; however, the authors sought to reduce the cumbersome mathematics
associated with circuit analysis. The first of these methods presents precise definitions for the
common colloquial circuit functions: roughers, scavengers, and cleaners (M. Williams &
Meloy, 1989). According to M. Williams and Meloy, a rougher is a unit whose feed is the
circuit feed, a cleaner is a unit fed by a concentrate stream, and a scavenger is fed by a
tailings stream. In most processing plants, a single unit will fulfill several of these functions.
For example, the rougher in a standard rougher-scavenger-cleaner recycle circuit (Figure
1.5c) is actually a rougher, scavenger, and cleaner, since it is processing feed, concentrate,
and tailings from various units. M. Williams and Meloy argue that a better approach is to
Figure 2.10: Calculated separation efficiency at the Rosebery concentrator during a period of circuit modification (Zn separation efficiency, %, versus date, June 1992 to July 1993, for the original and new circuits).
design circuits so that the individual unit operations are only fulfilling a single function. This
strategy promotes specialized operation for individual cells, since each is pursuing a different
process goal. Furthermore, by developing circuits which exploit specialized functions, the
feed loading to each unit is substantially reduced. In the paper, the authors use LCA to show
four equivalent circuits, each representing a higher degree of specialization. The authors then
use the analytical solution to show the degree to which specialization can reduce feed loading,
and in many cases, increase metallurgical performance (M. Williams & Meloy, 1989).
The second alternative circuit design approach defined mathematical solutions to three
circuit design criteria: (1) the required number of stages, (2) the stage where the feed enters
the circuit, (3) the configuration of the product streams (M. Williams & Meloy, 1991). This
approach begins by assuming a generic cleaner-type circuit of indeterminate size, with each
concentrate advancing serially to the next unit. Tailing streams are recycled to a prior point
in the circuit, such that the grade of the recycle stream is greater than or equal to the
grade at the point of reentry, a principle originally suggested by Taggart, Behre, Breerwood,
and Callow (1945). By establishing this generic superstructure, the three design criteria may
be solved algebraically if four desired/operational parameters are specified: (1) the desired
global product recovery, (2) the desired global ratio of product to waste, (3) the product
to waste ratio achievable for each unit, and (4) the feed component ratio. These algebraic
functions are intended to guide an initial circuit design, since they will inherently produce
non-integer values. By rounding and manipulating different combinations of values, the
design criteria which achieve the desired results may be determined. These configurations
constitute the “feasible designs” from which a more thorough optimization or design process
may originate (M. Williams & Meloy, 1991).
A later reaction paper by Galvez (1998) proposed slight alterations to the “feasibility
method” employed by M. Williams and Meloy. This paper begins by describing poten-
tial pitfalls to the original feasibility method, such as: the assumption of identical transfer
functions for each unit, the conversion of recycle streams to waste streams when the recycle
parameter was ambiguous, and the lack of a standard methodology when non-integer values
were calculated. Rather than first generically defining the number of stages for the entire
plant, Galvez assumes that each circuit will have one rougher stage, and an indeterminate
number of scavenger and cleaner stages. The number of units in each stage is calculated in-
dependently using equations which relate the waste specification to the number of scavenger
stages and the concentrate specification to the number of cleaner stages. Next, the reen-
try point of the concentrate waste streams is determined by implementing the same recycle
principle proposed by Taggart et al. (1945) and employed by M. Williams and Meloy (1991):
namely, the waste stream must be recycled back such that it enters a stream with a lower
or equal grade. An analogous approach is taken for the reentry of the scavenger concentrate
products. After the calculation of these four parameters, Galvez proposed three rules to guide
selection when non-integer values are calculated: (1) the number of recycle stages must be
greater than or equal to one, (2) all recycle streams must be recycled into the circuit (i.e.
no open circuits), and (3) values for the number of cleaner and scavenger units should be
rounded up, unless they are extremely close to the floor value. The final rule provides added
conservatism since the initial calculations do not consider the influence of recycle streams.
Even after these rules are applied, several feasible solutions may persist. In these cases,
Galvez suggests either an economic analysis or a decision based on the separation factor, the
beneficiation ratio, or the valuable component recovery.
Noting the utility of LCA and the analytical solution, M. C. Williams, Fuerstenau, and
Meloy (1992) derived a methodology to rapidly produce analytical solutions to separation
circuits. In this paper, the authors note the drawbacks to traditional circuit analysis, namely
the cumbersome required mathematics, as well as the deficiencies of numerical optimization
approaches, such as the inability to introduce common sense principles from the designer.
This approach, tailored from the principles of graph theory, provides a technique of relating
the recovery of individual units to the full circuit recovery. In their nomenclature, separation
units are designated as modules which are connected by branches. By identifying loops in the
circuit configuration, the overall circuit recovery may be calculated by a standard approach.
The authors present an example from the literature which contains five units and required
the simultaneous solution of 12 equations (Davis, 1964). M. C. Williams et al. suggest that,
when mastered, the graph theory approach should take ten minutes for a similarly-sized
problem.
A recent adaptation of LCA is sensitivity analysis (SA) (Lucay et al., 2012). The au-
thors present SA as an ideal trade-off between empirical and heuristic insight and numerical
optimization strategies. Since global optimization through experiments is nearly impossible,
SA is used to determine the nodes in the circuit which produce the greatest impact. Subse-
quently, empirical insight and experiments can be used to optimize or improve performance
at those nodes. In SA, each unit is examined individually and the final results are compared
to determine the most influential unit. As in LCA, the first required step is to determine
an analytical expression for the circuit yield in terms of each unit operation’s independent
recovery function. In defining this expression, terms referring to units not under scrutiny
are lumped into a single, constant parameter. By mathematically manipulating this global
recovery function, an expression can be determined which indicates if a species is being di-
luted or concentrated, depending on the value of the lumped parameter. Next, the partial
derivative of the global recovery function is determined with respect to the recovery of the
unit under scrutiny. The magnitude of this partial derivative is then determined and plotted
for various expected values of the individual recovery functions. Local minima and maxima
in the plots are noted. This process is then repeated by taking the partial derivative with re-
spect to each unit, the behavior of the plots are identified, and the overall magnitude of each
partial derivative is compared to determine the unit with greatest influence on the circuit.
Unfortunately, the behavior of the sensitivity graph changes, depending on the performance
of other units in the circuit. However, if the general behavior of an operating circuit is
known, SA may be used to determine the unit which merits the most attention. Once the
operation of this unit is altered, the circuit will produce a new high sensitivity unit and the
process may be repeated. Lucay et al. conclude the paper by demonstrating the method on
a hypothetical flotation circuit using a standard perfectly-mixed reactor model.
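To make the SA procedure concrete, the short sketch below computes the partial derivatives numerically for a hypothetical two-unit circuit (a rougher whose concentrate feeds a cleaner, with the cleaner tailings recycled to the rougher feed). The circuit layout, unit recovery values, and function names are illustrative assumptions and are not taken from Lucay et al.

```python
import numpy as np

def circuit_recovery(r_rougher, r_cleaner):
    """Steady-state recovery of a rougher-cleaner circuit in which the
    cleaner tailings are recycled to the rougher feed (hypothetical layout)."""
    return (r_rougher * r_cleaner) / (1.0 - r_rougher * (1.0 - r_cleaner))

def sensitivity(func, recoveries, index, h=1e-6):
    """Central-difference estimate of d(circuit recovery)/d(unit recovery)."""
    lo, hi = list(recoveries), list(recoveries)
    lo[index] -= h
    hi[index] += h
    return (func(*hi) - func(*lo)) / (2.0 * h)

# Compare the influence of each unit over a range of expected unit recoveries
for r in np.linspace(0.5, 0.9, 5):
    d_rougher = sensitivity(circuit_recovery, (r, r), 0)
    d_cleaner = sensitivity(circuit_recovery, (r, r), 1)
    print(f"unit recovery {r:.2f}: dR/dR_rougher = {d_rougher:.3f}, "
          f"dR/dR_cleaner = {d_cleaner:.3f}")
```

In this toy case, the unit whose derivative has the larger magnitude over the expected operating range would merit attention first, mirroring the comparison step described above.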
2.3.4 Numerical Circuit Optimization Methods
Several authors over the last 25 years have used calculus-based optimization and, to a
limited extent, genetic algorithms in the circuit design problem. A comprehensive review on
the application of numerical optimization to circuit design has been recently presented by
Mendez, Galvez, and Cisternas (2009). These authors present circuit design as a synthetic
design process which can (and potentially should) be approached as a traditional engineering
optimization problem. However, as common to many synthesis problems, the initial solu-
tion approach is usually trial-and-error. The industrial result has been non-optimal circuits
which later require substantial plant modification. These retrofits are still based on non-
optimal solutions which in turn introduce new deficiencies. Alternatively, limitations to the
optimization strategies generally arise from insufficient resources, unrealistic process models,
and sporadic laboratory data. Mendez et al. note that historic strategies used circuit sim-
ulation to drive the trial-and-error process. Many times these solutions pursued enhanced
metallurgical performance at the expense of process economics. To overcome
these limitations, modern circuit design research has used numerical optimization to pursue
technical and economic objectives.
Mendez et al. found four approaches to circuit design in the literature. In general,
the circuit designers were tasked with identifying the operational characteristics of each
unit and the interconnection between the units which optimized some technical-economic
objective function. In the first two groups (labeled A and B in the review), an overall
circuit superstructure was first established. In the literature, the superstructure refers to all
possible combinations of circuit configurations. Typically, this superstructure is represented
mathematically by directing the products of each separation unit to a flow distribution
node. This node can ambiguously split the flow to any other point in the circuit. The
optimization routine is then tasked with calculating the proper split portions for these nodes.
In an example, the concentrate of a scavenger may be directed to a flow distribution node.
This node splits the concentrate to either return to the rougher feed or proceed to the final
concentrate. The optimization routine then determines the appropriate split based on the
objective function. This node splitting paradigm is repeated throughout the entire circuit
so that all possible (or plausible) circuit configurations are contained in the superstructure.
Groups A and B of Mendez et al. utilize the superstructure approach, with the distinction
that Group A allows any value for the split portion, while Group B allows only integer
values. As described in the original research, the incorporation of only integer values marks
a substantial increase in the algorithm complexity.
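A minimal sketch of the Group A idea is given below: a single flow distribution node splits the scavenger concentrate between the rougher feed and the final concentrate, and the continuous split fraction is scanned to maximize a simple technical objective. The two-unit layout, the per-species unit recoveries, and the separation-efficiency objective are illustrative assumptions only, not a configuration taken from the reviewed papers.

```python
import numpy as np

def recovery_to_concentrate(r_rougher, r_scav, split_to_rougher):
    """Recovery to final concentrate for a rougher-scavenger circuit whose
    scavenger concentrate passes through a flow distribution node: a fraction
    `split_to_rougher` returns to the rougher feed, the remainder reports
    directly to final concentrate (hypothetical superstructure fragment)."""
    x = split_to_rougher
    feed = 1.0 / (1.0 - x * r_scav * (1.0 - r_rougher))   # recycle loop closed
    return r_rougher * feed + (1.0 - x) * r_scav * (1.0 - r_rougher) * feed

# Assumed unit recoveries: valuable mineral (0.80, 0.60) vs. gangue (0.10, 0.05)
best_se, best_x = max(
    (recovery_to_concentrate(0.80, 0.60, x) - recovery_to_concentrate(0.10, 0.05, x), x)
    for x in np.linspace(0.0, 1.0, 101)
)
print(f"best separation efficiency {best_se:.3f} at split fraction {best_x:.2f}")
```

A Group B formulation would restrict the split fraction to 0 or 1 (the stream either recycles or reports to concentrate), which is what turns the problem into the more difficult integer program noted above.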
Since the superstructure approach often leads to extremely non-conventional circuit
designs, other researchers have attempted more heuristic optimization approaches. Some of
these examples employ additive circuits which continually build up a better configuration
without necessarily optimizing the result. This class of techniques is labeled Group C
by Mendez et al. Finally, Group D includes those researchers who have utilized genetic
algorithms to produce optimal circuit solutions. Mendez et al. state that these papers show
the power of genetic algorithms but do not necessarily identify a global optimum.
Beyond the simple classification scheme, Mendez et al. (2009) provide an exhaustive
analysis of the flotation models and additional design selections incorporated into the opti-
mization algorithms. Options such as regrind mills, existence of column cells, and existence of
feed splitting are compared for each group of techniques. Furthermore, the various objective
functions are listed and compared. Some examples include maximizing recovery, maximizing
grade, maximizing the quantity of valuable species in the concentrate, maximizing net smelter
return, and maximizing profit. These objectives have evolved over time, with recent trends
incorporating capital and operating costs. Nonetheless, many models today are limited by
deterministic projections of uncertain market factors (i.e. mineral selling price). The authors also conclude
that the lack of a comprehensive flotation model limits the state of global circuit optimiza-
tion since the results are largely driven by the models. Finally, a stronger effort needs to be
made in incorporating sustainability within the problem of circuit design.
Many of the papers described by Mendez et al. (2009) are included in the “higher-level”
design approaches shown in Figure 2.6. The remainder of this section will analyze these
papers individually, commenting on factors either omitted or generalized.
One of the early systemic uses of mathematical optimization to set operating param-
eters for a fixed circuit layout is presented by Rong (1992). Earlier work by the same
author had investigated a direct-search technique (Rong & Lyman, 1985), though the 1992
paper described the use of this technique within the framework of a coal preparation flow-
sheet simulator. The simulator is intended to predominantly serve the Chinese preparation
market, and therefore includes models for roll crushers, rotary breakers, jigs, dense-media
cyclones, as well as prepackaged flowsheets (not user-defined). The optimization engine uti-
lizes the Rosenbrock direct-search technique and can identify the optimal screen apertures,
cut densities, flotation time, and circuit layout to optimize the objective function. This
technically-based value relates the simulated final ash to a specified final ash value. The
author does not indicate the number of iterations required to achieve the optimum but does in-
dicate that the solution converges “rapidly even for the complex optimization tested” (Rong,
1992).
Yingling (1990, 1993a, 1993b) highlighted the need for robust mathematical optimiza-
tion in the circuit design problem, despite the nonlinear objective functions and discrete
selection variables which complicate the underlying mathematics. Yingling’s first paper in-
troduces a novel approach to the mathematical representation of the circuit configuration
based on the theory of steady-state evolution in Markov chains. This formalistic approach to
probabilistic separation was formed as an extension of Linear Circuit Analysis (see Section
2.3.3). Yingling notes the desire for an analytical circuit solution (especially in optimization
problems), but discredits the case-by-case algebraic approach taken by Meloy (1983a). In-
stead, Yingling proposes a flowgraph reduction strategy based on elementary reduction rules
(Yingling, 1988). With the Markov assumption, the separation state of a given unit is not
dependent on the prior states of the process. Combining this approach with potential theory
of Markov chains, Yingling is able to produce a more efficient, but mathematically equivalent,
solution for the steady-state behavior of the circuit. This approach incorporates the circuit
superstructure with flow distribution nodes. The state of this superstructure along with the
operational parameters is defined as the circuit control policy which is varied to optimize an
economically-driven reward function. The optimization algorithm proposed by the author is
based on stochastic dynamic programming with extended techniques to account for the mul-
tiple particle classes present in flotation systems. This optimization relies on discrete layout
alternatives, defined by the circuit designer; however, Yingling (1990) is regarded as one of
the first authors to provide a formalistic approach to the circuit superstructure concept and
an economic objective function.
Yingling’s later two-part series (1993a, 1993b) reviewed prior work in circuit optimiza-
tion and extended the original work in Markov chains. Yingling’s review categorized prior
work into two classifications: (1) those that use direct search techniques to optimize the
operational parameters and the circuit layout simultaneously and (2) those that use a two-
stage optimization to first establish the configuration before solving the parameters. Yingling
notes that many of the authors in the first group produce solutions that contain too many flow
streams, as the optimization algorithms blindly attempt to expand the circuit optimization
problem. The second group of authors rarely consider the impact of stream flows in the
circuit configuration step and generally ignore economic considerations. Yingling concludes
that neither approach is ultimately sufficient for the circuit design problem. In response,
the final paper (1993b) extends the procedures developed in the original (Yingling, 1990).
Most notably, a new optimization routine was developed which allows for both discrete and
continuous stream splitting nodes. This algorithm is stated to be more efficient and actu-
ally more robust than many direct search methods which cannot determine the appropriate
number of cells within a flotation bank. A similarly ambiguous, though economically-based,
objective function is used. Examples of the solution robustness are presented.
Further economic factors were later integrated into the circuit optimization objective
function (Schena, Villeneuve, & Nol, 1996; Schena, Zanin, & Chiarandini, 1997). The initial
paper largely builds upon Yingling's inclusion of financial reward functions. Schena et al. crit-
icize Yingling's adherence to the linearity assumption originally proposed by Meloy (1983a).
Schena et al. discuss the available flotation models and the lack of linearity in these models.
The authors further propose the use of a direct-search technique to optimize the profit after
considering capital cost, operating cost, smelting cost, refining cost, and overall revenue.
Constraints may be placed on the minimum acceptable grade as well as other factors, and
the design vector includes the number of cells in the rougher and scavenger bank, as well as
the number of cleaning stages. Cell selection in the case of expanding an existing plant is
handled by weighting existing cells at no capital cost, while unavailable cells are weighted at
exorbitantly high capital costs. Other constraints are liberally applied to reduce the feasible
solution space and enhance the optimization efficiency. The first paper (Schena et al., 1996)
largely introduces these principles in general terms, while the second paper (Schena et al.,
1997) provides more pragmatic analysis. In the second paper, both flotation and grinding
models are included to create optimal circuit configuration from scratch without an initial
recommendation from the circuit designer. The algorithm handles nonlinearities by solving
linearized subproblems, and thus, has the capacity to design a full circuit from merely user
inputted feed and operational data. The authors note that the approach is unfortunately
limited by the fidelity of the process models.
Abu-Ali and Sabour (2003) further formalized the inclusion of economics in optimizing
portions of a flotation circuit by considering the simple case of adding cells to a flotation
bank. They conclude that the optimal bank size is reached when the incremental net present
value of adding a cell is zero. The flotation recovery is determined by a simple perfectly-mixed,
in-series model which accounts for an infinite-time recovery. Equations are derived which
define the capital and operating costs for a bank of cells as a function of cell size and cell
number. The assumptions of the analysis consider that the feed rate, feed grade, and required
grade are known. Furthermore, the mean residence time of the bank remains constant as
the bank size is increased (i.e. smaller cells are used as more units are added to the bank).
The flotation model and operational cost estimation equations are combined to calculate the
present value of the annual revenue as a function of the various operational, contractual,
and assumed parameters. This equation is added to the capital cost estimation to produce
a final expression for the net present value. Finally, the authors evaluate the derivative of
the net present value and solve for the number of cells which causes the derivative to equal
zero. This value is denoted as the optimal solution. For the hypothetical low grade copper
example, the optimal bank size to achieve a target 80% recovery was evaluated to be 16 cells,
each having a volume of 24 cubic meters.
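The marginal reasoning can be sketched as below, using the perfectly-mixed, in-series recovery model at a fixed total bank residence time and scanning the cell count for the largest net value. The rate constant, residence time, and cost coefficients are placeholder assumptions for illustration and are not the values used by Abu-Ali and Sabour.

```python
def bank_recovery(n_cells, k, total_tau):
    """Recovery of n perfectly-mixed cells in series when the total bank
    residence time is held constant (smaller cells as the count grows)."""
    tau_cell = total_tau / n_cells
    return 1.0 - (1.0 + k * tau_cell) ** (-n_cells)

# Placeholder economics (illustrative only)
annual_revenue_at_full_recovery = 5.0e6   # $/yr if recovery were 100%
annualized_cost_per_cell = 1.5e5          # $/yr capital + operating, per cell

def net_value(n_cells, k=0.5, total_tau=10.0):
    """Annualized value of the bank; the optimum is where adding one more
    cell no longer increases this number (discrete analog of dNPV/dn = 0)."""
    return (annual_revenue_at_full_recovery * bank_recovery(n_cells, k, total_tau)
            - annualized_cost_per_cell * n_cells)

best_n = max(range(1, 31), key=net_value)
print(best_n, round(bank_recovery(best_n, 0.5, 10.0), 3))
```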
A novel optimization strategy, based on the McCabe-Thiele technique for multi-column
distillation, was presented by Hulbert (1995). The author introduces a new structure for
the modeling of counter-current flotation based on so-called “enrichment functions.” This
approach is comparable with the rate principles of flotation modeling, though it does not
directly consider rate constants and residence times which lead to nonlinear, numerical op-
timization. Rather, the recovery relationships are based on concentration, similar to
chemical equilibrium processes. For the case of flotation, the concentration of mineral in
the concentrate is shown to be a function of the concentration in the feed and the rate of
removal or mass pull (which can then be related back to operational parameters such as
air flow or reagent dosage). The result is that an analytical optimum can be determined
for a counter-current flotation system by evaluating the partial derivative of the enrichment
function. McCabe-Thiele “staircase” diagrams can be determined for operating plants to
assist in interpreting the optimization. From the exercise, the author defines optimal per-
formance by the following heuristic: at the optimum, small changes in the mass pull of each
internal concentrate stream must not alter the concentration of any other stream nor alter
the concentration of the local pulp (Hulbert, 1995).
Finally, two fully comprehensive approaches to circuit optimization have been presented.
The first uses mixed integer linear programming (MILP) to determine the circuit configu-
ration, bank vs. column selection, regrind selection, and operational parameters (Cisternas,
Gálvez, & Mendez, 2005; Cisternas, Méndez, Gálvez, & Jorquera, 2006). The second uses the
elitist, binary-coded, non-dominated sorting genetic algorithm with the modified jumping
gene (NSGA-II-mJG) to simultaneously solve the configuration and the operational parame-
ters of a flotation system (Guria, Verma, Gupta, & Mehrotra, 2005; Guria, Varma, Mehrotra,
& Gupta, n.d.).
The first approach (Cisternas et al., 2005, 2006) uses a hierarchical superstructure to
determine the circuit configuration. The highest level (the separation task superstructure)
is composed of three subsystems: the feed processing superstructure, the tail processing
superstructure, and the concentrate processing superstructure. The separation tasks super-
structure controls the relative splits between the three components. For example, a flow
distribution node in the separation tasks superstructure controls the amount of the feed
processing superstructure’s tailings which proceed to final tailings versus the amount that
enters the tailings processing superstructure. The individual subcomponents then have
similarly designed components consisting of individual bank cells, as well as an equipment
selection superstructure which decides upon potential regrind mills or column cells when
appropriate. A simple perfectly-mixed in series model is used to determine the bank cell
recovery, while an axially-dispersed reactor model is used for column cells. The objective
function is financially based, calculating the Net Smelter Return as a function of refining
charges, grade penalties, operating hours, feed rate, capital and operating cost for the equip-
ment, and revenue generated for the concentrate product. An application example is shown
to demonstrate that, unlike prior superstructure-based optimization, this MILP model typ-
ically does not produce large-scale stream splitting or high numbers of individual streams.
A sensitivity analysis shows that the metal price is a significant factor in the optimization;
however, the valuable mineral mass distribution (i.e., grade) may be more significant. The
results imply that a specific circuit design may only be valid for a given mineral price and
feed condition.
The second comprehensive approach (Guria et al., 2005, n.d.) allows multiple objective
optimization via specific genetic optimization algorithms (NSGA-II-mJG). The authors show
that four earlier examples in the literature used gradient-based or direct search techniques
which eventually converged to a local optimum. Conversely, the genetic algorithm described
in the paper produced superior circuit configurations while pursuing the same objective
function. The optimization routine accounts for standard assumptions in flotation modeling,
including perfectly-mixed reactors and rate constant distributions for particle species. The
objective function is defined by the profit of producing material at a certain grade, with a
penalty for values below the contract value. No equipment costs are considered. Constraints
are set on the total plant size, the loss of valuable mineral to the final tailings, the existence
of split streams, as well as other case-specific parameters. Specific details of the four example
problems are presented. The authors report the solution time for each problem which ranged
from 4.5 to 9 hours on standard desktop computers using 100,000 generations in the genetic
algorithm (Guria et al., 2005).
In the follow up paper (Guria et al., n.d.), the authors describe methodologies for
optimizing multi-objective functions using the same NSGA-II-mJG algorithm. The number
of simultaneous objectives ranged from two to four, including maximizing recovery at a fixed
grade, minimizing number of streams, and minimizing the total cell volume. These examples
further demonstrate the robustness of the optimization technique in identifying potential
circuit designs which may be selected from the designer’s experience.
2.4 Summary and Conclusions
This paper has reviewed the methodologies for separation circuit design in the mineral
processing industry. Over the last century, mineral beneficiation has grown from a rudimen-
tary, laborious art to an efficient, highly mechanized industrial process. In this period of
growth, the froth flotation process has advanced as the most utilized and robust separation
process in the industry. Full understanding of the flotation process requires deep consider-
ation of the chemical and physical transport phenomena driving the various subprocesses.
The desire to understand and optimize the flotation process has led to a more fundamental,
rather than empirical, approach to process engineering. The consequences of this transition
have also led to various benefits in the optimization of all unit operations.
One engineering problem common to mineral processing is the design of the separation
circuit. Since all separation units are inherently imperfect, individual units are staged in an
attempt to produce synergistic efficiencies so that the final circuit product can meet contract
specifications. In order to design a process circuit, four questions must be addressed: (1) the
selection of the appropriate separation process(es); (2) the selection of the number and size
of individual units; (3) the selection of the various operational parameters for each unit; and
(4) the configuration of the flows between units. While many circuit designers approach these
questions sequentially, a more comprehensive methodology must realize the interdependence
of the various selections and answer these questions simultaneously.
Circuit designers have access to a number of process engineering tools which can aid
in the design process. Today, most circuit design is driven by computer simulation which
requires extensive information on the expected feed conditions, the operational details of the
equipment, the desired circuit layout, and the process models which relate all of these pa-
rameters to the quantity and quality of the final product. In this design approach, laboratory
data is typically collected and analyzed first.
Several approaches to process modeling have been used with success in the past. The
most common in the mineral processing industry is simple empirical modeling. In this ap-
proach, the process model is simply an arbitrary curve which best interprets the existing
data. The model does not inherently reflect the physics of the process; however, empirical
models are easy to develop and can be related to a number of operational parameters given
sufficient experimental data. One common fallacy is using empirical models to extrapolate
beyond the experimental range. Since the model has no knowledge of the physics, gross error
is common once the process transitions to a different operational condition. Phenomenologi-
cal models overcome many of these drawbacks while balancing utility and development time.
This modeling approach uses the physical subprocesses to define the functional forms, while
still using experimental data to determine the final model parameters. Phenomenological
models are more difficult to develop than empirical fits, but they are less prone to gross
error when extrapolation is required. The higher-order fundamental models extend this
concept using theory to completely define the model form and parameters. Unfortunately,
fundamental models are largely immature, given the complexity of most mineral separation
processes.
The literature defines several methodologies for optimizing the circuit configuration and
parameters. Before the original insurgence of modeling and simulation, most circuit config-
urations were designed from historic and legacy perspectives. This empirical evidence led to
simple heuristics which imposed design rules, based on prior results. Modeling and simula-
tion led to more sophisticated and scientifically-based heuristics; but the final solutions are
strongly dependent on the applicability of the underlying assumptions and the robustness
of the process model. During this time, circuit analysis ascended as an alternative method-
ology which considered the fundamental capacity of the circuit itself, omitting the need for
a vetted process model. The ultimate adaptation to circuit analysis is realized in numeric
circuit optimization. Once again, process models must be known, but this approach allows
the sensitivity of the model to be analyzed with respect to the final solution. With the
widespread availability of high-performance desktop computers, numeric optimization has
become more accessible, and various, highly sophisticated optimization methods have been
developed exclusively for the circuit design problem.
From this review, four key opportunities for further research are:
1. The data analysis and simulation of separation circuits utilizes engineering tools which
are common to many other disciplines, most notably in the area of numeric methods.
Very little work has analyzed the effect of sensitivity or error propagation that these
methods inherently impose. Few authors have investigated the influence of uncertainty
on simulation. The breakdown between systemic uncertainty (i.e. from data fits, error
propagation) and natural uncertainty (feed variations) has not been discussed.
2. No consensus exists on the objective function utilized in the circuit optimization prob-
lem. While the objective has evolved over time from a purely technical value to a
financially-based optimum, researchers have still not agreed on the best value to opti-
mize. Questions remain on whether and how operating costs, capital costs, the cost of
more complex circuits, and sustainability costs should be incorporated into an opti-
mization routine.
3. While many authors have expressed the utility of an analytical circuit solution, no
author has provided a simple, computer-based algorithm capable of producing one for
a user-defined circuit. Therefore, the utility of the analytical solution is severely limited
by the inability to quickly produce solutions for alternate circuit designs.
4. Despite the availability of circuit optimization and analysis methods, none have gained
sufficient utilization in industrial circuit designs or modifications. This result is likely
due to the perceived complexity or the lack of applicability which accompanies the
current methods.
Chapter 3
Development of a Flotation Circuit
Simulator Based on Reactor Kinetics
(ABSTRACT)
A robust and user-friendly flotation simulation software package (FLoatSim) was de-
veloped to provide a numerical approach to flotation circuit design. This simulation soft-
ware incorporates a unique four-reactor modeling paradigm which considers rate-based pulp
recovery, non-selective froth recovery, partition-based entrainment recovery, and physical
carrying capacity limitations. Each of the four sub-models is defined by well-published
and industry-accepted principles. The final software package includes two data analysis and
parameter estimation modules which extract information from batch or continuous flow test-
ing. The resulting data is imported into the primary simulation program, which provides
flowsheet construction tools, unique calculation algorithms, and stream legend data visual-
ization. This chapter describes the modeling approach, simulation strategy, and software
user interface development. A final case study is presented and analyzed to demonstrate the
software’s applicability to a coal flotation scale-up problem.
3.1 Introduction
Currently, process modeling and circuit simulation are the most common engineering
tools used during the circuit design process. When well formulated and appropriately used,
models and simulations can predict ultimate circuit performance as a function of various
operational inputs. This capability supports a trial-and-error design approach, where the
circuit designer can propose a potential circuit solution (often from prior experience) and
then use the simulator to evaluate the final performance. If this performance is inadequate,
other potential solutions may then be proposed and simulated. While labor intensive, this
approach provides tangible performance criteria (i.e. circuit recovery, grade) by which the
circuit designer can base a final decision.
While often used interchangeably, the terms modeling and simulation distinctively refer
to two independent but related tasks. Modeling denotes the act of describing physical
processes via mathematical equations, while simulation signifies the act of solving the model
equations to predict future performance. The aptitude of a given process model is most
readily described by the model’s fidelity. In general, fidelity refers to the ability of a model
to successfully portray real physical systems. In mineral processing, the model fidelity is
often described as empirical, phenomenological, or theoretical, with higher fidelity reflecting
increased knowledge of the relevant physical subprocesses (See Chapter 2.2).
Alternatively, the aptitude of a simulation is driven by resolution. For mineral pro-
cessing simulations, resolution is analogously described as the level of data discretization.
Process models often relate separation performance to the physical properties of the system’s
particles. Since the actual properties of every particle include an infinite range of continuous
values, simulations often lump similar particles into a finite number of particle classes. The
model equations are then solved for each class of particle rather than for each particle inde-
pendently. This truncation introduces systemic error which is inversely proportional to the
resolution or number of particle classes. A greater number of particle classes will generally
produce a more realistic simulation, in the same way that a photograph with a higher num-
ber of pixels will produce a clearer image. Discretization is the decision of how these particle
classes may be formed while balancing the computational efficiency, data availability, and
systemic error.
This chapter describes the development of a robust froth flotation circuit simulation
software package (FLoatSim). The software includes a kinetics-based flotation model, suit-
able for scaling laboratory and plant data to full-scale user-defined circuits. This model uses
a novel four-reactor framework, while incorporating widely-published and industry-accepted
subprocess models. The software provides tools to optimize and scale these these models for
case-specific flotation systems through laboratory testing. This chapter describes the model
theory, simulation theory, and software interface unique to the FLoatSim simulator. The
approach in this section is largely deductive. The holistic framework and global models are
described first, while the proceeding discussions focus on the constituent components and
sub-models.
3.2 Modeling Theory
3.2.1 Overall Recovery
The FLoatSim software uses a unique four-reactor flotation model framework which
combines industry-accepted rate models, partition models, and physical re-
strictions. The overriding assumption in this paradigm is that four predominant factors
contribute to flotation recovery: pulp recovery, froth recovery, entrainment, and carrying
capacity. In the FLoatSim model, these factors (with the exception of carrying capacity)
have been modeled independently. Small changes in the value of one factor do not directly
influence the value of the other two. Nevertheless, indirect influences may persist due to the
nature of the carrying capacity model (i.e. increased pulp recovery may cause the overall
recovery to exceed the carrying capacity limitation, which would, in-turn, cause a reduction
of froth recovery).
The interdependence of these four reactors is shown schematically in Figure 3.1. Ma-
terial recovered from the pulp reports to the froth and is then eligible for recovery to final
concentrate. Material not recovered in the froth is returned to the pulp feed and may be
recovered or rejected from the pulp. The pulp tailings reports to the entrainment reactor.
Material recovered via entrainment bypasses the froth stage and is eligible for direct recovery
to the final concentrate. Material rejected in the entrainment reactor reports to the final
tailings. All material recovered from the froth and entrainment reactors is finally subjected
to the carrying capacity restriction. This reactor imposes a maximum achievable concentrate
flow rate. Material recovered in excess of this restriction is returned to the flotation cell feed.
From a modeling perspective, the froth and pulp reactors are represented by rate models,
the entrainment reactor is represented by a partition model, and the carrying capacity is a
conditional restriction.
Using this serial arrangement of unit reactors, the analytical expression for recovery to
final concentrate ($R_{Final}$) is derived as a function of pulp recovery ($R_p$), froth recovery ($R_f$),
and entrainment recovery ($E$):
$$R_{Final} = \frac{R_f R_p (1-E)}{1-(1-R_f)R_p} + E. \tag{3.1}$$
3.2.2 Carrying Capacity
In real flotation cells, physical limitations, such as the carrying capacity, may prevent
the flotation cell from achieving the recovery value calculated in Equation 3.1. Carrying
capacity (CC) is the maximum concentrate mass flow rate (i.e. tonnes per hour) and is
theoretically a function of the cell's gas flow rate ($Q_g$), the particle size ($D_p$), the bubble size
($D_b$), and a bubble-particle packing efficiency ($\beta$). When the expression is simplified, the
theoretical maximum carrying capacity is also a function of the bubble surface area flux ($S_b$)
and the particle density ($\rho$):
$$CC_{Theoretical} = \frac{4\,Q_g D_p \rho \beta}{D_b} = \frac{2}{3} S_b D_p \rho \beta. \tag{3.2}$$
Pragmatically, other factors, such as the froth removal rate, total froth surface area, and
the cell’s weir lip length also factor into the maximum carrying capacity. In the FLoatSim
software, the carrying capacity is calculated from a user-specified unit carrying capacity
value (tonnes per hour of concentrate per square meter of froth area). This number is
highly application specific, given the effect of particle size and density on carrying capacity
(Equation 3.2). Empirical relationships or prior process knowledge define this value for a
given simulation.
Once the unit carrying capacity and the cell dimensions are defined, the total carrying
capacity (CC, given in tonnes per hour of concentrate) is calculated. This number is then
compared to the total mass flow of concentrate ($R_{Final} \cdot Feed$) for all flotation classes ($i$):
$$\sum_{i=1}^{N} R_{Final,i} \cdot Feed_i \leq CC \tag{3.3}$$
If the normal cell recovery exceeds the carrying capacity limitation, the recovery must
be reduced until the restriction is met. This reduction is assumed to take place in the
froth. Namely, the froth recovery (described in Section 3.2.4) is incrementally reduced until
the carrying capacity restriction is met. The FLoatSim software uses a non-trivial matrix
application of Newton’s method to solve the froth recovery value which forces the total
recovery (summed from each particle class) to be equal to the carrying capacity restriction.
Since froth recovery is inherently non-selective, the reduction due to froth recovery is also
non-selective.
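The restriction can be illustrated with the sketch below, which uses a scalar bisection search in place of the matrix Newton scheme implemented in FLoatSim: the common froth recovery is reduced until the summed concentrate flow meets the carrying capacity. The feed tonnages, class recoveries, and carrying capacity value are arbitrary example inputs.

```python
def final_recovery(r_pulp, r_froth, entrainment):
    """Recovery to final concentrate for one class (Equation 3.1)."""
    return (r_froth * r_pulp * (1.0 - entrainment)
            / (1.0 - (1.0 - r_froth) * r_pulp)) + entrainment

def concentrate_tph(r_froth, classes):
    """Total concentrate flow for (feed_tph, pulp recovery, entrainment) classes."""
    return sum(feed * final_recovery(rp, r_froth, e) for feed, rp, e in classes)

def constrain_froth_recovery(classes, r_froth_in, carrying_capacity, tol=1e-6):
    """Reduce the (non-selective) froth recovery by bisection until the
    concentrate flow no longer exceeds the carrying capacity."""
    if concentrate_tph(r_froth_in, classes) <= carrying_capacity:
        return r_froth_in                      # restriction not active
    lo, hi = 0.0, r_froth_in
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if concentrate_tph(mid, classes) > carrying_capacity:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Three particle classes: (feed tph, pulp recovery, entrainment)
classes = [(40.0, 0.90, 0.05), (35.0, 0.75, 0.10), (25.0, 0.40, 0.20)]
print(round(constrain_froth_recovery(classes, 0.80, carrying_capacity=45.0), 4))
```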
3.2.3 Pulp Recovery
Pulp recovery (often distinguished as true recovery) represents the fraction of mate-
rial which is transported from the pulp to the froth via bubble-particle attachment. As
described in Chapter 2.2.3, the recovery of particles in a flotation cell is generally accepted
to be a rate-based process and is modeled analogously to a chemical reaction. Traditional
flotation models use the plug-flow model to describe the batch cell and the perfectly-mixed
model to describe the industrial cell. However, recent trends have shown drastic increases
in the size of industrial flotation cells (Noble, 2012). Larger flotation cells tend to deviate
(sometimes catastrophically) from perfectly-mixed behavior, especially as the cell’s power
intensity (kW/m3) is reduced. Consequently, the perfectly-mixed assumption used in tradi-
tional flotation models may not be appropriate for contemporary large commercial flotation
cells.
To account for deviations from the perfectly-mixed assumptions, Levenspiel’s (1999)
axially dispersed reactor model for intermediate flows is utilized in the FLoatSim model.
This model uses the Peclet number (Pe) as an indicator of tank mixing. Residence time
studies are required to derive the Peclet number, and typical values for large conventional
cells range from 1 to 4 (smaller Peclet numbers indicate that a tank is more well-mixed).
Once the Peclet number is known for a given cell, the pulp recovery ($R_p$) for a given mineral
class may be calculated from the cell residence time ($\tau$) and the mineral's kinetic coefficient
($k$):
$$R_p = 1 - \frac{4A\exp\{Pe/2\}}{(1+A)^2\exp\{(A/2)Pe\} - (1-A)^2\exp\{(-A/2)Pe\}} \tag{3.4}$$
$$A = \sqrt{1 + 4k_p\tau/Pe}.$$
The FLoatSim flotation model utilizes laboratory data to predict full-scale performance.
To account for changes in the bubble surface area flux ($S_b$) between the two scales, the kinetic
coefficient determined from laboratory testing ($k_{lab}$) is scaled by a user-defined $S_b$ ratio prior
to being used in Equation 3.4:
$$k_p = k_{lab}\left(\frac{S_{b-FullScale}}{S_{b-LabScale}}\right). \tag{3.5}$$
Flotation residence time is determined by the calculated feed rate ($Q_{Feed}$). This ap-
proach typically produces a conservative solution as opposed to using the flow rate of the
tailings. The overall cell volume ($V_{Total}$) is de-rated to account for the user-defined air holdup
($\varepsilon$):
$$\tau_p = \frac{V_{Total}(1-\varepsilon)}{Q_{Feed}}. \tag{3.6}$$
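A direct transcription of Equations 3.4 through 3.6 is sketched below; the cell volume, air holdup, feed rate, Peclet number, and laboratory rate constant are arbitrary example values rather than recommended inputs.

```python
import math

def pulp_recovery(k_lab, sb_ratio, cell_volume, air_holdup, q_feed, peclet):
    """Axially dispersed reactor recovery for the pulp zone (Equations 3.4-3.6)."""
    k_p = k_lab * sb_ratio                              # Eq. 3.5
    tau = cell_volume * (1.0 - air_holdup) / q_feed     # Eq. 3.6
    a = math.sqrt(1.0 + 4.0 * k_p * tau / peclet)       # Eq. 3.4
    num = 4.0 * a * math.exp(peclet / 2.0)
    den = ((1.0 + a) ** 2 * math.exp(a * peclet / 2.0)
           - (1.0 - a) ** 2 * math.exp(-a * peclet / 2.0))
    return 1.0 - num / den

# Example: 100 m3 cell, 15% air holdup, 10 m3/min feed, Pe = 2, k_lab = 1.2 1/min
print(round(pulp_recovery(1.2, 0.5, 100.0, 0.15, 10.0, 2.0), 3))
```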
3.2.4 Froth Recovery
Froth recovery (often inversely described as froth drop-back) is the portion of material
previously recovered from the pulp phase which ultimately survives the froth phase and is
recovered to the final concentrate. Many researchers have described froth recovery ($R_f$) with
a plug-flow reactor model (e.g., Gorain, Harris, Franzidis, & Manlapig, 1998; Mathe, Harris,
O'Connor, & Franzidis, 1998; Yianatos, Bergh, & Cortes, 1998; Vera, Franzidis, & Manlapig,
1999; Vera et al., 2002; Yianatos, Moys, Contreras, & Villanueva, 2008):
$$R_f = \exp(-k_{DB}\,\tau_f) \tag{3.7}$$
where $k_{DB}$ is the rate of froth drop-back, and $\tau_f$ is the froth residence time. While most
researchers agree on the functional form, much debate has surrounded the calculation of $k_{DB}$
and $\tau_f$.
Repeated experimental evidence has shown that $k_{DB}$ is the same for all mineral classes
in a flotation system (Yianatos et al., 2008). Consequently, froth recovery is described as
a non-selective process. Simply, all mineral classes, regardless of hydrophobicity or pulp
recovery rate, are expelled from the froth at the same rate.
Most contemporary flotation models use one of two methods to define froth residence
time. The first method describes froth residence time as the ratio of the froth height to the
superficial gas rate ($\tau_f = H/J_g$) (Gorain et al., 1998); whereas, the second method
uses the ratio between the froth volume and volumetric flow of concentrate ($\tau_f = V_f/Q_c$)
(Vera et al., 2002). While the latter option produces a better fit to experimental data,
it requires knowledge of the concentrate flow rate. For simulation purposes, this value is
difficult to predict without first knowing the froth recovery and the water recovery. While
these values are known in cell diagnostic studies, accurate simulation would require foreknowledge
of the anticipated solution, thus eliminating the need for simulation altogether.
Alternatively, the former calculation ($\tau_f = H/J_g$) includes values which are known prior to
simulation.
To allow different calculations of the froth recovery, the current FLoatSim model in-
cludes froth recovery as a direct input to the simulation. Nevertheless, to coincide with the
appropriate functional form, the inputted value is scaled according to the inputted $S_b$ ratio,
which reflects the dependence of froth residence time on gas flow rate. Using the plug-flow
model, the $S_b$ adjusted froth recovery rate ($R_{f,Adj}$) is calculated:
$$R_{f,Adj} = \exp\left\{\left(\frac{1}{SBR}\right)(-k_{DB}\,\tau_f)\right\}. \tag{3.8}$$
Since the original froth recovery is an input to the simulation, $k_{DB}$ and $\tau_f$ are not known
explicitly. Rather, the combined parameter ($k_{DB}\,\tau_f$) may be calculated from the inputted
froth recovery ($R_{f,Input}$) by mathematically manipulating Equation 3.7:
$$(k_{DB}\,\tau_f) = -\ln[R_{f,Input}]. \tag{3.9}$$
By substituting the combined value of ($k_{DB}\,\tau_f$) calculated in Equation 3.9 into Equation 3.8,
the simplified calculation for $R_{f,Adj}$ is produced. FLoatSim uses this equation to calculate
the ultimate froth recovery from the inputted values $R_{f,Input}$ and $SBR$:
$$R_{f,Adj} = \exp\left\{\left(\frac{1}{SBR}\right)\ln[R_{f,Input}]\right\}. \tag{3.10}$$
3.2.5 Entrainment and Water Recovery
Entrainment is a non-selective recovery mechanism whereby particles which are not
attached to air bubbles are carried into the concentrate by the flow of water. Given their
reduced inertial resistance, low density and fine particles have a much higher susceptibility
to entrainment. Recovery via entrainment (E) is known to be proportional to the recovery
of water ($R_{Water}$) and a degree of entrainment factor ($DoE$) (Vianna, 2011):
$$E = R_{Water}\,DoE. \tag{3.11}$$
In the FLoatSim simulator, the DoE factor is determined by size class from the labo-
ratory kinetics testing, or a user-defined value may be specified. Given the aforementioned
theory, this factor is expected to decrease as particle size increases.
The water recovery is determined using a two-reactor model similar to the four-reactor
particle recovery model (Figure 3.1) with the omission of the entrainment reactor (i.e. water
cannot be “entrained” to the concentrate) and the carrying capacity restriction. Water
recovery from the two-reactor model may be calculated from the water pulp recovery ($R_p$)
and the water froth recovery ($R_f$):
$$R_{Water} = \frac{R_f R_p}{1-(1-R_f)R_p}. \tag{3.12}$$
Equation 3.1 reduces to Equation 3.12 when E = 0. The water pulp recovery and water
froth recovery are calculated by the same methodology used for particle recovery (Equation
3.4 and Equation 3.9). The kinetic coefficient for water recovery may be determined from
a laboratory batch flotation test which tracks the mass recovery of water along with the
particle recovery.
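The froth, entrainment, and water sub-models combine with Equation 3.1 as in the sketch below; the recoveries, degree of entrainment, and Sb ratio are arbitrary example inputs chosen only to show the calculation order.

```python
import math

def froth_recovery_adjusted(r_froth_input, sb_ratio):
    """Sb-adjusted froth recovery (Equation 3.10)."""
    return math.exp(math.log(r_froth_input) / sb_ratio)

def water_recovery(r_pulp_w, r_froth_w):
    """Two-reactor water recovery (Equation 3.12)."""
    return (r_froth_w * r_pulp_w) / (1.0 - (1.0 - r_froth_w) * r_pulp_w)

def overall_recovery(r_pulp, r_froth, entrainment):
    """Four-reactor recovery to final concentrate (Equation 3.1)."""
    return (r_froth * r_pulp * (1.0 - entrainment)
            / (1.0 - (1.0 - r_froth) * r_pulp)) + entrainment

r_f = froth_recovery_adjusted(r_froth_input=0.60, sb_ratio=1.5)
e = water_recovery(0.30, 0.60) * 0.35        # Eq. 3.11 with an assumed DoE of 0.35
print(round(overall_recovery(r_pulp=0.85, r_froth=r_f, entrainment=e), 3))
```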
3.3 Simulation Theory
3.3.1 Model Discretization
In order to solve the model equations as a circuit simulation, three aspects of the
simulation methodology must be established: the degree of model discretization, the model
parameters, and the calculation strategy. As mentioned above, discretization directly refers
to simulation resolution. The models presented in the prior section only apply to individual
particles with identical physical properties and kinetic coefficients. To solve these equations,
the particles must be grouped into a finite number of classes, with each class representing
a group of particles which behave similarly. The number and type of flotation classes must
balance the data limitations and the desired simulation accuracy. A larger number of classes
will produce a more realistic simulation; however, more extensive data must be acquired and
analyzed.
By default, the FLoatSim simulator incorporates three dimensions of discretization.
Each dimension correlates to a parameter which is known to influence flotation performance
and has values that can be easily identified in laboratory analysis. Each dimension has a
standard resolution limit within the FLoatSim software:
1. Particle Size. Size-by-size analysis of batch flotation data shows that particles of
different size classes generally float at different rates. This observation is especially
true for particles less than 10 microns and greater than 200 microns. Additionally,
small particles less than 10 microns will witness a significantly increased degree of
entrainment. FLoatSim allows up to 10 particle classes.
2. Mineral Type. In multicomponent flotation systems, particles of different mineral types
are known to float at different rates. For example, in a three component system consisting
of chalcopyrite, molybdenite, and gangue, a different set of kinetic coefficients should
be determined for each of the three components. FLoatSim allows up to 4 valuable
mineral classes with an ever-present “other” gangue class.
3. Floatability Class. Particles of the same mineral type and size class still exhibit slight
variations in flotation rate due to numerous known and unknown factors (collector
adsorption, particle shape, degree of oxidation, etc.). To retain simplicity, all of these
factors are generally lumped into a single discretization class known as floatability.
FLoatSim allows up to three floatability classes which are given the generic designations
fast-floating, slow-floating, and non-floating.
Each discretized element (e.g. 35 micron fast floating chalcopyrite) is characterized by
its mass percent of the total feed and a pulp kinetic coefficient. These values are determined
through the data fitting of the laboratory testing. Other means (e.g. QEMSEM) may be
applied but are not included in the default FLoatSim package. The grade of the discretized
element is determined by the mineral type, and the degree of entrainment is identical for all
classes of a similar particle size. The froth recovery is identical for all particles, thus invoking
the non-selective assumption for froth drop-back.
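One convenient way to hold the discretized elements is a flat list of records, one per size-mineral-floatability combination; the field names and placeholder values below are illustrative and do not reflect FLoatSim's internal data format.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class FlotationClass:
    size_um: float        # nominal particle size of the class
    mineral: str          # mineral type; an "other" gangue class is always present
    floatability: str     # fast-, slow-, or non-floating
    mass_fraction: float  # fraction of total feed mass
    k_pulp: float         # pulp kinetic coefficient, 1/min

sizes = [15.0, 75.0, 250.0]
minerals = ["chalcopyrite", "other"]
floatabilities = ["fast", "slow", "non"]

# Build the full grid with placeholder parameters to be filled by data fitting
elements = [FlotationClass(s, m, f, mass_fraction=1.0 / 18, k_pulp=0.0)
            for s, m, f in product(sizes, minerals, floatabilities)]
print(len(elements), "discretized elements")
```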
3.3.2 Model Fitting and Parameter Estimation
After the data discretization strategy has been identified, the kinetic coefficients and
mass proportions must be determined for the flotation system under inspection. These pa-
rameters are best estimated from batch kinetics tests conducted with feed material and
chemical dosages which most closely resemble the expected plant conditions. The batch
kinetics test with mass balanced size-by-size recovery, grade, and water recovery data as a
function of time may be used to establish all of the parameters needed for a plant simulation.
The software package includes a laboratory data fitting module (LabDataFitting) which es-
timates the kinetic and mass proportion parameters by weighted sum-of-the-squared-error
minimization between the experimental data and a plug-flow reactor model applicable for
batch systems. Recovery between the various classes is summed and used with the experi-
mental grade data to determine the mass proportions of the various mineral and floatability
classes.
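The fitting step can be sketched as below for a single size class: a fast/slow/non-floating plug-flow batch model is fitted to timed recovery data by least squares (weights omitted for brevity). The data points, starting guesses, and the fixed non-floating fraction are invented purely to illustrate the mechanics, and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit

def batch_recovery(t, m_fast, k_fast, k_slow):
    """Plug-flow batch model with fast, slow, and non-floating fractions.
    The non-floating fraction is fixed here for brevity."""
    m_non = 0.05
    m_slow = 1.0 - m_fast - m_non
    return (m_fast * (1.0 - np.exp(-k_fast * t))
            + m_slow * (1.0 - np.exp(-k_slow * t)))

# Illustrative timed cumulative recovery data (minutes, fraction); not experimental
t_data = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
r_data = np.array([0.42, 0.60, 0.74, 0.83, 0.90])

popt, _ = curve_fit(batch_recovery, t_data, r_data,
                    p0=[0.6, 2.0, 0.3], bounds=([0, 0, 0], [0.95, 10.0, 5.0]))
print(dict(zip(["m_fast", "k_fast", "k_slow"], np.round(popt, 3).tolist())))
```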
Continuous flow data (full-scale, pilot-scale and locked-cycle testing) may be used ad-
ditionally or alternatively to batch kinetics data. The FLoatSim PlantDataFitting module
estimates the kinetic parameters from mass balanced plant data and equipment operating
conditions (cell volume, cell Peclet number, etc.). Similar to the lab data fit, this module
minimizes the weighted sum-of-the-squared errors between the experimental data and the
full-scale, four-reactor flotation cell model described in the prior section (Equation 3.1).
While full-scale and pilot-scale data minimize scaling uncertainty, they are often derived
from tests conducted at a single residence time or plant operating point. This lack of time-
dependent data decreases the amount of available information used to fit the models. As a
result, the validity of the model decreases rapidly as the simulations deviate from the tested
residence time (see Section 4). Furthermore, additional mineralogical data or assumptions
on the floatability class distribution must be invoked in order to properly determine the
mass proportions of the discretized elements. In the absence of this information, the data
fit may only be used to determine the kinetic coefficients for a single composite rate class.
Simulations conducted solely with these data sets must be carefully considered, given the
various sources of model uncertainty.
Material in a batch flotation cell often floats much quicker than similar material in an
industrial cell. While various scaling factors (energy dissipation, froth volume, and gas rate)
contribute to this difference, the simple difference in reactor type cannot be understated. As
shown in Figure 2.5, the plug-flow reactor shows considerably elevated recovery values within
the typical operating region of 2 to 6 kτ units. To further illustrate this point, Figure 3.2
shows recovery for batch and continuous flotation tests conducted under similar conditions
(Noble, 2012). In these tests, all operational parameters (material type, energy intensity,
cell dimensions, froth height, and chemical dosage) were held constant, while only varying
the reactor type and superficial gas rate. The data from the batch test was used to fit a
plug-flow model, and the derived kinetic parameters were then used to predict the continuous
performance via a perfectly-mixed model (the energy intensity of the small cell was sufficient
to justify this assumption as opposed to an intermediate flow model). The results show good
agreement and, more importantly, that in some cases a five-fold increase in residence time
may be required to produce batch-derived recovery in a continuous cell (see residence time
required to achieve 70% recovery: 1.8 minutes in batch cell, 9.9 minutes in continuous cell).
3.3.3 Calculation Strategy
After the specific models have been built from experimental data, simulations may fi-
nally be conducted to determine how user-specified operational and equipment conditions
(i.e. feed rate, water addition, gas rate, circuit arrangement, equipment specifications) in-
fluence the plant’s final recovery and grade. The models presented in the preceding sections
are only applicable to a single cell. During simulation, these model calculations are extended
so that the predictions are applicable for a circuit of interconnected and interdependent unit
cells.
The calculation approach used in FLoatSim is sequential modular with iteration. The
simulation begins with the specified feed conditions and passes that information to the first
unit. The operational parameters and established models unique to that unit are used
to determine the recovery to concentrate and the rejection to tailings for each discretized
element. Those data are sent to the downstream units, and the calculations are repeated.
If recycle streams are present, the simulation iterates until a stable steady-state is reached.
After the first iteration, the recycle streams are determined and the flowsheet is reevaluated
using the updated values. This procedure is then repeated until a desired convergence
threshold is reached. A simple example of the sequential modular iteration algorithm
is shown in Figure 3.3.
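As a minimal, hypothetical sketch of this sequential modular approach, the Python fragment below solves a two-unit rougher–cleaner circuit in which the cleaner tailings recycle to the rougher feed. Each unit is reduced to a single mass recovery fraction, and the feed rate and recoveries are invented for illustration; they do not correspond to any FLoatSim model.

```python
def simulate_circuit(feed, r_rougher, r_cleaner, tol=1e-6, max_iter=100):
    """Sequential modular solution of a rougher-cleaner circuit with recycle.

    feed      : fresh feed mass rate to the rougher (tph)
    r_rougher : fraction of rougher feed reporting to rougher concentrate
    r_cleaner : fraction of cleaner feed reporting to cleaner concentrate
    Cleaner tailings are recycled to the rougher feed.
    Returns (final concentrate, final tailings, iterations used).
    """
    recycle = 0.0                           # initial guess for the recycle stream
    for iteration in range(1, max_iter + 1):
        rougher_feed = feed + recycle       # combine fresh feed and recycle
        rougher_con = r_rougher * rougher_feed
        rougher_tail = rougher_feed - rougher_con
        cleaner_con = r_cleaner * rougher_con
        cleaner_tail = rougher_con - cleaner_con
        # convergence check: has the recycle stream stopped changing?
        if abs(cleaner_tail - recycle) < tol:
            return cleaner_con, rougher_tail, iteration
        recycle = cleaner_tail              # update the recycle and iterate again
    raise RuntimeError("circuit did not converge")

con, tail, n = simulate_circuit(feed=100.0, r_rougher=0.40, r_cleaner=0.80)
print(f"concentrate = {con:.2f} tph, tailings = {tail:.2f} tph, iterations = {n}")
```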
The error associated with an iterative circuit solution is governed by the number of
iterations and the circuit complexity. For a given circuit, the simulation error is reduced
exponentially by increasing the number of iterations. Furthermore, as the complexity of the
circuit increases, the number of iterations required to achieve a desired accuracy increases.
An example of this principle is shown in Figure 3.4. The circuits under consideration in this
example are simple counter-current cleaner configurations of a designated size (two to five
units). The concentrate from each unit passes serially to the next, while the tailings pass to
the prior unit. Final circuit concentrate is produced from the concentrate of the final cell,
while the final circuit tailings are produced from the tailings of the first cell.
3.4 Software Development and User Interface
3.4.1 Overall Simulation Work Flow
The FLoatSim software suite includes a graphic interface which permits user-defined
circuit configurations, the flotation models and simulation routines, as well as two supple-
mentary data fitting modules for laboratory and pilot-scale data analysis. All of the software
has been implemented within the Microsoft Excel platform. FLoatSim uses many of
Excel’s native functions and capabilities, while the models and graphical user interface have
been embedded using the Visual Basic for Applications (VBA) programming language. The
ubiquity of Excel’s interface minimizes user startup time and provides a number of familiar
analytical tools (i.e. plotting, data comparison, etc.), while minimizing development time
and new programming requirements. Furthermore, the VBA language easily allows the im-
plementation of new or user-defined models. VBA’s inherent simplicity extends this feature
to users with little or no programming experience.
Figure 3.5 shows the overall work-flow diagram describing the generic simulation ap-
proach utilized by the FLoatSim software. The start terminator segregates into three process
paths: one which analyzes and synthesizes the experimental data, one which defines the op-
erational parameters, and one which specifies the equipment. These three paths reunite to
define the flotation models immediately prior to the simulation. The simulation steps which
are enclosed in the dashed rectangles are part of FLoatSim’s standard analytical tools, ei-
ther by the data fitting modules (blue) or the simulation package (red). The data analysis
process path (the right side) is significantly more complex than the other two, given the
various data types and analysis steps required. This complexity gives rise to the data anal-
ysis modules which use FLoatSim’s model library to predict flotation rates from laboratory,
pilot, full-scale, or locked-cycle data. The FLoatSim suite also includes standard import and
export data features (depicted as green arrows in Figure 3.5). The data import function
retrieves kinetic and mass parameters from the laboratory fitting module, while the data
export function produces a summary of user-specified simulation outputs.
3.4.2 Data Fitting Software
The FLoatSim software suite includes two data fitting modules which provide a standard
methodology for data acquisition and analysis: RateFittingLab and RateFittingPlant. These
modules interface with the modeling and simulation routines to allow quick data import and
export. Since the procedures for laboratory kinetics testing are fairly standardized, the data
analysis for the RateFittingLab module benefits from a straightforward interface. Figure 3.6
shows the workspace for this module.
To ensure the most valid and scalable kinetic coefficients, the batch flotation test should
be carefully planned and conducted. The chemical conditions and feed material used in the
test must closely mimic the expectations of the full-scale plant. If a paddle is used to pull
froth from the cell, the froth pull rate should remain constant throughout the test, even
as the froth volume lessens in the latter stages. A steady pull rate may be verified by
analyzing the water recovery versus time plot. Since water is a single component, the results
should show that the same rate adequately predicts recovery throughout the entire test.
The identification of multiple water rates is usually an indication that the pull rate was not
constant throughout the test. Finally, if a paddle is used, only froth should be pulled by
the paddle. If the paddle pulls pulp along with the froth, the test data will overestimate
entrainment. As water is removed from the cell, fresh water must be added to maintain a
constant level. The amount of water added should be monitored and recorded to properly
determine the final water recovery.
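One simple way to apply this single-rate check is sketched below: a single first-order water rate is fitted to the cumulative water recovery recorded at the end of each froth-pull interval, and the residuals are inspected for systematic drift (which would suggest a non-constant pull rate). The times, recoveries, and first-order form are purely illustrative and are not taken from any real test.

```python
import math

# Hypothetical cumulative water recovery (fraction of cell water) at the end of
# each froth-pull interval in a batch test -- values invented for illustration.
times = [0.25, 0.5, 1.0, 2.0, 3.0, 5.0]          # minutes
water_recovery = [0.04, 0.08, 0.14, 0.26, 0.35, 0.50]

def ssq(k):
    """Sum of squared errors for a single first-order water rate k (1/min)."""
    return sum((r - (1.0 - math.exp(-k * t))) ** 2 for t, r in zip(times, water_recovery))

# Coarse 1-D search for the best single rate (adequate for a quick consistency check).
k_best = min((k / 1000.0 for k in range(1, 2000)), key=ssq)

print(f"best single water rate ~ {k_best:.3f} 1/min")
for t, r in zip(times, water_recovery):
    pred = 1.0 - math.exp(-k_best * t)
    print(f"t = {t:4.2f} min  measured = {r:.2f}  predicted = {pred:.2f}  residual = {r - pred:+.3f}")
```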
The best data sets will track the water recovery and take advantage of size-by-size anal-
ysis. Since particle size is known to influence bubble-particle collision rates and entrainment
susceptibility, more particle size classes will inherently produce more accurate simulations.
The FLoatSim software has been designed to accommodate up to ten size classes. While
simulations can be conducted with just one size class, at least three (fine, medium, and
coarse) are needed to properly account for entrainment effects in most flotation systems. The water
recovery data can be used to predict entrainment in the lab test as well as the final water
recovery for the plant.
Experimental assays are usually reported on an elemental basis (e.g. %Cu); however,
flotation behavior is largely driven by mineral components. Particulate chalcopyrite, rather
than elemental copper, is recovered in a flotation cell. Mineralogical information, unique to
the flotation system under inspection, must be known in order to convert elemental assays to
mineral assays. FLoatSim has been designed to accept non-stoichiometric mineral formulas
(e.g. Fe2.5S3.7) as a means to account for multi-mineral, similar element systems. For exam-
ple, a flotation system may be known to contain three copper bearing minerals: chalcocite
(Cu2S), chalcopyrite (CuFeS2), and cuprite (Cu2O). The most accurate simulations would
track the flotation rate for each of these minerals separately. Unfortunately, such a simu-
lation would require mineralogical data for each time interval of the batch test in order to
apportion the elemental copper assay among the constituent minerals. Conversely, if
the mineralogical distribution of the feed is known, the user may make a simplifying assump-
tion and lump all of these copper-bearing minerals into a single hypothetical copper mineral
that has non-stoichiometric element coefficients and floats at a single rate. Obviously, this
approach introduces a simplifying assumption which may reduce the simulation’s validity, but
few reliable and efficient alternative approaches exist, beyond time-dependent mineralogical
analysis.
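The conversion itself is a simple stoichiometric scaling: the mineral assay is the elemental assay divided by the mass fraction of that element in the (possibly non-stoichiometric) mineral formula. The sketch below illustrates the idea for chalcopyrite; the atomic masses and the 0.80% Cu feed grade are assumed values used only for this example.

```python
# Atomic masses (g/mol) used for the mass-fraction calculation.
ATOMIC_MASS = {"Cu": 63.55, "Fe": 55.85, "S": 32.07, "O": 16.00}

def element_mass_fraction(formula, element):
    """Mass fraction of `element` in a mineral given as {element: moles}, e.g. CuFeS2."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return ATOMIC_MASS[element] * formula.get(element, 0.0) / total

def mineral_assay(elemental_assay, formula, element):
    """Convert an elemental assay (%) to the corresponding mineral assay (%)."""
    return elemental_assay / element_mass_fraction(formula, element)

chalcopyrite = {"Cu": 1, "Fe": 1, "S": 2}
print(f"Cu in CuFeS2: {element_mass_fraction(chalcopyrite, 'Cu'):.1%}")
print(f"0.80% Cu expressed as chalcopyrite: {mineral_assay(0.80, chalcopyrite, 'Cu'):.2f}% CuFeS2")
```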
The RateFittingPlant module is used to interpret and analyze data collected from con-
tinuous flow tests, including pilot plants, full-scale plants, and locked cycle tests. The data
from these tests are usually collected at a single residence time, unlike batch kinetics data
which is collected at a range of flotation times. Consequently, the derived rate data is only
valid for a narrow operating range around the tested residence time. Furthermore, the single
data point derived from continuous flow tests is not sufficient to meaningfully fit floatabil-
ity class distributions. Without introducing an assumed distribution, the RateFittingPlant
module can only fit rates for a single class (e.g. fast floating with no slow or non-floating
components).
Since the experimental procedure and circuit arrangements vary considerably between
different continuous flow tests, the interface for this module is more open-ended and requires
more consideration from the user compared to the batch fitting routines. The workspace for
the RateFittingPlant module is shown in Figure 3.7.
Figure 3.7: FLoatSim RateFittingPlant workspace.
Both modules use Excel’s Solver routine to perform the final parameter estimation
optimization problem. This routine determines the kinetic coefficients by minimizing the
weighted sum of the squared error (WSSQ) between the experimental data and the predicted
performance. Since Solver uses a gradient-based search routine, the “optimized”
solution is susceptible to local minima. To avoid this problem, FLoatSim ensures that
the best starting guesses (as predicted by the experimental data) are utilized.
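For illustration, the sketch below poses the same WSSQ objective for a batch (plug-flow) test with fast, slow, and non-floating classes and solves it with a general-purpose minimizer instead of Excel’s Solver. The recovery data, weights, starting guesses, and bounds are all invented for the example and do not reproduce either fitting module exactly.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical batch (plug-flow) data: time (min) and cumulative recovery (fraction).
t_data = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0])
r_data = np.array([0.17, 0.33, 0.43, 0.54, 0.62, 0.67])
weights = np.ones_like(r_data)            # equal weighting for this illustration

def model(params, t):
    """Three-class plug-flow recovery: fast + slow classes, remainder non-floating."""
    m_fast, m_slow, k_fast, k_slow = params
    return m_fast * (1 - np.exp(-k_fast * t)) + m_slow * (1 - np.exp(-k_slow * t))

def wssq(params):
    """Weighted sum of squared errors between data and model."""
    return np.sum(weights * (r_data - model(params, t_data)) ** 2)

x0 = [0.3, 0.4, 2.0, 0.3]                 # starting guesses: mass fractions and rates
bounds = [(0, 1), (0, 1), (0, 10), (0, 10)]
fit = minimize(wssq, x0, bounds=bounds, method="L-BFGS-B")
m_fast, m_slow, k_fast, k_slow = fit.x
print(f"fast: {m_fast:.2f} @ {k_fast:.2f}/min   slow: {m_slow:.2f} @ {k_slow:.2f}/min"
      f"   non-floating: {1 - m_fast - m_slow:.2f}   WSSQ = {fit.fun:.4f}")
```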
3.4.3 Simulation Software
After the kinetic coefficients have been determined from laboratory analysis, the FLoat-
Sim simulation package may be used to predict the performance of various circuit configu-
rations and equipment specifications. The work flow for conducting a simulation is driven
by the custom ribbon tab icons. These icons and their respective descriptions are shown in
Figure 3.8 and Table 3.1.
The FLoatSim software contains a custom user interface which allows streamlined flow-
sheet generation, data entry, and solution visualization. Figure 3.9 shows the standard steps
in the FLoatSim simulation process. First, the user enters a custom flowsheet. Excel’s stan-
dard drawing tools are used to draw flotation cells, splitters, junctions, slurry streams, water
streams, and feed streams. These items may then be connected to form a user-specified
Table 3.1: Summary of FLoatSim Toolbar Buttons
Toolbar Button Action
Flotation Cell Places a flotation cell in the flowsheet drawing tab.
Junction Places a junction in the flowsheet drawing tab.
Splitter Places a splitter unit in the flowsheet drawing tab.
Feed Places a feed stream in the flowsheet drawing tab.
Water Places a water stream in the flowsheet drawing tab.
Stream Places a general stream in the flowsheet drawing tab.
Create Simulation Generates the circuit connection matrix and model
tabs for the current flowsheet configuration.
Get Feed Data Imports feed data from the lab data fitting module.
Get Rate Data Imports kinetic data for current flotation model tab.
Calculate Calculates flowsheet (resets iteration).
Carrying Capacity Applies carrying capacity restriction.
Add Stream Info Adds stream info boxes for all streams.
Delete Stream Info Deletes all stream info boxes.
Reroute Connections Reroutes stream box connections.
Back to Flowsheet Navigates back to the flowsheet tab.
Clear Flowsheet Deletes current simulation.
Export Data Exports user-defined simulation data.
Help Opens help menu.
(a) Add Units Toolbar
(b) Add Streams Toolbar (c) Actions Toolbar
(d) Stream Info Toolbar (e) Flowsheet Options Toolbar
Figure 3.8: FLoatSim Custom Ribbon Toolbars.
circuit configuration.
After the flowsheet is drawn, the initial conditions and simulation parameters are en-
tered. The FLoatSim simulator requires three main types of data to build the models and
conduct the simulation: equipment characteristics, kinetic coefficients, and operational char-
acteristics. The equipment characteristics (flotation cell size, froth surface area, and weir lip
length) are extracted from a user-defined equipment database. Other values, such as unit
Peclet number, air holdup and bank dimensions (number of parallel rows and cells in series)
are user-specified. Each flotation cell element drawn on the flowsheet may be used to repre-
sent a different cell type. The kinetic coefficients are manually entered or imported from the
data fitting modules. Finally, the operational parameters (feed rate and feed percent solids)
are entered manually onto the appropriate spreadsheet tab.
Once all the data and simulation parameters are input, the simulation may be calcu-
lated. FLoatSim uses Excel’s standard iterative calculation engine to resolve recirculating
loads. However, the FLoatSim calculation algorithm contains a hard zero-value reset to
ensure that the iteration does not produce a divergent or erroneous solution. The default
iteration convergence criterion is an absolute change of less than 0.001 in any spreadsheet value or a maximum of 100 iterations.
Table 3.2: Coal Case Study: Laboratory Data
Product (min)    Weight (%)    Ash, dry (%)    Combustible (%)
0.25 17.00 5.91 94.09
0.50 16.60 6.52 93.48
1.00 10.20 10.81 89.19
2.00 11.10 12.45 87.55
3.00 7.80 17.59 82.41
5.00 5.00 22.54 77.46
Tail 32.30 76.89 23.11
Con Total 67.70 10.44 89.56
Calc. Head 100.00 31.91 68.09
Additional iterations may be requested by the user. Once the calculation is com-
plete, the user may analyze the results using custom stream legends or via FLoatSim’s data
export feature.
3.5 Case Study: Coal Flotation
3.5.1 Raw Data
To demonstrate the capability of the FLoatSim suite, a coal flotation scale-up simu-
lation study was conducted. Batch kinetics data was acquired for the circuit feed. The
mass balanced data report delivered by the metallurgical lab is included in Table 3.2. This
laboratory data is presented for the composite feed (no size-by-size analysis) and includes
assay information for ash and combustible matter.
3.5.2 Rate Fitting
This system was discretized using the two component assays: ash and coal. These
components represent the prominent distinctions of floatable and non-floatable particulate
matter in the feed material. Since no size data was recorded, the simulation utilized a single
size class.
Table 3.3: Coal Case Study: Kinetic Parameter Summary
Total Mass Coal Ash
Distributions (%)
Fast 17.12 93.97 6.03
Slow 50.68 87.20 12.80
Non 32.20 24.25 75.75
Rates (1/min)
Fast – 1.89 0.59
Slow – 1.00 0.52
Non – 0.00 0.00
3.5.3 Simulation
After collecting the laboratory data, performing the mass balance adjustments, and
determining the rate constants, the flotation models were constructed and the desired circuit
configuration was simulated. For this case study, the simulator was used to determine the
expected ash and yield from six 30 m3 cells in series, as well as the cumulative ash and yield
from each cell down the bank. The feed rate was set to 100 metric tonnes per hour at 5%
solids.
The 30 m3 cells have a standard froth area of 7.24 m2. Historical data shows that the
Peclet number for this unit is 2, and the unit carrying capacity for a fine coal application
is 1.4 tph/m2. No scaling is expected between the batch and full-scale S values, and the
b
air hold up is expected to be 15% (i.e. 85% effective volume). A froth recovery of 40% is
assumed.
Since no water recovery data was recorded in the batch test, a simplifying assumption
was made to estimate the water recovery in the simulation. In the laboratory analysis, the
batch test data reported a tailings percent solids between 4.0 and 4.5%. This value is a
reasonable estimation for cell-to-cell performance, and a feed percent solids in this range will
not be deleterious to downstream flotation. Consequently, the water recovery rate of each
cell was adjusted until the tailings percent solids was between 4 and 4.5%. Second, since no
information was available to justify a decision, a value of zero was assumed for the entrain-
ment partition. While this assumption deviates from reality, the non-zero rate constants for
the ash components already account for some gangue recovery, since entrainment was not
used to fit the data.
Figure 3.12: Coal case study simulation flowsheet.
After the assumptions and input data were resolved, the FLoatSim software was used
to conduct the simulation. The flowsheet drawing tools were used to construct six cells in
series, using a node after each cell to show the cumulative froth product. The final flowsheet
is shown in Figure 3.12.
After the flowsheet was constructed, feed data was entered into the appropriate cells of
the Feed Streams tab (Figure 3.13). The feed mass (100 tph) was entered, and Excel’s “goal
seek” command was used to determine the feed water required to attain 5% solids. The rest
of the sheet was completed using data from other laboratory analyses.
Next, the model tabs were generated by FLoatSim’s create simulation algorithm. An
individual tab was created for each of the six flotation cells and five junctions shown on
the flowsheet. Using the assumptions and input data described above, each model tab was
completed sequentially. The equipment database on each tab was adjusted to include the
desired cell geometry, and the get rate data button was used to import the kinetic parameters
from the RateFittingLab module. The data entry field for the flotation cell tabs is shown in
Figure 3.14.
After all of the basic data was entered, the calculate button was pressed to initialize the
cell-by-cell modular calculations. At this point, neither the water recovery nor the carrying
capacity limitations were included in the calculations. When the laboratory data is sufficient,
the water recovery rate for each cell can be determined by fitting an experimental kinetic
coefficient. With the water recovery for each cell known, the carrying capacity button could
be used to implement the carrying capacity limitation for all cells simultaneously. Unfortu-
Table 3.4: Coal Case Study: Froth Recovery and Water Rate Values
Cell            Water Rate Input (1/min)    Froth Recovery Input (fraction)
Float Cell 1 0.10 0.16
Float Cell 2 0.10 0.20
Float Cell 3 0.50 0.26
Float Cell 4 0.55 0.33
Float Cell 5 0.45 0.40
Float Cell 6 0.25 0.40
nately, the work flow for this simulation was altered to account for the unique assumption
used to determine the water recovery. As mentioned above, the water recovery was adjusted
until the tailings percent solids was between 4.0 and 4.5%. The tailings percent solids is
dependent upon the mass recovery which is dependent upon the status of the carrying ca-
pacity. Furthermore, the solid and water recovery of downstream cells is dependent upon the
performance of prior cells. As a result of these dependencies, the order of the adjustments
was logically considered.
First, the carrying capacity limitation for the first cell was imposed. The water recovery
from the first cell will not influence this value; however, the desired tailings percent solids
is influenced by this value. As a result, the overall solids recovery must be reconciled before
the water recovery. The carrying capacity for Cell 1 was implemented by overwriting the
standard froth recovery (0.4) with the value required to meet carrying capacity (in this
case, 0.16). Next, the water recovery was adjusted until the desired value was reached.
This procedure was then repeated cell-by-cell, down the bank. Table 3.4 summarizes the
final froth recovery values and water rates for each cell which satisfy the original
assumptions.
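The cut-back step at the heart of this procedure can be expressed as a small loop: if the kinetically predicted concentrate rate exceeds the carrying-capacity limit of the froth surface (7.24 m2 × 1.4 tph/m2, or roughly 10.14 tph for these cells), the froth recovery is reduced incrementally until the limit is respected. The sketch below shows that loop only; the `predict_con_tph` function is a stand-in for the full FLoatSim cell model, and the linear stand-in used in the example is not the real (nonlinear) dependence.

```python
def froth_recovery_for_capacity(predict_con_tph, froth_area_m2=7.24,
                                capacity_tph_per_m2=1.4, rf_standard=0.40, step=0.01):
    """Reduce froth recovery until the predicted concentrate rate meets the carrying capacity.

    predict_con_tph: callable mapping froth recovery -> predicted concentrate rate (tph);
                     a stand-in for the full cell model.
    """
    limit = froth_area_m2 * capacity_tph_per_m2        # maximum concentrate flow, ~10.14 tph
    rf = rf_standard
    while predict_con_tph(rf) > limit and rf > step:
        rf -= step                                     # incrementally cut back froth recovery
    return rf, limit

# Hypothetical stand-in: concentrate rate taken as proportional to froth recovery.
rf_final, limit = froth_recovery_for_capacity(lambda rf: 30.0 * rf)
print(f"capacity limit = {limit:.2f} tph, froth recovery reduced to {rf_final:.2f}")
```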
After all values were input, the final simulation was calculated. To analyze the
results, stream info boxes were added to the flowsheet, showing the distribution and grades
of various components for each stream (Figure 3.15). Finally, the export data button was
used to conduct further analysis on the values produced from the simulation. This post-
processing shows the percent yield and percent ash as a function of residence time down the
bank (Table 3.5).
3.5.4 Discussion
The results of the case study simulation demonstrate the simulator’s capability and
highlight some of the fundamental differences between batch and continuous reactor kinet-
ics. As described in Section 3.3.2, batch data from the laboratory is fitted by a plug-flow
reactor model, while the plant data is projected using an axially-dispersed reactor model.
While this mere difference in reactor type promotes some deviation between the experimental
and simulated data, the implementation of a carrying capacity restriction in the simulation
further propagates distinction.
Cumulative yield and cumulative ash for the experimental and simulated data are pre-
sented as a function of flotation time in Figure 3.16. Along with the experimental data and
the standard simulation (which includes carrying capacity restrictions), a third data series
is plotted showing the simulation results assuming the carrying capacity restriction was ig-
nored (labeled the “Kinetic Only” data series, since recovery in this simulation is driven
entirely by kinetics). This data is included to isolate the difference between the plug-flow
and axially-dispersed reactor models as well as the true influence of the cell carrying ca-
pacity. These three curves together indicate that a simple cells-in-series plant will never
outperform the batch cell in terms of yield at a given residence time. This phenomenon
is largely driven by the difference in reactor models. In theory, as more cells are added in
series, the axially-dispersed reactor can approach the plug-flow behavior; however, moderate
deviation is expected when only six cells are utilized. The magnitude of this difference is
quantified by comparing the batch data and the kinetic only curves in Figure 3.16.
Alternately, the difference between the kinetic only data series and the carrying capacity
limited data series is driven by the imposed restriction in concentrate flow rate. The cell
geometry and metallurgical conditions in this case study dictate that the concentrate weir
has a maximum flow capacity of 10.14 tph, regardless of the kinetic prediction. This physical
restraint can substantially reduce the expected yield at a given residence time. For example,
at four minutes of residence time, the kinetic simulation dictates that the yield should be 65%;
however, the available froth surface area in the plant is not capable of physically producing
this amount of concentrate. According to the carrying capacity limited simulation, the
anticipated yield will instead be 45% at that residence time. For the case study simulation,
the first four cells in the bank were all restricted by carrying capacity, while the last two
were restricted only by kinetics.
The cumulative ash plot in Figure 3.16 shows that the reduction in yield of the simulated
plant is compensated by an increase in product quality. At a given residence time, the
simulation shows substantially reduced product ash when compared to the same residence
time in the batch case. In both plots, the carrying capacity limited simulation shows a
strong deviation from the standard kinetic curve. Rather than the typical rate-based recovery
curves, the carrying capacity limited curve shows more linear behavior for the cells influenced
by carrying capacity.
Figure 3.17: Separation efficiency plots for experimental and simulated values. The batch
data series shows the experimental data gathered from bench-scale laboratory testing. The
carrying capacity (CC) limited data series corresponds to the case study simulation which
included realistic carrying capacity restrictions, while the kinetic only data series corresponds
to a purely kinetic simulation which ignores carrying capacity limitations.
Given the balance of reduced yield but increased product quality, the experimental data
and the simulated data are roughly equivalent in terms of separation efficiency. Figure 3.17
shows cumulative yield plotted against cumulative ash as well as carbon recovery plotted
against ash rejection. Both of these plots are commonly used in coal preparation as indicators
of separation efficiency. In the yield-ash curve, points approaching the northwest corner (high
yield, low ash) represent the greatest separation efficiencies, while the high ash rejection, high
carbon recovery points (northeast corner) are desired in the latter graph. Since the same feed
characteristics were used in both cases, either curve is capable of producing a fair comparison.
Typically, a plug-flow reactor should be more selective than an axially-dispersed reactor.
However, the case study simulation shows that both cases are quite similar, and either is
capable of producing a greater efficiency at different points on the curve. Moving left to right
along the cumulative yield - cumulative ash curve (or right to left along the carbon recovery
- ash rejection curve), the carrying capacity simulated curve shows the best efficiency at the
low product ash (or high ash rejection) points. These points correspond to the low residence
time values in the data sets. Alternatively, along the midpoints, the batch data shows the
greatest efficiency, while the carrying capacity limited curve regains the optimal position at
the high product ash (or low ash rejection) points.
This deviation from the reactor-theory expectations is explained by the inclusion of a
froth drop-back model. If the simulator only included a pulp recovery model, the batch
data curve would always outperform the simulator curve. However, the froth drop-back
generates a refluxing action. Material that is rejected from the froth returns to the pulp
and has an opportunity to re-float. While the froth drop-back model is non-selective, the
inclusion of froth reflux increases the selectivity of the entire process, by re-exposing rejected
particles to the selective pulp reactor. The degree of the selectivity increase is directly related
to the magnitude of the froth drop-back. For this simulation, the froth drop-back in the
carrying capacity limited cases was extremely high, sometimes as great as 84% (Table 3.4).
The resulting balance between the selectivity enhancing froth drop-back and the selectivity
decreasing axially-dispersed reactor model causes the simulation curves to “intertwine” with
the batch data curves.
The selectivity-enhancing phenomenon associated with froth drop-back is further demon-
strated by comparison of the kinetic only and carrying capacity limited simulation data.
While the magnitude of difference is extremely low, the carrying capacity limited simulation
always exhibits a higher separation efficiency than the kinetic only curve. This difference
is most evident in the low residence time points (low cumulative ash, high ash rejection),
and it diminishes as the residence time increases. The greatest difference in froth drop-back
between the two simulations is at the low residence times, where the carrying capacity lim-
itation is most pronounced. The increased froth drop-back at these points causes a higher
degree of reflux and thus a greater separation efficiency. At the higher residence time points
(where the carrying capacity limited simulation is actually driven by kinetics), the separation
efficiency of the two simulations is identical.
From a practical standpoint, the separation efficiencies in all three cases are roughly
equivalent. The predominant difference between the carrying capacity limited simulation,
the kinetic only simulation, and the batch data is the residence time required to achieve a
desired yield. The extremely high refluxing in the first carrying capacity constrained cell may
lead to enhanced separation performance, but this difference is quickly reduced as further
products are added down the bank. This simulation indicates that operational enhancements
which can mitigate carrying capacity restrictions will allow substantial reductions in required
cell volume.
3.6 Summary and Conclusions
This section has described the FLoatSim software suite. The flotation modeling
theory is derived from a unique four-reactor model which independently considers pulp re-
covery, froth recovery, entrainment recovery, and carrying capacity. The pulp recovery model
is based on intermediate flow conditions in an axially-dispersed reactor. As a result, pulp
recovery is a function of the particles’ kinetic coefficients, the cell’s residence time, and the
cell’s degree of mixing (or Peclet number). The froth recovery model is user-specified but
derives from a plug-flow model influenced by the gas residence time in the froth and a rate
of froth drop-back. The entrainment model shows that entrainment recovery is proportional
to the water recovery and a size-dependent degree of entrainment fitting parameter. Finally,
the carrying capacity imposes a strict limit on the maximum concentrate flow rate which
may be produced per unit of cell surface area. In cases that exceed the carrying capacity
restriction, the cell’s froth recovery is incrementally reduced until the physical constraints
are met.
Data from laboratory, pilot-scale, or full-scale testing is used to determine the unique
fitting parameters to the general discretized models. FLoatSim’s data fitting modules adjust
the model parameters (kinetic coefficients and mass proportions) to minimize the weighted-
sum-of-the-squared errors between the experimental data and the model predictions for the
experimental condition under investigation. This data analysis approach uses a three-level
discretization which can include up to ten size classes, five mineral classes, and three floata-
bility classes.
FLoatSim’s sequential modular calculation approach extends the single cell models to a
user-specified plant configuration. Equipment characteristics (cell size, froth dimensions, and
unit Peclet number) as well as operational conditions (feed rates, water addition rates, and
gas rates) are used with the model parameters to ultimately predict the plant performance
characteristics, including grade, recovery, and residence time. These parameters may then
be adjusted, and subsequent simulation can indicate optimal performance strategies.
The overall work-flow has been demonstrated for a coal flotation case study. This
exercise shows how the simulation package may be used to analyze batch data and predict
performance, in this case, for a simple rougher bank. Post-processing of the simulated
data demonstrates how the difference in the reactor model as well as the carrying capacity
limitations explain the significant deviations between the laboratory test results and the
projected full-scale performance.
Acknowledgments
The author would like to thank Dr. Serhat Keles for his initiative in designing the
user interface and providing ideas and suggestions for general software usability. Financial
support for the FLoatSim Software package was provided by FLSmidth Minerals.
3.7 Bibliography
Gorain, B., Harris, M., Franzidis, J., & Manlapig, E. (1998). The effect of froth residence
time on the kinetics of flotation. Minerals Engineering, 11(7), 627–638.
Levenspiel, O. (1999). Chemical reaction engineering. Wiley.
Mathe, Z., Harris, M., O’Connor, C., & Franzidis, J. (1998). Review of froth modelling in
steady state flotation systems. Minerals Engineering, 11(5), 397–421.
Noble, A. (2012). Laboratory-scale analysis of energy-efficient froth flotation rotor design.
Unpublished master’s thesis, Virginia Polytechnic Institute and State University.
Vera, M., Franzidis, J., & Manlapig, E. (1999). Simultaneous determination of collection
zone rate constant and froth zone recovery in a mechanical flotation environment. Minerals
Engineering, 12(10), 1163–1176.
Vera, M., Mathe, Z., Franzidis, J., Harris, M., Manlapig, E., & O’Connor, C. (2002). The
modelling of froth zone recovery in batch and continuously operated laboratory flotation
cells. International Journal of Mineral Processing, 64(2), 135–151.
Vianna, S. (2011). The effect of particle size, collector coverage and liberation on the
floatability of galena particles in an ore. Unpublished doctoral dissertation, The University
of Queensland.
Chapter 4
Derivation of Rate Constant
Compositing Formulas
(ABSTRACT)
Several mineral processing unit operations are described by first-order kinetic reactor
models. Nearly all contemporary froth flotation models incorporate one or more kinetic
models to describe various sub-processes, including pulp recovery and froth recovery. Fur-
thermore, contemporary approaches utilize a “lumped parameter” model which describes the
bulk flotation behavior as the sum of various components (fast, slow, and non-floating). In
order to express a distribution of rate constants as a single apparent rate, the values must be
composited. Unlike other physical properties, rate constants cannot be easily combined by
simple mass or volume weighted averages. This chapter describes the derivation and appli-
cation of more sophisticated reactor-dependent rate constant compositing formulas. These
formulas are shown to be time dependent, as the time in which the rates are composited
influences the apparent bulk rate. Sample calculations are shown for the various formulas
and two applications of this theory are presented, explaining the role of compositing in the
observable rate limits and simulation discretization error.
4.1 Introduction
Kinetic models are often used in mineral processing to describe unit operations which
have a strong time dependency. The most common example of this modeling approach is
given by flotation (Sutherland, 1948; Tomlinson & Fleming, 1965; Lynch, Johnson, Manlapig,
& Thorne, 1981; Fichera & Chudacek, 1992), though other metallurgical processes, such as
grinding (Lynch & Bush, 1977), pelletization (Fuerstenau, Kapur, & Mitra, 1982), and
leaching (Beolchini, Papini, Toro, Trifoni, & Vegliò, 2001; Mellado, Cisternas, & Gálvez,
2009) have also been modeled as kinetic reactors.
Despite the range of potential applications and physical environments, performance in
a kinetic reactor is defined in terms of the reactor type, the mean particle residence time
(τ), and a kinetic coefficient (k). For a plug-flow reactor, the kinetic recovery is given by:
R_{\mathrm{plug}} = 1 - e^{-k\tau}.    (4.1)

The perfectly-mixed model is given by:

R_{\mathrm{mixed}} = \frac{k\tau}{1 + k\tau}.    (4.2)

Finally, the axially-dispersed model is given by:

R_{\mathrm{ADR}} = 1 - \frac{4A\exp\{Pe/2\}}{(1+A)^{2}\exp\{(A/2)Pe\} - (1-A)^{2}\exp\{(-A/2)Pe\}}, \qquad A = \sqrt{1 + 4k\tau/Pe}.    (4.3)
where the degree of axial mixing is given by the Peclet number (Pe) (Levenspiel, 1999).
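For reference in the calculations that follow, these three recovery models can be transcribed directly; the short Python sketch below does so (the sample k, τ, and Pe values are arbitrary and chosen only for illustration).

```python
import math

def recovery_plug(k, tau):
    """Plug-flow recovery, Eq. 4.1: R = 1 - exp(-k*tau)."""
    return 1.0 - math.exp(-k * tau)

def recovery_mixed(k, tau):
    """Perfectly-mixed recovery, Eq. 4.2: R = k*tau / (1 + k*tau)."""
    return k * tau / (1.0 + k * tau)

def recovery_adr(k, tau, pe):
    """Axially-dispersed reactor recovery, Eq. 4.3, with Peclet number pe."""
    a = math.sqrt(1.0 + 4.0 * k * tau / pe)
    num = 4.0 * a * math.exp(pe / 2.0)
    den = (1.0 + a) ** 2 * math.exp(a * pe / 2.0) - (1.0 - a) ** 2 * math.exp(-a * pe / 2.0)
    return 1.0 - num / den

k, tau = 0.5, 4.0   # arbitrary example values
print(recovery_plug(k, tau), recovery_mixed(k, tau), recovery_adr(k, tau, pe=2.0))
```

For the same kτ, the ADR result falls between the plug-flow and perfectly-mixed values, as expected.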
In each of these models, τ represents the mean residence time of reactive (or in the
case of flotation, floatable) particles which exhibit a reaction (or flotation) rate of k. Often,
these two factors are combined to form the dimensionless kτ factor. In the case of flotation,
the kinetic coefficient is modeled to be an intrinsic physical property of the material which
has a defined value for a given experimental condition and particle properties. For example,
experiments show that particles of similar composition but different sizes float at different
rates (Gaudin, Schuhmann Jr, & Schlechten, 1942). These disparities cause many researchers
to use a distributed parameter rate model, with the distribution classes referencing the
various driving forces of rate constant disparity. For example, most flotation models at least
include mineral and size classes, reflecting the knowledge that particles of different mineral
types and of different size classes float at different rates (Fichera & Chudacek, 1992).
Despite this distributed parameter approach, researchers have shown that even par-
ticles of similar size and composition still exhibit a distribution of rate constants (Polat &
Chander, 2000). Consequently, additional factors beyond size and mineral composition drive
changes in the rate constant. Some of these factors may include physical or hydrodynamic
properties such as particle shape, particle zeta potential, particle contact angle, degree of
surface oxidation, bubble-particle collision turbulence, the kinetic energy of detachment, or
the film thinning rate (Sutherland, 1948; Sherrell, 2004; Do, 2010; Kelley, Noble, Luttrell, &
Yoon, 2012). To account for these various ill-defined and poorly-understood characteristics, a
general approach to model parameterization is often used (Imaizumi & Inoue, 1965). In this
approach, the models lump together all of these combined effects to form a loosely-defined
“floatability class.” In most flotation systems, the full distribution of floatability classes is
truncated to three colloquial distinctions: fast, slow, and non-floating. Most contemporary
flotation models include some form of distributed flotation classes, often via double or triple
distributed models which include size, composition, and floatability (Fichera & Chudacek,
1992, also see Chapter 3.3.1).
In general, when a continuous distribution of values is truncated to a finite number
of distribution classes, some of the information is lost and error is introduced. As the
distribution is truncated to fewer classes, the magnitude of the potential error increases.
Historically, the standard use of two or three floatability classes limits the degree of potential
error while providing meaningful values which can be estimated from the available data set.
Mathematically, the extent of the original data set defines the number of potential classes
which can be estimated. Occasionally, the lack of an extensive data set or the desire to make a
single point comparison leads practitioners to estimate the full distribution of rate constants
with a single rate constant that produces the same result. This approach is commonly
required when recovery data has only been collected at a single residence time.
For most physical properties, the calculation required to truncate a distribution of values
to a single value is trivial; however, the resultant value is often quite useful, despite the loss
of information. The single truncated value provides a simple means to compare two varying
distributions. Furthermore, the truncated value can be used to predict the average behavior
that the distribution will exhibit. As a common example, the mass mean particle size may
be used to truncate a full distribution of particle sizes to a single value. Similarly, an average
density may be determined to represent the apparent density that a particle composed of
many component densities will exhibit. In both of these cases, the calculation only entails a
simple weighted average.
Unfortunately, the mathematical nature of rate constants does not lend itself to
a simple compositing expression. For example, consider a two-component system which
contains 500 kg of material with a rate constant of 1.4 min−1 and 1,500 kg of material with a
rate constant of 0.2 min−1. The composited rate constant for this system should be a single
value which produces the same recovery as the two component system when utilized in the
reactor model. A simple weighted average shows that the combined system should exhibit
a rate constant of 0.5 min−1 ([500 × 1.4 + 1500 × 0.2]/[500 + 1500] = 0.5). However, the
recovery calculations for a batch reactor (at a residence time of 2 minutes, for example) do
not support this approach:
R_{Composited} =? R_{Distributed}

M_T (1 − e^{−k*τ}) =? M_1 (1 − e^{−k_1 τ}) + M_2 (1 − e^{−k_2 τ})

(2000)(1 − e^{−(0.5)(2)}) =? (500)(1 − e^{−(1.4)(2)}) + (1500)(1 − e^{−(0.2)(2)})

(2000)(0.632) =? (500)(0.939) + (1500)(0.330)

1264 =? 470 + 495

1264 ≠ 965
Several observations surface from this example. First, simple weighted averages are not
suitable for rate compositing estimation. Second, this example subtly shows that the true
composited rate must consider both the residence time and the reactor type, since these
values influence the equations used to determine recovery from a kinetic coefficient. Finally,
the math involved in this example establishes the framework for the derivation.
The remainder of this paper will work through the derivation and implications of ac-
curate rate compositing formulas. Expressions will be derived for the plug-flow, perfectly-
mixed, and axially-dispersed reactor models. Sample calculations are shown to demonstrate
the utilization and verification of the derived expression. Finally, composite optima and dis-
cretization error are presented as two practical applications of this rate compositing theory.
4.2 Derivation
In order to derive a general expression for a composite rate constant, several precise
definitions must first be established. The composite rate constant (k∗) for a set of data is
defined as the single rate constant which yields a recovery value (R*) identical to the combined
recovery produced by the full set of component rate constants (k_i), with each component having a known mass value (M_i).
From the example presented in the previous section, unique expressions for the composite
rate constant must be derived for each of the three reactor types. Also, the expressions must
have a time dependence.
Mathematically, R* is defined as the weighted average of the component recovery values:

R^{*} = \frac{R_1 M_1 + R_2 M_2 + \cdots + R_N M_N}{M_1 + M_2 + \cdots + M_N} = \frac{\sum_{i=1}^{N} R_i M_i}{\sum_{i=1}^{N} M_i}    (4.4)
where R_i is the recovery of particle class i, M_i is the mass fraction of particle class i, and
N is the total number of particle classes.
In order to derive the composite rate constant from the constituent rate constants, the
appropriate reactor-dependent recovery equation is substituted for R in Equation 4.4, and
by mathematical manipulation, the composite rate constant is solved in terms of the class
rate constants (k_i), the class mass fractions (M_i), and the test residence time (τ).
For a plug-flow reactor, Equation 4.1 is substituted into Equation 4.4 and solved for
k*_plug. The final relationship is given by:

k^{*}_{\mathrm{plug}} = -\ln\left[\frac{\sum_{i=1}^{N} M_i e^{-k_i \tau}}{\sum_{i=1}^{N} M_i}\right] \tau^{-1}.    (4.5)
This equation indicates that the apparent rate constant is dependent on the residence
time in which the compositing takes place. In the case of experimental data, this compositing
time is simply the residence time in which the test data was acquired.
A similar mathematical approach is extended to account for the other reactor types. By
substituting Equation 4.2 into Equation 4.4, the apparent rate constant for a perfectly mixed
reactor (k*_mixed) may be derived in a similar manner as Equation 4.5. This final relationship
is given by:

k^{*}_{\mathrm{mixed}} = \left(\frac{\left[\sum_{i=1}^{N} M_i\right]\left[\prod_{i=1}^{N}(1+k_i\tau)\right]}{\sum_{i=1}^{N}\left[\frac{M_i \prod_{j=1}^{N}(1+k_j\tau)}{1+k_i\tau}\right]} - 1\right) \tau^{-1}.    (4.6)
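Both closed forms are straightforward to evaluate numerically. The sketch below applies Equations 4.5 and 4.6 to the two-component example from the introduction (500 kg at 1.4 min⁻¹ and 1,500 kg at 0.2 min⁻¹, composited at τ = 2 min) and verifies that the plug-flow composite reproduces the distributed recovery, which the simple weighted average of 0.5 min⁻¹ could not.

```python
import math

def k_composite_plug(masses, rates, tau):
    """Composite plug-flow rate constant (Eq. 4.5)."""
    num = sum(m * math.exp(-k * tau) for m, k in zip(masses, rates))
    return -math.log(num / sum(masses)) / tau

def k_composite_mixed(masses, rates, tau):
    """Composite perfectly-mixed rate constant (Eq. 4.6)."""
    prod_all = math.prod(1.0 + k * tau for k in rates)
    den = sum(m * prod_all / (1.0 + k * tau) for m, k in zip(masses, rates))
    return (sum(masses) * prod_all / den - 1.0) / tau

masses, rates, tau = [500.0, 1500.0], [1.4, 0.2], 2.0
k_star = k_composite_plug(masses, rates, tau)

# Verification: the composite rate must reproduce the distributed recovery at tau.
r_distributed = sum(m * (1 - math.exp(-k * tau)) for m, k in zip(masses, rates)) / sum(masses)
r_composite = 1 - math.exp(-k_star * tau)
print(f"k*_plug = {k_star:.3f} 1/min, R_distributed = {r_distributed:.3f}, R_composite = {r_composite:.3f}")
print(f"k*_mixed = {k_composite_mixed(masses, rates, tau):.3f} 1/min")
```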
Given the complexity of the axially-dispersed reactor equation, an explicit analytical
expression for k∗ is not possible. Alternatively, Newton’s method may be used to solve the
system of equations numerically. The formulation of Newton’s method requires the equation
in question to be set equal to zero so that the roots may be determined. The derivative
of this function with respect to the variable in question (in this case, k*_ADR) must also be
known. For the axially-dispersed reactor model, the Newton’s method formulation is given
Table 4.1: Kinetic Data Used for Rate Compositing Examples
Floatability Mass Grade Rate
Class (%) (% CuFeS ) (1/min)
2
Fast 3 60 1.20
Slow 9 30 0.40
Non 88 1 0.01
TOTAL 100 5.38 –
Equations 4.9 - 4.16 are finally combined to define the full partial derivative of the
axially dispersed reactor model with respect to k:
\frac{\partial R_{ADR}}{\partial k} = \frac{-(R_{denA}+R_{denB})\frac{\partial R_{num}}{\partial k} - R_{num}\left[\frac{\partial R_{denA}}{\partial k} + \frac{\partial R_{denB}}{\partial k}\right]}{(R_{denA}+R_{denB})^{2}}    (4.17)
This partial derivative is then substituted into Equation 4.8 to form the final formulation
of Newton’s method. Equations 4.7 and 4.8 may then be solved iteratively to determine k*_ADR
for a given set of data.
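A numerical sketch of this composite is shown below. Rather than reproducing the Newton’s method formulation with the analytical derivative of Equation 4.17, it uses a bracketing root-finder on the defining condition (the composite ADR recovery must equal the mass-weighted distributed recovery), which yields the same value without the derivative. The data are the two-component example used earlier, and the Peclet number of 2 is an assumed illustrative value.

```python
import math
from scipy.optimize import brentq

def recovery_adr(k, tau, pe):
    """Axially-dispersed reactor recovery (Eq. 4.3)."""
    a = math.sqrt(1.0 + 4.0 * k * tau / pe)
    num = 4.0 * a * math.exp(pe / 2.0)
    den = (1.0 + a) ** 2 * math.exp(a * pe / 2.0) - (1.0 - a) ** 2 * math.exp(-a * pe / 2.0)
    return 1.0 - num / den

def k_composite_adr(masses, rates, tau, pe):
    """Composite ADR rate: the single k whose ADR recovery equals the distributed recovery."""
    r_target = sum(m * recovery_adr(k, tau, pe) for m, k in zip(masses, rates)) / sum(masses)
    return brentq(lambda k: recovery_adr(k, tau, pe) - r_target, 1e-9, 1e3)

masses, rates = [500.0, 1500.0], [1.4, 0.2]
print(f"k*_ADR (tau = 2 min, Pe = 2) = {k_composite_adr(masses, rates, 2.0, 2.0):.3f} 1/min")
```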
4.3 Sample Rate Compositing Calculations
To illustrate the usage of the compositing equations, a series of sample calculations
is provided. These examples show how a simple three floatability class flotation system
can be truncated to a single equivalent rate constant using the aforementioned expressions.
The data used for these examples is a chalcopyrite flotation system with the kinetic data
prescribed in Table 4.1.
To calculate the composite rate of chalcopyrite (CuFeS2) at a residence time of 6 min-
utes, the relative “units” of chalcopyrite (M_i) in the three rate classes must first be deter-
mined. These values are calculated by simply multiplying the mass fractions in the floata-
bility classes by the chalcopyrite grade for those classes. The resulting values are 180,
270, and 88 for the fast, slow, and non-floating classes, respectively. Once the units, rate
constants, and test residence time are known, Equation 4.5 may be used to determine the
Table 4.2: Theoretical Observable Optima For Rate Constant Composites
(sums and products run over the N particle classes)

Reactor              Minimum (τ = ∞)                                   Maximum (τ = 0)
Plug-Flow            min(k_i)                                          (Σ k_i M_i) / (Σ M_i)
Perfectly Mixed      [Σ M_i][Π k_i] / Σ (M_i Π_{j≠i} k_j)              (Σ k_i M_i) / (Σ M_i)
Axially Dispersed    Pe dependent                                      (Σ k_i M_i) / (Σ M_i)
while the ADR curve is typically bounded by the two. All reactor models converge to the
same maximum composite, while the minimum composite is reactor-dependent with the
perfectly mixed composite always being lower than the plug-flow composite. As shown
in Table 4.2, the maximum composite is defined as the weighted average of the component
rates, while the minimum for the plug-flow case is simply the minimum value for all rates.
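Evaluating these limits for the chalcopyrite data of Table 4.1 is a short exercise; the sketch below reproduces the mass-weighted maximum of 0.604 1/min that reappears in the discussion below, along with the reactor-dependent minima.

```python
import math

# Chalcopyrite "units" and rates from Table 4.1 (mass % x grade %).
masses = [3 * 60.0, 9 * 30.0, 88 * 1.0]        # 180, 270, 88
rates = [1.20, 0.40, 0.01]                     # 1/min

# Maximum composite (tau -> 0): mass-weighted average of the component rates.
k_max = sum(m * k for m, k in zip(masses, rates)) / sum(masses)

# Minimum composite (tau -> infinity): reactor dependent.
k_min_plug = min(rates)
prod_all = math.prod(rates)
k_min_mixed = sum(masses) * prod_all / sum(m * prod_all / k for m, k in zip(masses, rates))

print(f"maximum composite (any reactor): {k_max:.3f} 1/min")
print(f"minimum composite, plug-flow:    {k_min_plug:.3f} 1/min")
print(f"minimum composite, mixed:        {k_min_mixed:.4f} 1/min")
```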
4.4.2 Discussion
Though the compositing formulas are grounded in abstract theory, several practical
implications of this curve can be derived. The sample data set used to build this curve
includes reasonable values for most flotation systems. The rate composite transition curve
(Figure 4.1) shows that the steepest transition for all reactor types occurs between residence
times of 1 minute and 15 minutes. Unfortunately, most typical flotation cells operate within
this range. As a result, small deviations in residence time at the test condition will lead
to proportionally large changes in the apparent rate constant. To fully account for uncer-
tainty, projections and simulations from this apparent rate constant must consider this steep
transition.
With respect to the rate constant measurements, the only component rate constant
that can be directly derived from the test data is the slow rate constant in the plug-flow
reactor. This measurement requires recovery information at relatively long residence times.
While Figure 4.1 shows the close approach to the asymptote occurring at 90 to 100 minutes
of residence time, this value is strongly dependent on the specific component data. For the
perfectly-mixed reactor, the apparent rate constant is always influenced by remnant fast
floating material. Consequently, the apparent slow rate at very long residence times will
always over-predict the true rate of the slow-floating component.
Predictions for the fast-floating rate have a similar limitation. Even at in-
finitesimally small residence times, the apparent rate constant is influenced by the presence
of slow-floating material. Without the information on the full distribution, the true fast
floating rate cannot be directly calculated with data from any reactor type. For example,
the data used to derive Figure 4.1 indicates that the true fast rate is 1.2; however, the great-
est directly measurable rate constant is only 0.604. Furthermore, at practically measurable
residence times (30 seconds to 1 minute), the apparent rate constant is already within the
transition phase. To better illustrate the practical implications of rate compositing, the rate
transition curve is reproduced in Figure 4.2 with a linear time axis and a practical range of
measurable residence time values.
4.5 Discretization Error
4.5.1 Application
As a second application of the rate compositing theory, the apparent rate equations allow
the determination of discretization error in simulations. When data is gathered from pilot
or full-scale testing, the recovery data is often not time-dependent. As a result, only a single
rate constant can be determined, rather than the real distribution of rate constants which
were combined to form the apparent rate. By definition, the measured rate is the composite
rate as determined from Equations 4.5, 4.6, and 4.7. Future projections or simulations which
use this rate will deviate from the real behavior as the simulated residence time deviates from
the residence time in which the data was collected. In order to demonstrate this application,
the data from the prior example was extended to include rate data for a gangue particle
class (Table 4.3).
Figure 4.3 illustrates this discretization error principle, assuming a perfectly-mixed re-
actor. In this example, the “Distributed Rate” curve represents the real behavior that is
determined from the full distribution of rate constants; whereas, the “Composite Rate” curve
represents the behavior derived from the single composite rate. From a practical standpoint,
this single rate constant would be the experimental value derived from pilot-scale or full-scale
testing. For this example, the rate data was composited at a residence time of six minutes
which would reflect experimental data taken at a mean residence time of six minutes. As
anticipated, the two curves overlap at this point, but as the residence time deviates from the
composite time, the discretization error increases rapidly.
The single component, single reactor example shown in Figure 4.3 is extended to in-
clude the gangue component and other reactor types. This example demonstrates not only
how rate constant compositing influences recovery of a second component but also how the
procedure influences grade projections. Figure 4.4 shows the distributed and composited
rate projections for copper recovery, gangue recovery, and copper grade for each of the three
reactor types. The axially-dispersed reactor was calculated for a Peclet number of 2. As in
the prior example, the “Distributed Rate” curve reflects the three rate data (fast, slow, and
non-floating rate constants), while the “Composite Rate” curve reflects projections from a
single rate constant which is the composite of the data at a residence time of six minutes.
Figure 4.5 presents this same data as a percent error between the two curves, assuming
the Distributed Rate curve represents the “true” values. Positive error reflects overestimates
relative to the distributed curve, while negative errors represent underestimates relative to the
distributed curve.
4.5.2 Discussion
Though this data reflects one specific case, the behavior of the plots reveals several
general trends. First, for residence times lower than the composite time, the composite
curve always under-predicts the true recovery, regardless of the reactor type or the relative
magnitude of the rate constant values. Conversely, for residence times greater than the
composite time, the composite rate always over-predicts the recovery. This result coincides
with logical expectations. The composite rate corresponds to a snapshot at a single point in
time. The apparent rate at this snapshot reflects a specific mixture of fast and slow floating
material. In the real system, the recovery beyond this residence time will begin to curtail
because the fast floating material is being removed from the system at a faster rate than the
slow-floating material. Alternatively, projections from the composite rate assume the same mixture
of fast and slow material for all residence times, with the assumed mixture being equal
to the mixture that was present at the composite time. For residence times beyond the
composite time, the projection assumes a greater portion of fast floating material than the
true distribution in the real system. As a result, the projection always over-predicts real
recovery.
Second, the magnitude of the over or under-prediction is dependent upon the relative
magnitude of the original rate data. In this example, the gangue recovery is much more
susceptible to over-prediction than the copper recovery. Also, the original rate data for the
gangue components are roughly one order of magnitude lower than the original rate data
for the chalcopyrite. For example, Figure 4.5 shows that at a residence time of 20 minutes
in a plug-flow reactor, the gangue recovery error is approximately 70%, while the copper
recovery error is only 15%. Once again, this result coincides with logical expectation. The
relatively high rate values for the chalcopyrite components indicate that the recovery is likely
on the flat portion of the kinetic curve. In this region, small changes in the kτ value do not
correspond to large changes in recovery. Alternatively, the low rate values for the gangue
components likely indicate that the recovery is on the steep portion of the kinetic curve, where
small changes in kτ correspond to large changes in recovery. Since the copper recovery values
are bounded by the upper recovery limit, over-predictions should show diminishing error as
the residence time is increased.
The difference in error magnitude is further supported by the zero rate constant for the
non-floating gangue class. In the distributed system, the observable recovery will eventually
reach a limit since some portion of the material is truly non-floatable. The distributed
rate data for gangue recovery in Figure 4.4 shows this behavior for all three reactor types.
However, in the composited data set, this non-floatable class is assumed to float at the single
composite value. Thus the composite rate does not account for this truly non-floatable
material, leading to further deviation in the overestimation.
One notable case where this principle is especially important is in plant modification.
A common problem for flotation circuit designers is adding additional residence time to an
existing rougher bank. If the data set used to design the rougher bank only reflects one
residence time (e.g. the recovery and grade from the existing rougher bank), the projection
will always overestimate the expected recovery and underestimate the expected grade. To
alleviate this situation and minimize the discretization error associated with the projection,
data from multiple residence times (e.g. batch flotation) should be collected to ascertain
more elements of the floatability distribution.
4.6 Summary and Conclusions
This paper has presented the derivation of several rate constant compositing formulas.
While particles of similar size and composition are known to exhibit a distribution of rate
constants, the truncation of this distribution is often desired to form simple comparisons or
is mandated when the available data is not sufficient to derive the full distribution. Unlike
other physical properties, rate constants cannot be composited by simple weighted averages.
Instead, time-dependent and reactor-specific equations must be used to determine the appar-
ent rate constant that yields the same recovery as the sum of all component rate constants.
For a plug-flow reactor, the composite rate constant (k∗) is given by:
\[
k^{*}_{\text{plug}} = -\ln\!\left[\frac{\sum_{i=1}^{N} M_i\, e^{-k_i \tau}}{\sum_{i=1}^{N} M_i}\right]\tau^{-1}
\]
while in a perfectly-mixed reactor, the composite rate constant is given by:
\[
k^{*}_{\text{mixed}} = \left(\frac{\left[\sum_{i=1}^{N} M_i\right]\left[\prod_{i=1}^{N}\left(1+k_i\tau\right)\right]}{\sum_{i=1}^{N}\left[\dfrac{M_i\prod_{j=1}^{N}\left(1+k_j\tau\right)}{1+k_i\tau}\right]}-1\right)\tau^{-1}.
\]
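A minimal computational sketch of the two closed-form expressions is given below; the function names and the component values in the example call are illustrative assumptions, and the mixed-reactor form is written after cancelling the common product term from the numerator and denominator:

```python
import math

def k_composite_plug(masses, rates, tau):
    """Composite rate constant k*_plug for a plug-flow reactor."""
    num = sum(m * math.exp(-k * tau) for m, k in zip(masses, rates))
    return -math.log(num / sum(masses)) / tau

def k_composite_mixed(masses, rates, tau):
    """Composite rate constant k*_mixed for a single perfectly mixed reactor."""
    # Equivalent to the product form above once prod(1 + k_j*tau) is cancelled top and bottom.
    unrecovered = sum(m / (1.0 + k * tau) for m, k in zip(masses, rates))
    return (sum(masses) / unrecovered - 1.0) / tau

# Illustrative two-component example (assumed values, not measured data).
masses, rates, tau = [0.7, 0.3], [1.0, 0.05], 20.0
print(k_composite_plug(masses, rates, tau))    # composite rate for plug flow
print(k_composite_mixed(masses, rates, tau))   # composite rate for a single mixed cell
```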
The axially-dispersed reactor model is too complicated to yield an analytical expression
for k∗. Rather, a numerical procedure using Newton’s method has been described.
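The numerical procedure can be sketched as follows; because the axially-dispersed model itself is not restated here, this sketch assumes the familiar Levenspiel closed-vessel recovery expression as a stand-in, and the Peclet number, starting guess, and component values are all illustrative assumptions:

```python
import math

def dispersed_recovery(k, tau, pe):
    """First-order recovery in an axially-dispersed reactor (Levenspiel closed-vessel form, assumed)."""
    a = math.sqrt(1.0 + 4.0 * k * tau / pe)
    num = 4.0 * a * math.exp(pe / 2.0)
    den = (1.0 + a) ** 2 * math.exp(a * pe / 2.0) - (1.0 - a) ** 2 * math.exp(-a * pe / 2.0)
    return 1.0 - num / den

def k_composite_dispersed(masses, rates, tau, pe, k_guess=0.1, tol=1e-10, max_iter=50):
    """Newton's method: find the single k* whose recovery matches the mass-weighted distributed recovery."""
    target = sum(m * dispersed_recovery(k, tau, pe) for m, k in zip(masses, rates)) / sum(masses)
    k = k_guess
    for _ in range(max_iter):
        f = dispersed_recovery(k, tau, pe) - target
        h = 1e-6 * max(k, 1.0)   # step for a central-difference derivative
        df = (dispersed_recovery(k + h, tau, pe) - dispersed_recovery(k - h, tau, pe)) / (2.0 * h)
        step = f / df
        k -= step
        if abs(step) < tol:
            break
    return k

# Illustrative call: two components, Peclet number of 2 (all values assumed).
print(k_composite_dispersed([0.7, 0.3], [1.0, 0.05], tau=20.0, pe=2.0))
```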
From this investigation, three key conclusions are derived:
1. All three rate compositing formulas are time dependent. The resulting functions pro-
duce semi-log transitions as the composite rate constant varies through a continuum
of residence times.
2. The maximum observable rate constant at an infinitesimally small residence time is the simple mass-weighted average of the component rate constants (a brief check of this limit is given after this list). The minimum observable rate constant at infinitely long residence times is reactor dependent, but is only equal to the minimum component rate constant in the plug-flow reactor.
3. The composite rate constant formulas may be used to quantify discretization error
when a distribution of rate constants is truncated to a single value by single-residence
time experimental testing. In all cases, projections beyond the test residence time
show an over-prediction of recovery, while projections lower than the test residence
time always show an under-prediction of recovery.
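The small-time limit noted in the second conclusion can be verified directly from the plug-flow formula by a first-order expansion (shown here as a brief check; the perfectly-mixed case proceeds analogously):

\[
e^{-k_i\tau} \approx 1 - k_i\tau \;\Rightarrow\; k^{*}_{\text{plug}} \approx -\frac{1}{\tau}\ln\!\left(1 - \tau\,\frac{\sum_{i} M_i k_i}{\sum_{i} M_i}\right) \;\longrightarrow\; \frac{\sum_{i} M_i k_i}{\sum_{i} M_i} \quad \text{as } \tau \to 0.
\]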
The utility of these equations may also be extended to other data fitting and rate
comparison analyses. While these examples have only used two- or three-component systems, the formulation of the equations extends to any number of components.
Chapter 5
An Algorithm for Analytical Solutions and Analysis of Mineral Processing Circuits
(ABSTRACT)
Traditional simulations of mineral processing circuits are solved by straightforward nu-
merical techniques which require iteration to accommodate recirculating loads. Depending
on the complexity of the simulated circuit, this solution technique can be inexact, computa-
tionally intensive, and potentially unstable. In this communication, an alternate calculation
approach is presented, wherein an exact analytical solution is determined as a function of the individual units’ separation probabilities. All of the stream values, including recirculating loads, may be solved for simultaneously, eliminating the need for iteration. Furthermore, with a
symbolic solution available, linear circuit analysis may then be used to diagnose the relative
separation potential of the circuit. By integrating these tools, the authors have developed
a software package for evaluating circuit configurations. This paper presents the theory, development, and limitations of the software’s methodology, along with examples which highlight the tool’s applicability to industrial circuits.
5.1 Introduction
The ultimate goal of all mineral and coal processing operations is the separation of valuable components from the non-valuable. Regardless of the sophistication or complexity, all
circuit configurations proposed during the design phase, computer simulation may not be a
viable option to compare each alternative.
Finally, analytical circuit evaluation represents a more balanced trade-off between re-
quired resources (data and time) and value gained. Unfortunately, these methods are often overlooked due to the cumbersome mathematics required for multi-unit configurations and the perceived inapplicability of the assumptions invoked.
When designing a process circuit, the balance between the aforementioned tools is
crucial. Each tool serves a specific purpose, and if utilized inappropriately, it may produce erroneous predictions. For example, simulations and circuit analysis
are best implemented under the critical direction of experienced personnel. If simulations are
“blindly” conducted or do not reflect empirically observed limitations, the reliability of the
results may be substantially compromised. Therefore, the best approach to circuit design is
to utilize each of the three tools in their own context, while acknowledging the merits and
weaknesses of each.
This communication presents a refined approach to an analytical procedure originally
described by Meloy (1983). The concept, generically coined linear circuit analysis, draws
upon a simple mathematical approach to binary separators. By using these concepts to
determine an algebraic solution to the circuit streams, mathematical indicators may be
determined and used to compare circuit designs. This paper will provide a general review
of circuit analysis and the underlying theory. Next, the details of the current refinements
and the development of a circuit analysis software package will be described. Finally, the software’s utility will be demonstrated within the context of an industrial application.
5.2 Theory
5.2.1 Partition Curves
The primary purpose of circuit implementation is to overcome the inherent imperfec-
tion of single-stage separators. Consequently, any analytical circuit evaluation technique
must account for the reduction of these imperfections in various circuit configurations; thus,
the imperfections must somehow be mathematically defined. In the past, several researchers
have used partition functions to generically model various separation processes (King, 2001).
Partition functions rely on the premise that a simple separator receives feed which is characterized by individual particles having a given distribution of a specified property (e.g.,