The newly proposed method will be programmed in C# so that it can be applied
to a realistic case study to demonstrate its applicability. To define the optimally designed
phases of a given project using mathematical modeling, the following tasks are undertaken:
• Investigate the block aggregation technique known as the “Fundamental Tree”
algorithm to determine if it can be used for phase design optimization (Ramazan,
2001).
• Formulate the MILP model of the phase design optimization problem by taking into
account the mining capacity, multiple process capacities, and the blending
requirements.
• Investigate various solution strategies in developing an algorithm that reduces the
solution time of the large MILP model. The strategies to be investigated are as
follows:
o Applicability of the revised maximum network flow formulation and
bounding algorithms.
o Lagrangian relaxation.
o Constraint aggregation techniques.
o Strengthened sequencing constraints.
o Iterative algorithm based on relaxed LP formulation and solutions.
• Implement the algorithm and the strategies by developing software written in C#.
• Apply the developed software to a realistic case study to demonstrate its
applicability.
• Perform an economic comparison of phase designs obtained by using the proposed
method with the phase designs obtained through commercially available mining
software.
1.2. Contents of this Thesis
Chapter 2 reviews open pit mine planning methods that have been used in
industry. The shortcomings of traditional phase design methods are described in this
chapter.
Chapter 3 reviews the previous work related to the open pit production scheduling
and open pit phase design. The techniques discussed in this chapter are exact
optimization, block aggregation, heuristic optimization, and phase design optimization.
Chapter 4 provides the mathematical formulations of the phase design problem
used in this thesis.
Chapter 5 reviews the maximum flow algorithm to solve the ultimate pit limit
problem. The algorithm is revised in this thesis to demonstrate its capability to solve the
ultimate pit limit problem. The revised maximum flow algorithm is compared to the
Lerchs-Grossmann algorithm offered by available commercial mining software packages.
Chapter 6 introduces a new open pit phase design algorithm. The steps of
the algorithm, the algorithm flowcharts, and a small example are included in this chapter.
Chapter 7 demonstrates the application of the new phase design algorithm.
Production schedules obtained by using the traditional phase design method and the new
phase design algorithm are compared for several mining projects.
Chapter 8 includes conclusions and suggestions for future work.
CHAPTER 2
REVIEW OF OPEN PIT MINE PLANNING METHODS
2.1. Background
After the discovery of a mineral deposit, its tonnage and grade are determined by
a sampling process. Using exploration data, the mineral deposit is located and outlined.
If an outline of a massive deposit is potentially mineable as an open pit, a block model is
developed to represent the deposit, as shown in Figure 2.1. The size of a block for a
typical open pit mine varies depending on geology and mining method. The size of the
blocks is generally considerably smaller than the drill hole spacing. The grade of each
block in the model is estimated using one of the estimation techniques such as distance
weighting or Kriging. The value of each block is calculated based on the resource
grades, e.g. gold (oz/ton), in that block using the economic parameters such as
commodity price, mining cost, processing cost, and the recovery for each resource to be
extracted.
Figure 2.1 Deposit represented by a 3D block model
The most common approach to the long term open pit mine planning problem is
dividing it into sub-problems similar to that shown in Figure 2.2. The approach starts
with assumptions about initial production capacities in the mining system and estimates
for related costs and commodity prices. Then, the ultimate pit limit is determined based
on the economic block values.
Figure 2.2 Long term open pit mine production planning variables (mining and milling capacities, production costs, and production scheduling) interacting in a circular fashion (Dagdelen, 1985)
An ultimate pit limit containing the set of blocks which has the maximum total
dollar value for a given block model can be found using current computer optimization
techniques. These techniques are based on the 3D Lerchs-Grossmann (Lerchs &
Grossmann, 1965) and Johnson's network flow (Johnson, 1968) methods. Both methods
guarantee finding the optimum pit in three dimensions (3D) regardless of block height,
width, and length proportions and give the ultimate pit limit which maximizes the value
of the identified set of ore and waste blocks that can be mined at proper slope angles
(Barnes, 1980).
After the ultimate pit limits are defined, the development and design of phases or
pushbacks that will be mined during the progression of the pit is the next crucial step in
long term open pit mine planning as shown in Figure 2.3. The phase designs serve as the
basis from which to obtain Life-of-Mine (LOM) plans and the schedules that define the
future cash flows of a given project. The phases within the ultimate pit limits are
traditionally obtained by generating successively larger ultimate pit limits based on
economic block values that are calculated by using successively increasing prices. When
the commodity prices decrease or increase, the pit size can be decreased or enlarged,
respectively. This traditional price parameterization approach is based on determining
the ultimate pit limits by using Lerchs-Grossmann's (LG) method and has been used as a
phase design method for decades.
Figure 2.3 Cross-sectional view showing the ultimate pit limits and design phases A-F
(Surface mine design class notes, Dagdelen).
After phases are designed, the next step is to find the production schedule that
optimizes the cutoff grade and maximizes the net present value (NPV) of a given project.
2.2. Traditional Phase Design Methodology in Open Pit Mining
Once the ultimate pit limit is determined, the phases within the ultimate pit have
been traditionally obtained by finding the smaller ultimate pit limits based on reduced
block economic values, i.e., price parameterization. The traditional price parameterization
is based on the repeated use of the Lerchs-Grossmann method to find a series of ultimate pit
limits on modified economic block models. The pit size can be decreased or enlarged by
changing the commodity prices in calculating the block economic values. Phases are
determined using a series of nested pits.
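A rough sketch of this nested-pit logic is given below; it is an illustration only, not the thesis implementation, and solve_ultimate_pit and block_value are hypothetical helpers standing in for any ultimate pit solver (LG or maximum flow) and for the block valuation at a given metal price.

# Illustrative sketch of traditional price parameterization (nested pits).
# solve_ultimate_pit() and block_value() are hypothetical placeholders, not
# routines from this thesis: the first stands for any ultimate pit solver
# (Lerchs-Grossmann or maximum flow), the second for block valuation.

def nested_pits(grades, prices, block_value, solve_ultimate_pit):
    """Return one pit (a set of block ids) per metal price, lowest price first."""
    pits = []
    for price in sorted(prices):
        values = {b: block_value(g, price) for b, g in grades.items()}
        pits.append(solve_ultimate_pit(values))
    return pits

def pits_to_phases(pits):
    """Phase k holds the blocks of pit k that are not in any lower-price pit."""
    phases, mined = [], set()
    for pit in pits:
        phases.append(set(pit) - mined)
        mined |= set(pit)
    return phases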
A small 2-D block model representation is provided to demonstrate the traditional
method of obtaining phases. As shown in Figure 2.4, the numbers in the blocks are gold
grades (oz/ton). In this cross sectional example, there are 3 rows and 9 columns.
1 2 3 4 5 6 7 8 9
1 0.00 0.00 0.01 0.01 0.00 0.00 0.05 0.05 0.00
2 0.00 0.09 0.07 0.00 0.00 0.00 0.07 0.07 0.00
3 0.00 0.00 0.00 0.02 0.03 0.00 0.00 0.00 0.00
Figure 2.4 2D view of a deposit showing gold grade (oz/ton)
As for the regular ultimate pit limit problem, the physical slope requirements must be met;
for this example, let us assume a 45° pit slope angle. For example, in order to mine block [2,2],
blocks [1,1], [1,2], and [1,3] must be removed. To calculate the economic block values,
assume a mining cost of $2/ton of material mined, a processing cost of $8/ton, and a metallurgical
recovery of 90%. Also assume that each block contains one ton of material. At $1,000/oz
for the price of gold, the economic block values are calculated as shown in Figure 2.5. The
set of economically minable blocks providing the highest total dollar value on the cross
section is shaded gray and is defined as the ultimate pit limits. The ultimate pit limit is
determined by using the LG method with a total value of $307.
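These block values are easy to reproduce. The short Python script below recomputes them from the grades of Figure 2.4; the max() against the waste value reflects the assumption, consistent with Figures 2.5 through 2.7, that a block whose ore value falls below its waste value is simply treated as waste.

# Economic block values for the 2-D example: one-ton blocks, $2/ton mining cost,
# $8/ton processing cost, 90% recovery. A block is treated as ore only when
# processing it pays better than sending it to the waste dump.
MINING_COST, PROCESSING_COST, RECOVERY = 2.0, 8.0, 0.90

def block_value(grade_oz_per_ton, gold_price):
    ore_value = grade_oz_per_ton * gold_price * RECOVERY - PROCESSING_COST - MINING_COST
    waste_value = -MINING_COST
    return max(ore_value, waste_value)

grades = [  # Figure 2.4: rows 1-3, columns 1-9, gold grade in oz/ton
    [0.00, 0.00, 0.01, 0.01, 0.00, 0.00, 0.05, 0.05, 0.00],
    [0.00, 0.09, 0.07, 0.00, 0.00, 0.00, 0.07, 0.07, 0.00],
    [0.00, 0.00, 0.00, 0.02, 0.03, 0.00, 0.00, 0.00, 0.00],
]

for price in (1000, 500, 200):  # reproduces Figures 2.5, 2.7, and 2.6
    print(f"Gold price ${price}/oz")
    for row in grades:
        print(" ".join(f"{block_value(g, price):6.2f}" for g in row))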
In the traditional price parameterization phase design approach, the lower gold price
pits are always within the higher gold price pits, i.e., within the nested pits. The main idea
of this approach is that the pit that is profitably mined at the lower commodity price must
be profitably mined at the higher commodity price. For this example, the gold price is
reduced to $200/oz and $500/oz, respectively, in order to find a set of smaller pit shells
inside the ultimate pit limit found at $1,000/oz for the price of gold.
The economic block values recalculated at $200/oz for the price of gold are shown
in Figure 2.6. The economic pit limit is also highlighted on the same figure. The ultimate
pit limit obtained by using the LG method applied to the modified economic block values at
$200/oz for the price of gold is a much smaller pit, as shown in Figure 2.6.
1 2 3 4 5 6 7 8 9
1 -2.00 -2.00 -1.00 -1.00 -2.00 -2.00 35.00 35.00 -2.00
2 -2.00 71.00 53.00 -2.00 -2.00 -2.00 53.00 53.00 -2.00
3 -2.00 -2.00 -2.00 8.00 17.00 -2.00 -2.00 -2.00 -2.00
Figure 2.5 2D view of an ultimate pit limit shaded gray showing economic block values
($) at $1,000/oz for the price of gold.
1 2 3 4 5 6 7 8 9
1 -2.00 -2.00 -2.00 -2.00 -2.00 -2.00 -1.00 -1.00 -2.00
2 -2.00 6.20 2.60 -2.00 -2.00 -2.00 2.60 2.60 -2.00
3 -2.00 -2.00 -2.00 -2.00 -2.00 -2.00 -2.00 -2.00 -2.00
Figure 2.6 2D view of an ultimate pit limit (shaded gray) obtained on the economic block
values calculated at $200/oz for the price of gold
At $500/oz for the price of gold, the economic block values are recalculated and are
shown in Figure 2.7. The economic pit limit is also highlighted on the same figure. The
ultimate pit limit obtained by using the LG method applied to the modified economic block
values at $500/oz for the price of gold is larger than the pit at $200/oz for the price of gold
but is still smaller than the ultimate pit limit at $1,000/oz for the price of gold.
1 -2.00 -2.00 -2.00 -2.00 -2.00 -2.00 12.50 12.50 -2.00
2 -2.00 30.50 21.50 -2.00 -2.00 -2.00 21.50 21.50 -2.00
3 -2.00 -2.00 -2.00 -1.00 3.50 -2.00 -2.00 -2.00 -2.00
Figure 2.7 2D view of an ultimate pit limit (shaded gray) obtained on the economic block
values calculated at $500/oz for the price of gold
As such, the first phase within the ultimate pit may be obtained by using the lowest
gold price and the next phase may be obtained by using a higher gold price. For this
small 2-D example, the smaller pit obtained at $200/oz for the price of gold is defined
as phase 1. The blocks within the $500/oz pit that are not in the $200/oz pit are defined
as phase 2. The blocks within the ultimate pit that are not in the $500/oz pit are defined
as phase 3. The three phases are shown in Figure 2.8. Phases 1, 2, and 3 have values of
$118, $172, and $17, respectively.
Figure 2.8 2D view of a pit showing phase 1 (black), phase 2 (dark gray), and phase 3
(gray) obtained from price parameterization using the gold prices of $200/oz, $500/oz,
and $1,000/oz, respectively
2.3. Traditional Production Scheduling
Once these three phases are obtained, the next step would be to generate the yearly
production schedules. Let us simply assume that the annual mining capacity is 6 blocks
(tons). The schedule that mines 6 blocks per year by mining phase 1 in the first year, then
phase 2 in the second year, and lastly phase 3 in the third year is given below. Note that block
location is shown in [row, column].
Table 2.1 Production schedule based on traditional price parameterization phase design
Year | Mined blocks | Cash flow ($) | NPV of cash flows ($, 15% discount)
1 | [1,1], [1,2], [1,3], [1,4], [2,2], [2,3] | 118.00 | 102.61
2 | [1,6], [1,7], [1,8], [1,9], [2,7], [2,8] | 172.00 | 130.06
3 | [1,5], [2,4], [2,5], [2,6], [3,4], [3,5] | 17.00 | 11.18
Total | | 307.00 | 243.84
Using a 15% discount rate, the NPV of this example is $243.84 when the
production schedule relies on the traditional phase design.
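The NPV in Table 2.1 can be verified with a few lines (end-of-year cash flows discounted at 15%):

# NPV of the traditional schedule in Table 2.1 at a 15% discount rate.
cash_flows = {1: 118.00, 2: 172.00, 3: 17.00}   # year: cash flow ($)
rate = 0.15
npv = sum(cf / (1 + rate) ** year for year, cf in cash_flows.items())
print(round(npv, 2))  # 243.84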
2.4. Shortcomings of Traditional Phase Design Methods
From the traditional phase design, the values of phases 1, 2, and 3 are $118, $172,
and $17, respectively. It is obvious for this small 2-D block model that phases 1 and 2,
obtained by using the traditional phase design method, are not nested. Therefore, phase 2,
which has the highest value, can be mined before phase 1 is mined.
Since a discount rate is used to calculate the NPV of the cash flows obtained from
the schedule, the early cash flows are more important than the later ones. As such, it makes
more sense to mine the combination of blocks that provides the highest cash flow in the early
time periods. If phases 1 and 2 obtained in the traditional phase design are switched, the
new phase design leads to a production schedule that gives a higher NPV. The new phase
design is then optimal, as shown in Figure 2.9.
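As a quick illustrative check (the figures below are computed here, not quoted from the thesis), swapping the cash flows of phases 1 and 2 in the schedule raises the NPV from $243.84 to roughly $249.97:

# Phase 2 is spatially independent of phase 1, so it can be mined first.
rate = 0.15
npv = lambda flows: sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(flows))
print(round(npv([118.00, 172.00, 17.00]), 2))  # 243.84  (phase 1 first)
print(round(npv([172.00, 118.00, 17.00]), 2))  # 249.97  (phase 2 first)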
CHAPTER 3
PREVIOUS WORK
3.1. Introduction
In this chapter, we review and investigate the previous work on phase design and
production scheduling. There are three sets of techniques. The first set is the exact
optimization based on formulating a mathematical model. Most exact optimization
techniques are summarized from production scheduling problems published by many
researchers in this area. The problem with exact optimization approaches is problem
size. Such problems are so large that direct solutions by present integer programming
algorithms on recently available computers are impossible. Therefore, the purpose of
most research is either to reduce the solution time of production scheduling problem or
problem size such that it can be applied to large open pit mines.
The second set consists of block aggregation techniques. These techniques are
used to reduce the number of variables (problem size) by aggregating blocks and scheduling
the aggregate units using an MILP model.
The last set consists of heuristic optimization techniques which are currently being
used in the mining industry. These techniques used for open pit production scheduling
problems are based on the concept of parameterization (nested pits), a period-by-period
basis, and the MILP approach based on bench-phases. The current phase design methods
are only based on the concept of parameterization (nested pits).
3.2. Exact Optimization Techniques
During the production stage, there are three decisions to make. These are: to mine
or not mine a block of material; when to mine a block if it is to be mined; once it is mined
what to do with it (i.e., whether or not to process it). Dagdelen (1985) and Johnson (1968)
formulate the long-term production scheduling problem as a mathematical model to define
those distinct sets of actions. The mathematical model based on integer programming is
the exact optimization approach. The variables and terms of the model are defined as follows.
Let $X_n^{mt}$ denote the fraction of block n to be mined as material type m in time period t,
where m = 1 for ore and m = 2 for waste (m can take more than two values for multiple processing destinations):
$X_n^{mt} = 1$ if block n is mined as material type m in time period t,
$X_n^{mt} = 0$ otherwise.
Since the objective of the model is to maximize the NPV of revenues, the decision
must consider the value of the block for each time period of the scheduling duration with
respect to its material type.
Let $C_n^{mt}$ be the coefficient (which considers the time value of money) in the objective
function for the variable representing block n for material type m in time period t.
The objective function is:
Maximize $Z = \sum_{t=1}^{T}\sum_{m=1}^{M}\sum_{n=1}^{N} C_n^{mt} X_n^{mt}$
where: n = 1, ..., N; N is the total number of blocks in the deposit
m = 1, ..., M; M is the number of material types
t = 1, ..., T; T is the total number of time periods
Sequencing constraints for a given block n are written over the set $\Gamma_n$, where
$k \in \Gamma_n$ denotes the blocks in the cone of block n. If block n is mined, all the blocks
in its cone must be mined:
$\sum_{\tau=1}^{t}\sum_{m=1}^{M} X_k^{m\tau} - \sum_{\tau=1}^{t}\sum_{m=1}^{M} X_n^{m\tau} \ge 0 \quad \forall\, k \in \Gamma_n,\ \forall\, t$
Available block volume constraints (a given block can only be mined once, either
as ore or waste) can be expressed as:
$\sum_{m=1}^{M}\sum_{t=1}^{T} X_n^{mt} \le 1 \quad \forall\, n$
Let $\underline{O}_{mt}$ and $\overline{O}_{mt}$ be the lower and upper bound capacity requirements of the
destination m at time period t, respectively. Also let $a_n$ be the volume of material in block
n. The lower bound and upper bound constraints for the destination capacity of material m
in time period t can be expressed as follows:
$\underline{O}_{mt} \le \sum_{n} a_n X_n^{mt} \le \overline{O}_{mt} \quad \forall\, m \text{ and } t$
For the feed grade requirements, let $g_n$ be the grade assigned to block n. Also let
$\underline{G}_{mt}$ and $\overline{G}_{mt}$ be the lower and upper bound grade requirements at the destination
(processing) to which material m is sent in time period t. The constraints for the grade
requirements can then be expressed as follows:
$\underline{G}_{mt} \sum_{n} a_n X_n^{mt} \le \sum_{n} g_n a_n X_n^{mt} \le \overline{G}_{mt} \sum_{n} a_n X_n^{mt} \quad \forall\, m \text{ and } t$
This mathematical formulation for the multi-time period production scheduling
problem is difficult to solve. Dagdelen (1985) applies the Lagrangian and subgradient
methods to solve the multi-time period scheduling problem considering different types of
materials in his formulation. Using the Lagrangian method, he decomposes the complex
multi-time period problem into smaller single time period problems that can be solved
using optimum final pit design methods such as the maximum flow algorithm. He applies
the sub-gradient method to find the Lagrangian parameters in his mathematical
formulation. He also states that the Lagrangian method might not always converge to an
optimal solution, if the multipliers that result in a feasible solution for some constraint of
the problem do not exist.
Akaike and Dagdelen (1999) convert the integer programming formulation of the
multi-period production scheduling problem into networks by using Lagrangian relaxation.
The production capacity constraint is moved to the objective function, creating a long-term
production scheduling problem with the same characteristics as the final pit design
problem. This relaxed problem can then be solved by using the Lerchs and Grossmann (LG)
algorithm as an ultimate pit limit problem.
Gaupp (2008) formulates the production scheduling problem as the MILP model
and uses methodologies to expedite the solution times for instances of this block
sequencing problem. He suggests three approaches to make the model more tractable: 1)
using deterministic variable reduction techniques (implementing earliest starts, latest starts)
to eliminate blocks from consideration in the model; 2) producing cuts that strengthen the
model's formulation; and 3) using Lagrangian relaxation techniques. Using the three
techniques suggested, an optimal (or near-optimal) solution is achieved more quickly than by
solving the monolith (original problem). He also obtains feasible solutions where others
did not. For the Lagrangian relaxation techniques, he concludes that dualizing only one
constraint works best.
Bienstock and Zuckerberg (2010) describe an algorithm for solving linear
programming relaxations of the precedence constrained production scheduling problem
(PCPSP). They prove that any instance of PCPSP can be reduced to an equivalent instance
of the general precedence constrained problem (GPCP) with the same number of variables
and constraints. Lagrangian relaxation can provide early information on which
constraints from among those that were dualized are likely to be tight, and regarding which
variables are likely to be nonzero, even if the actual numerical values for primal or dual
variables computed by the relaxation are inaccurate. The information obtained from the
relaxation is used to solve a restricted LP problem with some additional constraints used to
impose the desired structure. They use the duals for constraints obtained in the solution to
the restricted LP to restart the Lagrangian procedure which should result in accelerated
convergence (faster than traditional Lagrangian relaxation schemes such as subgradient
method). Their algorithms prove effective on LP relaxation problems with many millions
of variables and constraints, obtaining provably optimal solutions in a few minutes of
computation. However, they do not yet have a proof of fast convergence, and the LP
solutions contain both integer and fractional values.
Bley, Boland, Fricke, and Froyland (2010) strengthen an integer programming
formulation for the open pit mine production scheduling problem by adding inequalities
derived by combining the precedence and production constraints. In many cases, it
significantly decreases the computational time to obtain the optimal integer solution.
Adding valid inequalities reduces the LP relaxation gap and the number of branch and bound
nodes. Even though this technique is tested on two pits with at most 4,200 integer
variables (420 blocks with 10 time periods), solving an integer program for a larger block
model is still difficult, so there is potential for further computational improvement.
Moreno, Espinoza, and Goycoolea (2010) present an algorithm for solving the
precedence constrained knapsack problem (PCKP) where the knapsack can be filled in
multiple periods; this problem is known in the mining industry as the open-pit mine
production scheduling problem. The algorithm uses the LP relaxation of the problem and
an LP-based heuristic to obtain feasible solutions. The critical multiplier algorithm is
introduced and is based on two observations. The first is that in order to solve a (single
time period) PCKP instance, it suffices to solve two single-time period maximum closure
problems and to take a convex combination of the solutions. The second observation is
that in order to solve a (multiple-time period) instance of multi-period PCKP, it suffices to
solve a sequence of single-time period problems and combine the solutions correctly.
Computational experiments show that the algorithm can solve real mining instances with
millions of blocks in minutes, obtaining solutions with a gap of less than 6% between the
lower bound and the upper bound.
3.3. Block Aggregation Techniques
Due to the difficulty in solving the MILP model using block level resolution, block
aggregation (block clustering) techniques were developed and used to reduce problem size
and expedite solution time. The following techniques describe how to reduce the number
of variables by aggregating blocks and scheduling the aggregate units using a MILP model.
Johnson, Dagdelen, and Ramazan (2002) proposed the Fundamental Tree (FT)
algorithm using the MILP approach. The first implementation of the FT algorithm applies
to the production scheduling of phases already determined by LG for a defined number of
benches. The FT algorithm reduces the number of variables in the MILP model by
aggregating the blocks into Fundamental Trees that possess three properties:
i. The value of the aggregated blocks is positive.
ii. Extraction of a FT does not violate allowable pit slopes.
iii. A tree cannot be subdivided into smaller trees that possess properties i) and ii).
For the production scheduling problem in the FT algorithm, the MILP formulations
are based on models with one time period each called a period-by-period basis to speed up
the computational time. The FT algorithm was able to obtain better production schedules
compared to the available commercial software packages at that time. After our
investigation of this algorithm, multiple solutions obtained by using this technique exist
and their results may not be optimum. Also solving a series of single period problem does
not provide an optimum solution.
Boland, Dumitrescu, Froyland, and Gleixner (2007) use aggregation techniques to
reduce the number of binary variables in their problem formulation. Aggregates called bins
(sets of blocks) are used to schedule production at the mining process, while individual
blocks are used for processing decisions at the mills. The aggregation techniques allow the
solution of large problems in reasonable time since the number of bins is much smaller than
the number of blocks. The solution obtained by using the block aggregation (binning)
approach is claimed to be very close to the solution obtained at the block level.
However, there is no guarantee that this solution is close to the optimum solution.
3.4. Heuristic Optimization Techniques
Heuristic optimization techniques are currently being used in the mining industry.
These techniques are based on the concept of parameterization (nested pits), on a period-
by-period basis, and on the MILP approach based on bench-phases, as described in the
following.
Elevli, Dagdelen, and Salamon (1989) attempt to improve the NPV of an open-pit
mine by using a single time period production scheduling algorithm. The developed
algorithm uses Lagrangian relaxation to convert production scheduling for a single tonnage
constraint into a series of ultimate pit limit problems which can be solved by using the LG
algorithm. The algorithm provides the optimum production schedule for a given
production period and provides improved cashflows (which give higher NPV) compared to
production schedules done manually, but does not provide a true optimum production
schedule.
Chadwick (2007) reports that the MILP approach is being used in the mining
industry through a number of mine planning software packages, e.g., MineMax by
Combinatorics Pty Ltd and the XPAC Autoscheduler package by Runge Mining Pty Ltd.
Those commercial packages use a MILP which is solved by a commercial solver package
(for example, CPLEX). Their MILP formulations are based on a period-by-period basis to
speed up the computational time for a large problem. When the number of blocks in the
block model is too great, a reblocking technique is used to reduce the number of blocks by
increasing the dimension of the blocks. Using this technique, the number of variables in
the MILP formulations and the computation time decrease, but the wall slope requirements
are poorly approximated, as is the size of the block model.
A period-by-period basis that is used to solve the multi-time period scheduling
problem reduces the number of variables in the MILP formulation while the constraints in
the problem are met. The blocks considered to be mined in a given period are removed
from consideration in the next period. This approach always gives higher cash flows in the
first few periods. However, the solution from a period-by-period basis may not be the
optimum solution.
Kawahata (2006) uses the Lagrangian relaxation procedure developed by Dagdelen
(1985) and includes a dynamic cutoff grade policy to maximize the pit's NPV in his MILP
formulation. Using his optimizing algorithm, the variables are reduced by using two
Lagrangian relaxation sub-problems, one for the most aggressive mine sequencing case and
the other for the most conservative mine sequencing case. The algorithm has been used in
many different projects and scenarios. Using the cutoff strategy, the algorithm gives the
highest NPV while meeting production and blending constraints. However, the NPV is
obtained by a schedule which is guided by the designed phases because this MILP model is
based on the bench-phases optimization and is not a true optimum.
During the last 20 years, the Mining Engineering Department at the Colorado
School of Mines (CSM) has put considerable effort into developing and implementing
MILP models that optimize production schedules and cutoff grades to maximize the NPV
of complex mining operations. The outcome of the development is a program called
OptiMine®. This program was developed as a production scheduling and cutoff-grade
optimization tool to universally handle complex mining operations. The shortcoming of
this program is that it uses a bench-by-bench block aggregation technique defined by pit
phases to minimize the number of variables in the problem formulation in order to reduce
the solution time of the MILP model. As such, the optimality of the production schedules
cannot always be assured.
3.5. Phase Design Optimization Techniques
Many phase design algorithms have been developed heuristically, but they are
based on the repeated use of an ultimate pit limit algorithm on modified economic block
model values by changing the economic parameters (i.e., metal price). The pit shells
obtained by using those algorithms indicate where the next highest grade ore phase may be
but they may require extracting a large amount of waste material in order to mine those ore
blocks. Therefore, production scheduling based on parametric phase design may not yield
a schedule that results in a series of cash flows that maximizes NPV. Ramazan and
Dagdelen (1998) suggest an algorithm to design phases that indicate where the minimum
stripping ratio ore (which requires removing the least amount of waste material) will be.
However, phases that are designed to have a lower stripping ratio may not always be the
best solution for the production scheduling exercise. Furthermore, the proposed method
cannot handle the ore blending requirements of many open pit mines.
Phase designs obtained by using MILP models therefore need to be developed. An MILP
based phase design technique can improve the phase design and lead to a production
schedule that increases the NPV of a given project. A phase design algorithm based on
the MILP approach would also take the ore blending requirements of the production
scheduling problem into account. The MILP formulation of the phase design problem is
similar to the multi-time period scheduling problem in an open pit mine. The MILP phase
design model under consideration in this thesis will use both the geologic and economic
block models as the basis for the problem formulation. Because of the large number of
variables involved in the formulation of the problem for a given block model, the size of
the MILP model can be very large and, as such, require a customized solution algorithm.
Therefore, the previous solution algorithms developed for solving open pit production
scheduling problems described in the literature review section have been studied and
investigated in an effort to develop an optimized phase design algorithm.
CHAPTER 4
MATHEMATICAL FORMULATION OF THE PROBLEM
4.1. Multi-time Period Problem
The MILP model of the phase design problem is similar to the multi-period open pit
production scheduling problem. Phase design is often used as a guide for production
scheduling, so it provides a guide to the best possible mining sequence. The phase design
problem is less constrained than the multi-period open pit production scheduling problem.
Let us make the following assumptions regarding the phase design MILP model.
- The destination of each block is predetermined.
- The stockpile option is not allowed.
Based on the above assumptions, the objective of the phase design MILP model is
to find the best mining sequence that would provide the highest NPV of a given project.
The definitions and formulation of the multi-time period problem for phase design are
given below.
Indices:
t, t’ : time periods (phases)
Sets:
$b \in B$ : set of all blocks b in the ultimate pit limit
$b' \in B_b$ : set of all blocks b' to be mined if block b is mined
$d \in D$ : set of all destinations d
$h \in H$ : set of all chemical contents h
All blocks in the ultimate pit limit must be mined.
Therefore, blocks that are not mined in the first time period must be mined in the
second time period. The resource constraint set (4.2.3) in the two-time period problem can
be rewritten as:
$\sum_{d \in D} X_{bd1} + \sum_{d \in D} X_{bd2} = 1 \quad \forall\, b \in B$
Let $X_{bd} = X_{bd1}$; then $X_{bd2} = 1 - X_{bd}$. This substitution reduces some of the
complexity of the model, and the variable $X_{bd2}$ in time period 2 can be replaced by
$(1 - X_{bd})$.
The discount rate is not needed for a single-time period problem, so the objective
function is only to maximize the profit of that period. The minimum mining capacity and
minimum destination capacities can be removed because they will not be binding
constraints for the single time period problem. The resource constraint set is also removed.
However, the blending requirements must be satisfied both for the first time period and for the
second time period, which receives the material that is not mined in the first. The definitions and formulation of the single-time period problem
for phase design are given below.
Decision variables:
$X_{bd} \in \{0,1\}$ : whether or not block b is to be mined and sent to destination d.
Mathematical Formulation:
Objective function
Maximize $\sum_{d \in D}\sum_{b \in B} v_{bd} X_{bd}$ (Problem 4.3)
Sequencing constraints
$\sum_{d \in D} X_{b'd} - \sum_{d \in D} X_{bd} \ge 0 \quad \forall\, b' \in B_b,\ b \in B$ (4.4.2)
Mining capacity constraint
$\sum_{d \in D}\sum_{b \in B} a_b X_{bd} \le M$ (4.4.3)
The Lagrangian relaxation technique will be used to move the mining capacity
constraint, multiplied by the Lagrange multiplier ($\lambda$), into the objective function. The right hand
side of this constraint is a constant (the mining capacity), so it can be removed from
the objective function. The Lagrangian relaxation of the single-time period problem is as
follows:
Objective function
Maximize $\sum_{d \in D}\sum_{b \in B} v_{bd} X_{bd} - \lambda \sum_{d \in D}\sum_{b \in B} a_b X_{bd}$
The objective function can be simplified as
Maximize $\sum_{d \in D}\sum_{b \in B} (v_{bd} - \lambda a_b) X_{bd}$ (Problem 4.4.4)
Sequencing constraints
$\sum_{d \in D} X_{b'd} - \sum_{d \in D} X_{bd} \ge 0 \quad \forall\, b' \in B_b,\ b \in B$ (4.4.5)
This problem (4.4.4) is proven to be an ultimate pit limit problem with a set of
sequencing constraints (Johnson, 1968). Note that the Lagrange relaxation technique can
also be applied to blending constraints. When all constraints except sequencing constraints
are relaxed, the problem becomes an ultimate pit limit problem which can be solved by
existing solution algorithms, e.g., the Lerchs-Grossmann and Johnson's maximum flow
algorithms. However, the solution obtained is unlikely to satisfy the relaxed constraints.
Therefore, the new problem is to determine the values of the Lagrange multipliers ($\lambda$),
especially when there are several constraints that need to be relaxed.
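To make the multiplier search concrete, a subgradient-style update for a single relaxed mining capacity constraint could look like the sketch below; solve_ultimate_pit is a hypothetical stand-in for any ultimate pit solver applied to the penalized block values, and the loop is a simplified illustration rather than the procedure implemented in this thesis.

# Hedged sketch of a subgradient search for the Lagrange multiplier of a single
# relaxed mining capacity constraint. solve_ultimate_pit() is a hypothetical
# helper that returns the set of blocks mined for given (penalized) values.

def lagrangian_capacity(values, tonnage, capacity, solve_ultimate_pit,
                        lam=0.0, step=1e-3, iterations=100):
    best = None
    for _ in range(iterations):
        penalized = {b: values[b] - lam * tonnage[b] for b in values}
        mined = solve_ultimate_pit(penalized)           # relaxed sub-problem
        mined_tons = sum(tonnage[b] for b in mined)
        if mined_tons <= capacity:
            best = (mined, lam)                         # feasible for capacity
        # Raise lambda if the capacity is exceeded, relax it otherwise.
        lam = max(0.0, lam + step * (mined_tons - capacity))
    return best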
4.5. Ultimate Pit Limit Problem
The ultimate pit limit problem can be formulated in many ways. One of them is the
maximum-closure problem on the associated closure graph. Closure graphs are digraphs
(bipartite network) that include a source and a sink in which only source-adjacent arcs and
sink-adjacent arcs have finite capacities. All other arcs have infinite capacity. Johnson
(1968) was the first to recognize formally the relationship between the maximum-closure
problem and the maximum flow problem. He reduced the closure problem to another
closure problem on bipartite graphs which was, in turn, solved as a maximum flow
problem.
In formulating the open-pit mining problem, each block is represented by a node in
a graph and the slope requirements are represented by precedence relationships described
by the set of arcs A in the graph. The closure problem is considered as a directed network
G = (N, A), where N is the set of nodes and A is the set of arcs (n, m) in the network. Let
$n \in N^+$ be the set of all positive nodes and $m \in N^-$ be the set of all negative nodes;
then $N^+ \cup N^- = N$ in the network.
An integer programming formulation of the ultimate pit limit problem which
provides the highest total undiscounted dollar value from a given deposit with respect to pit
slope requirements is similar to the closure problem. Let $p_n$ be an ore block and $w_m$ a
waste block. Also let $v_{p_n}$ be the undiscounted net value of ore block $p_n$, and $v_{w_m}$ the
undiscounted net value of waste block $w_m$.
Decision variables:
$X_{p_n} \in \{0,1\}$ : 1 if ore block $p_n$ is mined and 0 otherwise
$X_{w_m} \in \{0,1\}$ : 1 if waste block $w_m$ is mined and 0 otherwise
The ultimate pit limit problem can be formulated as a simple zero-one integer
programming problem with the following structure:
Objective function
Maximize $\sum_{m \in N^-} v_{w_m} X_{w_m} + \sum_{n \in N^+} v_{p_n} X_{p_n}$ (Problem 4.5)
Subject to
Sequencing constraints
$X_{w_m} - X_{p_n} \ge 0 \quad \forall\, (n,m) \in A$ (4.5.1)
A small two-dimensional block model example is given in Figure 4.1. The model
assumes that the blocks are square and the maximum pit slope is 45 degrees. For example,
to mine the block in row 2, column 3, the blocks in row 1, columns 2, 3, and 4 must be
removed. Each block is represented by an ore block $p_n$ or a waste block $w_m$ as shown
in Figure 4.2. A network showing arcs from all ore blocks to all waste blocks of this block
model is shown in Figure 4.3. The integer formulation of the ultimate pit limit problem
for this example is shown in Figure 4.4.
Figure 4.1 Example: Two dimensional block model showing the block values
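Since the block values of Figure 4.1 are not reproduced here, the sketch below (an illustration, not the thesis code) sets up formulation (4.5) on the economic block values of Figure 2.5 from Chapter 2 instead, using the open-source PuLP package; it recovers the $307 ultimate pit of that example.

# Zero-one formulation (Problem 4.5) illustrated on the Figure 2.5 block values
# (the Figure 4.1 values are not reproduced here). 45-degree slopes, 3x9 grid.
# Requires the open-source PuLP package (pip install pulp).
import pulp

values = {(r, c): v for r, row in enumerate(
    [[-2, -2, -1, -1, -2, -2, 35, 35, -2],
     [-2, 71, 53, -2, -2, -2, 53, 53, -2],
     [-2, -2, -2,  8, 17, -2, -2, -2, -2]], start=1)
    for c, v in enumerate(row, start=1)}

def cone(block):
    """All blocks that must be removed to mine `block` at 45-degree slopes."""
    r, c = block
    return [(rr, cc) for rr in range(1, r)
            for cc in range(c - (r - rr), c + (r - rr) + 1) if (rr, cc) in values]

prob = pulp.LpProblem("ultimate_pit", pulp.LpMaximize)
x = {b: pulp.LpVariable(f"x_{b[0]}_{b[1]}", cat="Binary") for b in values}
prob += pulp.lpSum(values[b] * x[b] for b in values)        # Problem 4.5
for ore in (b for b in values if values[b] > 0):
    for w in cone(ore):
        if values[w] <= 0:                                  # constraints 4.5.1
            prob += x[w] - x[ore] >= 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
pit = [b for b in values if (x[b].value() or 0) > 0.5]
print(len(pit), pulp.value(prob.objective))                 # 18 blocks, value 307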
CHAPTER 5
REVISED MAXIMUM FLOW ALGORITHM
5.1. Introduction
An ultimate pit limit containing the set of blocks which has the maximum total
dollar value for a given block model can be found using the Lerchs-Grossmann (1965)
algorithm and also the maximum flow algorithm proposed by Johnson (1968). Both
methods find the optimum ultimate pit in three dimensions (3D) regardless of block height,
width, and length proportions. Both techniques give the ultimate pit limit which maximizes
the value of the identified set of ore and waste blocks that can be mined at proper slope
angles.
Currently, there is no commercial mining software package that implements the
maximum flow algorithm to solve the ultimate pit limit problem. The maximum flow algorithm is
chosen for this thesis instead of the Lerchs-Grossmann algorithm because it is required to
use a faster algorithm than what industry offers to solve each sub-problem in the new phase
design algorithm. Johnson’s maximum flow algorithm has that potential. Based on the
original algorithm, the revised maximum flow algorithm is developed and implemented. Its
results and solution times are compared to widely used commercial software packages that
employ the Lerchs-Grossmann algorithm.
This chapter describes the original maximum flow algorithm by Johnson and the
revised maximum flow algorithm. The results and solution times are compared to those of
available commercial software packages and are also discussed in this chapter.
5.2. Johnson’s Maximum Flow Algorithm
Johnson (1968) presents the original and most complete version of the maximum
flow algorithm. In that 1968 paper, the optimal ultimate pit limit problem is formulated as
a maximum flow problem in a bipartite network. This formulation is guaranteed to
generate the optimal ultimate pit limit. The approach is similarly discussed in Johnson and
Barnes (1988). The steps of the algorithm are accomplished as follows:
• Step 1. By their values, the blocks are sorted into two categories: a positive set and
a negative set. The positive blocks are lined up in a column on the left and the
negative blocks are lined up in a column on the right.
• Step 2. An arc is placed from each positive block to every negative block which
must be mined to free the positive block (every negative block within the positive
block’s cone). A flow capacity of infinity is assigned to each of these arcs
connecting positive blocks to negative blocks.
• Step 3. A source node, Node S, and a sink (terminal) node, Node T, are added to
the network. The source is connected to all positive nodes, and all negative nodes
are connected to the sink. A flow capacity equal to the corresponding block values
is assigned to the arcs connecting the source to the positive nodes, and a flow
capacity equal to the negative of the corresponding block values is assigned to the
arcs connecting the negative blocks to the sink.
• Step 4. The network formulated in Steps 1 through 3 is solved for the maximum
flow from source to the sink. In solving this maximum flow problem, the flow is
allocated from positive blocks to negative blocks.
• Step 5. All blocks which are in at least one of the following three categories belong
to the optimal ultimate pit.
o All positive blocks with unallocated potential flow.
o All positive blocks with flow allocated to a negative block contained in the
ultimate pit.
o All negative blocks overlying a positive block that belongs to the ultimate
pit.
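The construction is easy to try with the open-source networkx package. The sketch below (an illustration, not the thesis code) applies the bipartite construction of Steps 1 through 3 to the economic block values of Figure 2.5 from Chapter 2, since the Figure 5.1 values are not reproduced here, and reads the pit off the source side of the minimum cut, which is an equivalent way of stating Step 5.

# Johnson-style bipartite maximum flow network built on the Figure 2.5 block
# values (used because the Figure 5.1 values are not reproduced in the text).
# Requires the networkx package (pip install networkx).
import networkx as nx

values = {(r, c): v for r, row in enumerate(
    [[-2, -2, -1, -1, -2, -2, 35, 35, -2],
     [-2, 71, 53, -2, -2, -2, 53, 53, -2],
     [-2, -2, -2,  8, 17, -2, -2, -2, -2]], start=1)
    for c, v in enumerate(row, start=1)}

def cone(block):  # blocks that must be removed first at 45-degree slopes
    r, c = block
    return [(rr, cc) for rr in range(1, r)
            for cc in range(c - (r - rr), c + (r - rr) + 1) if (rr, cc) in values]

G = nx.DiGraph()
for b, v in values.items():
    if v > 0:
        G.add_edge("s", b, capacity=v)          # Step 3: source-to-positive arc
        for w in cone(b):
            if values[w] <= 0:
                G.add_edge(b, w)                # Step 2: uncapacitated arc
    else:
        G.add_edge(b, "t", capacity=-v)         # Step 3: negative-to-sink arc

cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
pit = source_side - {"s"}
print(len(pit), sum(values[b] for b in pit))    # 18 blocks, total value 307

The 18 mined blocks and the $307 pit value match the ultimate pit found for the same data in Chapter 2.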
Rather than formulating and solving all blocks at once, it is better to start at the
highest level and move down, adding one level of the block model at a time. The mined
blocks are removed after each iteration. The approach reduces the number of arcs and
nodes, the memory required, and can speed up the overall solution.
The best way to demonstrate the maximum flow algorithm is by a small two
dimensional block model example as shown in Figure 5.1. The model assumes that the
blocks are square and the maximum pit slope is 45 degrees. For example, to mine the
block in row 2, column 3, the blocks in row 1, columns 2, 3, and 4 must be removed. Note
that nothing is mined in the first two iterations (rows 1 and 2) using the bench-by-bench
approach. Therefore, the network is constructed at bench 3 (row 3) in the third
iteration, as shown in Figure 5.2. All arcs are directed arcs permitting flow only in the
direction of the sink.
The mathematical formulation of the general maximum flow problem is considered
as a directed network G = (N, A), where N is the set of nodes (including the source s and
the sink t) and A is the set of arcs in the network. Every arc (i, j) has a capacity $u_{ij}$. A
variable $f_{ij}$ is defined as the flow in arc (i, j). For some value $v > 0$, the problem is to determine
a flow f for which v (the total flow into the sink t) is maximized.
Objective function
Maximize v (Problem 5.1)
Subject to
$\sum_{j:(j,i) \in A} f_{ji} - \sum_{j:(i,j) \in A} f_{ij} = 0 \quad \forall\, i \in N \setminus \{s, t\}$ (5.1.1)
$\sum_{j:(j,t) \in A} f_{jt} = v$ (5.1.2)
$0 \le f_{ij} \le u_{ij} \quad \forall\, (i,j) \in A$ (5.1.3)
To solve the maximum flow problem, Johnson and Barnes (1988) employ the
labeling algorithm introduced by Ford and Fulkerson (1956). After the labeling algorithm
was presented, many additional algorithms to solve the maximum flow problem were
introduced by a number of researchers (Goldberg & Tarjan, 1988; Gallo, Grigoriadis, &
Tarjan, 1989; Hochbaum, 2001, 2008) to reduce the solution time. However, the
solution time is still a function of the number of nodes, the number of arcs, and the largest
arc capacity in the network. This thesis focuses on how to set up the maximum flow
problem so as to reduce the number of arcs in the network, not on the algorithm used to solve
the problem. Therefore, the available commercial solver IBM CPLEX is used to solve the
problem.
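For readers who want to see the mechanics rather than call a solver, a minimal augmenting-path routine in the spirit of the Ford and Fulkerson labeling method is sketched below; it is a generic textbook implementation (Edmonds-Karp, using breadth-first search), not the code used in this thesis.

# Generic Edmonds-Karp maximum flow (BFS augmenting paths over residual
# capacities), in the spirit of the Ford-Fulkerson labeling method.
# This is a textbook sketch, not the thesis implementation.
from collections import deque

def max_flow(capacity, s, t):
    """capacity: dict {(u, v): capacity} over directed arcs. Returns the flow value."""
    res = dict(capacity)                        # residual capacities
    for (u, v) in capacity:
        res.setdefault((v, u), 0)               # reverse (cancellation) arcs
    adj = {}
    for (u, v) in res:
        adj.setdefault(u, []).append(v)

    total = 0
    while True:
        parent = {s: None}                      # breadth-first labeling from s
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                     # no augmenting path remains
            return total
        path, v = [], t                         # recover the augmenting path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(res[arc] for arc in path)   # bottleneck capacity
        for (u, v) in path:
            res[(u, v)] -= delta
            res[(v, u)] += delta
        total += delta

On the bipartite network described above, the infinite capacities of the precedence arcs can simply be represented by a sufficiently large number.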
5.3. Revised Maximum Flow Algorithm
The original maximum flow algorithm works well with the bench-by-bench
approach. However, the overall solution time is still slower than the Lerchs-Grossmann
algorithm currently implemented by commercial software packages such as Whittle
Programming (Three-D™) or Mintec (MineSight™ Economic Planner). One of the
reasons is that the original maximum flow algorithm formulates the optimal ultimate pit
limit problem as a maximum flow problem in a bipartite network, so that the network size
increases dramatically as the number of benches increases. The other reason is that the
original algorithm uses the bench-by-bench approach, which reduces the solution time in
each iteration by removing the blocks to be mined from the network and adding the next row
(bench) for processing. The mined blocks are removed, but the blocks that are not mined
remain in the network for the next iteration, which can grow larger as the algorithm moves
to deeper benches. This increases the overall solution time. To solve the ultimate pit limit problem
much faster, the revised maximum flow algorithm is developed based on the criterion that
the whole problem can be mapped and formulated in a single iteration (or two iterations if
necessary).
Unlike the original maximum flow algorithm which formulates the optimal ultimate
pit limit problem as a maximum flow problem in a bipartite network, the revised maximum
flow algorithm formulates the problem as a complex sequencing network. The main
differences are the following: (i) flows from positive nodes to positive nodes are allowed,
depending on the sequencing; (ii) the network has a source node but can have multiple
sink (terminal) nodes; and (iii) there is no bypass flow in the network, so the number of
arcs is minimal.
The network problem is formulated as a linear programming (LP) problem but is
solved as a network structure problem using IBM ILOG CPLEX 12. Although the
objective of the LP problem is to maximize the flows, the actual value is not the main area
of interest. The desired result is obtained from an interpretation of the dual variable values
(shadow prices). The steps of the revised maximum flow algorithm are as follows:
• Step 1. By their values, the blocks are sorted into two categories: a positive set and
a negative set. All blocks (nodes) are positioned in a sequencing structure. All
negative (or zero) blocks correspond to the sink (terminal) nodes whose capacity is
equal to the absolute value of the corresponding block values.
• Step 2. A source node, node s is added to the network. The source is connected to
all positive nodes. A flow capacity equal to the corresponding block values is
assigned to the arcs connecting the source to the positive nodes.
• Step 3. For every positive block starting from the top bench to the lowest bench, an
arc is placed from each positive block to every positive block which must be mined
to free the positive block (within the positive block’s cone) except for positive
blocks which are already in a set of the positive blocks to be mined for that block.
A flow capacity of infinity is assigned to each of these arcs connecting positive
blocks to positive blocks.
• Step 4. An arc is placed from each positive block to every negative block which
must be mined to free the positive block (every negative block within the positive
block’s cone) except for negative blocks which already belong to another positive
block within the cone of the positive block being considered. A flow capacity of
infinity is assigned to each of these arcs connecting positive blocks to negative
blocks.
• Step 5. The network constructed in Steps 1 through 4 is solved for the maximum
flow from the source to the positive nodes. The mathematical formulation of the
revised maximum flow problem is considered as a revised network G = (N, A),
where N is the set of nodes and A is the set of arcs in the network. Let $p_n$ denote
the positive nodes $n \in N^+$ and $w_m$ the negative nodes $m \in N^-$ in the
network. For a flow $f \ge 0$, the problem is to maximize the total flow from the
source node.
Objective function
Maximize $\sum_{n \in N^+} f_{sp_n}$ (Problem 5.2)
Subject to three constraint sets
(1) the sum of flows in is equal to the sum of flows out for all positive nodes
$f_{sp_n} + \sum_{j:(j,n) \in A} f_{jp_n} - \sum_{j:(n,j) \in A} f_{p_n j} = 0 \quad \forall\, n \in N^+$ (5.2.1)
(2) the sum of flows into each negative (sink) node must be less than or equal to
the negative of the corresponding block value $v_m$ (sink capacity)
$\sum_{j:(j,m) \in A} f_{jw_m} \le -v_m \quad \forall\, m \in N^-$ (5.2.2)
(3) the flow out from the source node must be less than or equal to the corresponding
block value $v_n$
$f_{sp_n} \le v_n \quad \forall\, n \in N^+$ (5.2.3)
• Step 6. All blocks which have an associated dual variable with value equal to 1
belong to the optimal ultimate pit.
The revised maximum flow algorithm is demonstrated here using the same small
two dimensional block model example as shown in Figure 5.1. The model also assumes
that the blocks are square and the maximum pit slope is 45 degrees. The network is
constructed as shown in Figure 5.3. Nodes $p_1$ to $p_5$ are the positive nodes. The negative
blocks are represented by sink nodes $w_1$ to $w_{10}$, whose capacities are equal to the absolute
values of the corresponding negative block values. Flows from the source node to positive
nodes have capacities equal to the positive block values. All arcs directed into the sink nodes
have infinite capacities.
Steps 3 and 4 are then processed so that the revised network is constructed. Starting
with the positive block at row 2, column 3 (node $p_1$), arcs are added from node $p_1$ to the
three negative blocks in its cone. For node $p_2$, arcs are added from node $p_2$ to the negative
blocks in its cone. For node $p_3$, an arc is added from node $p_3$ to a positive block (node $p_1$),
and arcs are also added from node $p_3$ to the negative blocks in its cone that are not already
covered by node $p_1$. For node $p_4$, arcs are added from node $p_4$ to positive blocks (nodes
$p_1$ and $p_2$) and a single arc to the remaining negative block in its cone. For the last positive
node $p_5$, an arc is added from node $p_5$ to a positive block (node $p_2$), and arcs are added
from node $p_5$ to the remaining negative blocks in its cone (among them nodes $w_9$ and $w_{10}$).
The LP formulation and solution to this problem are shown in Figure 5.4. We
interpret the result from the dual values (shadow prices): a block whose dual value is 1 is
mined, and a block whose dual value is 0 is not mined. As shown by the dual variable values,
all blocks are mined except those represented by the positive node $p_5$ and the two sink
nodes that lie only within its cone. The interpretation of the solution is shown in Figure 5.5.
For this small example, the revised maximum flow algorithm constructs 24 arcs
total while the original algorithm has 41 arcs. For a real problem, the number of arcs
constructed using the revised maximum flow network is minimal, so that we can formulate
and solve the whole problem without using the bench-by-bench approach.
5.4. The Dual Form of the Revised Maximum Flow Problem
The dual of the revised maximum flow problem can be formulated by taking the
dual of the primal problem, which is considered as a revised network G = (N, A), where N
is the set of nodes (the set of all positive nodes $n \in N^+$ and the set of all negative nodes
$m \in N^-$) and A is the set of arcs (from positive nodes n to negative nodes m and from
positive nodes n to positive nodes n') in the network. Let:
$p_n$ = the dual variable corresponding to the conservation of flow constraint (flows
in are equal to flows out, equation 5.2.1) for each positive node $n \in N^+$
$w_m$ = the dual variable corresponding to the negative (sink) node capacity
constraint (the sum of flows in is less than or equal to the node capacity, equation 5.2.2) for
each negative node $m \in N^-$
$u_n$ = the dual variable corresponding to the flow out from the source node to a
positive node (equation 5.2.3) for each positive node $n \in N^+$
where $v_n$ is the corresponding block value of a positive node n
and $v_m$ is the corresponding block value of a negative node m
The dual of the revised maximum flow formulation is as follows:
Objective function
Minimize $\sum_{m \in N^-} -v_m w_m + \sum_{n \in N^+} v_n u_n$ (Problem 5.3)
Subject to
Sequencing constraints
$w_m - p_n \ge 0 \quad \forall$ arcs from n to $m \in A$ (5.3.1)
$p_{n'} - p_n \ge 0 \quad \forall$ arcs from n to $n' \in A$ (5.3.2)
Constraints for flows from the source node to positive nodes
$u_n + p_n \ge 1 \quad \forall\, n \in N^+$ (5.3.3)
$p_n$ unrestricted
$w_m, u_n \ge 0$
For the dual formulation example, the dual of the revised maximum flow problem
in Figure 5.4 is shown in Figure 5.6. There are two sets of constraints. The first set
contains sequencing constraints corresponding to flows from positive nodes to negative
nodes (5.3.1) and positive nodes to positive nodes (5.3.2). The second set contains
constraints corresponding to flows from the source node to positive nodes (5.3.3). At
optimality of the dual problem, the objective value is 22, which is equal to that of the
primal problem.
Since this problem has a network structure in which every column or row of the
constraint matrix contains only 0, -1, or 1 (equations 5.3.1, 5.3.2, and 5.3.3), the matrix is
unimodular, and when the right hand sides are also integer (either 0 or 1) the values of the
variables in the optimal solution are integer as well (Dagdelen, 1985). The values of the dual
variables $w_m$ and $u_n$ of the revised maximum flow problem are greater than or equal to
zero and are integer. Hence:
$w_m$ = 0 or 1 for all $m \in N^-$
$u_n$ = 0 or 1 for all $n \in N^+$
The condition that the dual values of the revised maximum flow problem must be
either 0 or 1 can also be explained by the complementary slackness conditions (Kuhn-
Tucker conditions). The theorem identifies a relationship between variables in the primal
problem and the associated constraints in the dual problem. Specifically, it says that if a
constraint is not binding, then the associated variable must be zero. A variable may be
positive if the associated constraint is binding. The complementary slackness conditions
relating dual constraints and primal variables are the following:
Complementary slackness of the dual sequencing constraints (equations 5.3.1 and 5.3.2):
$(w_m - p_n) \cdot f_{p_n w_m} = 0 \quad \forall$ arcs from n to $m \in A$ (5.3.4)
$(p_{n'} - p_n) \cdot f_{p_n p_{n'}} = 0 \quad \forall$ arcs from n to $n' \in A$ (5.3.5)
Complementary slackness of the dual constraints and the primal flows from the source node to
positive nodes (equation 5.3.3):
$(1 - u_n - p_n) \cdot f_{sp_n} = 0 \quad \forall\, n \in N^+$ (5.3.6)
At optimality of the primal problem (5.2), the flows from the source node to the positive
nodes are maximized and $f_{sp_n} \ge 0$. When a flow $f_{sp_n}$ is greater than zero, to satisfy
equation (5.3.6), $(1 - u_n - p_n)$ must be equal to zero. Then
$u_n = 1 - p_n \quad \forall\, n \in N^+$ (5.3.7)
The values of the dual variables $u_n$ of the revised maximum flow problem can be
either 0 or 1. Substituting $u_n \in \{0,1\}$ into the above equation gives
$p_n$ = 0 or 1 for all $n \in N^+$
In the case of no flow from the source node to a positive node ($f_{sp_n} = 0$), the above
condition can be explained by using the complementary slackness of the primal constraints
(equation 5.2.3), which require that the flow out from the source node be less than or equal to the
corresponding block value $v_n$:
$(v_n - f_{sp_n}) \cdot u_n = 0 \quad \forall\, n \in N^+$ (5.3.8)
If the corresponding block value $v_n > 0$ and $f_{sp_n} = 0$, then $u_n$ must be zero.
In the case of $0 < f_{sp_n} < v_n$, $u_n$ must also be zero based on equation (5.3.8).
From equation (5.3.3) ($u_n + p_n \ge 1$) with $u_n = 0$, we get $p_n \ge 1$. The variable $p_n$ is
unrestricted and, by unimodularity, can take the integer values 0, -1, or 1. Hence, $p_n = 1$ and this node is mined if the
corresponding flow ($f_{sp_n}$) is zero or less than the corresponding block value ($v_n$).
When $p_n = 1$, the dual sequencing constraints (equations 5.3.1 and 5.3.2) imply that
all nodes $w_m$ and $p_{n'}$ that connect to $p_n$ must be 1 and mined if block $p_n$ is to be mined.
In the case where the sum of flows into a negative (sink) node $w_m$ is less than the
negative of the corresponding block value $v_m$ (the sink capacity), the condition can be explained
by using the complementary slackness of the primal constraints (equation 5.2.2):
$\left( \sum_{j:(j,m) \in A} f_{jw_m} + v_m \right) \cdot w_m = 0 \quad \forall\, m \in N^-$ (5.3.9)
When the sum of flows ($f_{jw_m}$) into node $w_m$ is not equal to $-v_m$ (note that $v_m$ is
negative), $w_m$ must be zero in order to satisfy equation (5.3.9). If $w_m = 0$, this waste
block is not mined. Also, the dual sequencing constraints (equation 5.3.1) indicate
that $p_n$ must be zero if $w_m = 0$, implying that if any negative waste block does not
have enough positive flow coming into it from the positive valued ore blocks, then that
negative waste block and those positive valued ore blocks ($p_n$) cannot be mined.
In the case where the sum of flows into a negative (sink) node $w_m$ is equal to the
negative of the corresponding block value $v_m$ (the sink capacity), equation (5.3.9) implies that
$w_m$ can be either 0 or 1, depending on the positive nodes ($p_n$) that can send flow to this
negative node.
Therefore, all variables in the dual of the revised maximum flow problem are binary
variables ($u_n, p_n, w_m \in \{0,1\}$) and equation (5.3.7) ($u_n = 1 - p_n$) holds in all cases.
For example, the complementary slackness conditions involving the dual
constraints and the primal variables of the revised maximum flow problem in Figure 5.6
are shown in Figure 5.7. The complementary slackness conditions involving the primal
constraints and the dual variables are shown in Figure 5.8.
Theorem (Optimality of the revised maximum flow algorithm)
The dual of the revised maximum flow problem (5.3) is equivalent to the integer
programming formulation of the ultimate pit limit problem (4.5). The solution of the dual
problem (5.3) is also optimum for the ultimate pit limit problem.
Then:
A block represented by a corresponding dual variable of the revised maximum flow
problem is mined when the dual variable is 1 but it is not mined when the dual variable is
zero.
Proof:
For the solution of the dual of the revised maximum flow problem to be optimal
for the primal problem, it must satisfy equation (5.3.7), where $u_n = 1 - p_n\ \forall\, n \in N^+$.
Replacing $u_n$ in problem (5.3), we get
Minimize $\sum_{m \in N^-} -v_m w_m + \sum_{n \in N^+} v_n (1 - p_n)$
Minimize $\sum_{m \in N^-} -v_m w_m + \sum_{n \in N^+} v_n - \sum_{n \in N^+} v_n p_n$
Since $\sum_{n \in N^+} v_n$ is a constant, it can be removed from the objective function, and we obtain
Minimize $\sum_{m \in N^-} -v_m w_m - \sum_{n \in N^+} v_n p_n$
Multiplying the objective function by -1 turns the problem into a maximization problem:
Maximize $\sum_{m \in N^-} v_m w_m + \sum_{n \in N^+} v_n p_n$
Also, replacing u_n = 1 - p_n ∀ n ∈ N+ into constraints (5.3.3) for flows from the source node to the positive nodes, we obtain

(1 - p_n) + p_n ≥ 1    ∀ n ∈ N+
1 ≥ 1    ∀ n ∈ N+

This constraint set is always true and can be removed from the formulation.
At optimality, we can rewrite the dual of the revised maximum flow problem (5.3) as follows:

Maximize  Σ_{m ∈ N-} v_m w_m + Σ_{n ∈ N+} v_n p_n    (Problem 5.4)

where v_n is the corresponding block net value of a positive node n and v_m is the corresponding block net value of a negative node m.

Subject to

Sequencing constraints
w_m - p_n ≥ 0    ∀ arcs from n to m ∈ A    (5.4.1)
p_{n'} - p_n ≥ 0    ∀ arcs from n to n' ∈ A    (5.4.2)
p_n, w_m ∈ {0,1}    ∀ n ∈ N+, m ∈ N-

When n ∈ N+ is the set of all positive blocks, m ∈ N- is the set of all negative blocks, and N is the set of all blocks, which includes all positive and negative blocks, we get

N+ ∪ N- = N
Problem (5.4) is similar to the integer programming formulation of the ultimate pit
limit problem. Problem (5.4) is derived from problem (5.3) based on the optimality
condition. Therefore, the dual of the revised maximum flow problem (5.3) is equivalent
to the integer programming formulation of the ultimate pit limit problem.
Dual variables are also called Lagrange multipliers (or just multipliers), dual prices, shadow prices, or implicit prices. In the revised maximum flow problem, dual variables can be interpreted as prices. Specifically, the i-th dual variable is the price of the i-th node. Each dual variable represents how much the objective of the primal problem would change per unit change in the right hand side of the constraint for the i-th node. If the right hand side of a constraint corresponding to a node that belongs to the part of the network with excess flow (the ultimate pit, which has total value greater than zero) increases by 1, the objective function also increases by 1. If the node does not belong to the ultimate pit, the objective function does not change when the right hand side of the corresponding constraint increases.
For the dual formulation example, the dual of the revised maximum flow problem in Figure 5.4 is shown in Figure 5.6. The w variables, which represent negative nodes (blocks), are mined if their values are 1 and not mined if their values are 0. The p variables, which represent positive nodes (blocks), are mined if their values are 1 and are not mined if their values are 0.
The u variables do not represent physical objects such as nodes or blocks, but they correspond one-to-one with the p variables. The last set of constraints in Figure 5.6 ensures that a variable p must be 1 if the corresponding variable u is 0. The objective is to minimize the sum of the w variables and the u variables. Minimizing the sum of the u variables is the same as maximizing the sum of the p variables (positive blocks).
In this example, the constraint in the revised maximum flow problem that corresponds to one of the w variables is not binding, so its dual value is zero. When its right hand side increases, the optimum solution does not change, so that node does not belong to the ultimate pit limit. Another dual variable w has a value of 1; its corresponding constraint is therefore binding and has no slack. When its right hand side increases from 2 to 3, the optimum solution increases from 22 to 23, so that node belongs to the ultimate pit limit.
Since the number of flow constraints in the revised maximum flow problem is minimal, the number of
sequencing constraints is also minimal. We can use the sequencing constraints obtained
by the revised maximum flow algorithm as the strengthened sequencing constraints in the
MILP formulations given in Chapter 4.
For example as shown in Figure 5.9, to mine block [2, 3], arcs are constructed
from block [2, 3] to blocks [1, 2], [1, 3], and [1, 4], which are represented by the 1st - 3rd
constraints in the dual problem (see Figure 5.6). To mine block [3, 3], arcs are
constructed from block [3, 3] to blocks [1, 1], [1, 5], [2, 2], [2, 3], and [2, 4], which are
represented by constraints 7th-11th in Figure 5.6, respectively. There are 8 arcs (also 8
constraints) required to mine positive blocks [2, 3] and [3, 3]. Note that arcs from
negative blocks [2, 2] and [2,4] to any blocks are not required.
Figure 5.9 Example of arc structure obtained from the dual of the revised maximum flow problem
Pit slopes define the precedence relationship between blocks which is the arc
structure of the maximum flow problem or the sequencing constraint of the MILP
formulations. To handle a realistic pit slope in three-dimensional (3-D) space, we
employ a cone generation concept. In the 3-D uniform rectangular block model, cones
are constructed upward from a base point (normally a centroid of an ore block) to the
surface. The side angles of the cones are the maximum allowed pit slopes, as shown in
Figure 5.10. To mine the ore block at the base point, all blocks that are in the cone
(shaded) must be mined. When a block is partially in the cone, it is considered to be in
the cone if its centroid (represented by dots in Figure 5.10) falls in the cone.
Figure 5.10 Conical pit slope representation (Barnes, 1980)
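To make the cone test concrete, the following C# sketch checks whether a block's centroid falls inside the upward cone generated from a base block for a single overall slope angle. The Block type and its fields are hypothetical and introduced only for this illustration; the thesis program handles the slope in the same spirit by comparing horizontal and vertical centroid distances (see Section 5.7.2).

// Minimal sketch (not the thesis code): testing whether a block's centroid
// falls inside the upward cone generated from a base block, for a single
// overall slope angle. Block is an illustrative type.
using System;

public struct Block
{
    public double X, Y, Z;   // centroid coordinates
}

public static class ConeTest
{
    // Returns true if 'candidate' lies inside (or on) the cone whose apex is
    // the centroid of 'baseBlock' and whose side angle equals slopeDegrees.
    public static bool IsInCone(Block baseBlock, Block candidate, double slopeDegrees)
    {
        double dz = candidate.Z - baseBlock.Z;          // vertical rise
        if (dz <= 0) return false;                      // only blocks above the base are predecessors

        double dx = candidate.X - baseBlock.X;
        double dy = candidate.Y - baseBlock.Y;
        double horizontal = Math.Sqrt(dx * dx + dy * dy);

        // Maximum horizontal reach allowed at this height for the given slope.
        double maxHorizontal = dz / Math.Tan(slopeDegrees * Math.PI / 180.0);
        return horizontal <= maxHorizontal + 1e-9;      // small tolerance for blocks on the cone surface
    }
}

For a 45-degree slope, the allowed horizontal distance simply equals the vertical rise, so a block is in the cone whenever it is no farther sideways than it is above the base.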
The cone generation concept is extended to generate the precedence relationships
between blocks. When the base of the cone is moved to the bottom ore bench, there will
be more blocks in the sequencing set. Strengthened sequencing is introduced in this
thesis. The cones are generated from all ore (positive) blocks, as shown in Figure 5.11.
To mine the ore block A, all waste blocks in its cone (region 1) must be mined. To mine
the ore block B, all blocks in its cone must be mined, but the waste blocks in region 1 can
be removed from the sequencing set. Therefore, the sequencing set of block B only
includes the ore block A and all waste blocks in regions 2 and 5. Overlapping of the
cones is possible. For ore block C, all waste blocks in its cone (regions 3 and 5) must be
mined. Although region 5 is the overlapping region of the cones from ore blocks B and C, there is no precedence relationship between the two ore blocks.
Figure 5.11 2D view showing cones generated from ore blocks A, B, C, and D, which separate waste blocks (not shown) into regions 1, 2, 3, 4, and 5
For the ore block D, all blocks in its cone, including blocks A, B, and C, must be mined. However, only the waste blocks in region 4 and the ore blocks B and C need to be in the strengthened sequencing set of the ore block D. The ore block A is not required because it is already in the strengthened sequencing set of the ore block B. The waste blocks in regions 1, 2, 3, and 5 are not included because they already belong to the sequencing sets of the ore blocks A, B, and C.
The best technique to generate the strengthened sequencing set for a given ore block is to find the blocks between the upper surface and its cone, as shown in Figure 5.12. For the example of ore block D, the upper surface is created as follows: (i) generate the set of ore blocks that must be mined in order to mine block D; (ii) generate multiple cones from the bases of all ore blocks in the set; (iii) based on the multiple cones and the topography surface, the upper surface is the topography surface that is cut by those cones.
From Figure 5.12, the strengthened sequencing set for block D is generated by
using the upper surface and its cone. All blocks that lie between the upper surface and
block D’s cone are considered to be in the strengthened sequencing set. Those blocks are
all waste blocks between the two surfaces and ore blocks B and C. Although the blocks
that are above the upper surface are within the block cone, they are not in the
strengthened sequencing set because they are intermediate predecessors of the ore blocks
B and C which are required to be mined if the two ore blocks are mined.
Figure 5.12 2D view showing the upper surface and the cone that are used to generate the strengthened sequencing set for ore block D
The strengthened sequencing set demonstrates arc structure that is different from
the arc structure in the Lerchs-Grossmann algorithm. The differences are the following:
(i) An arc from a positive block to another positive block is not allowed in the Lerchs-
Grossmann algorithm but it is allowed here, (ii) Furthermore, it is not necessary to
construct an arc from a positive block to any negative blocks that already have arcs
connected from another positive block which is already connected to that positive block.
Therefore, the strengthened sequencing constraints obtained from the dual of the revised maximum flow problem form the minimum set of sequencing constraints required to satisfy the open pit slope requirement. Although the idea is demonstrated here on a 2D example, it was checked carefully in 3D using a realistic mining project. Every cross-sectional view in all directions was generated in order to check the slope requirement, and no violations were detected. We will use these strengthened sequencing constraints in the LP formulations of the new phase design algorithm in order to speed up the solution time, because they are the most efficient set of sequencing constraints.
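A rough C# sketch of how strengthened sequencing sets could be generated from the cones is given below. It is a simplified illustration of the idea rather than the exact upper-surface construction used in this thesis: it keeps the ore blocks inside a block's cone as direct predecessors and drops the waste blocks already covered by those ore blocks' cones, but it may retain some redundant ore-to-ore arcs that the upper-surface construction removes. Block and ConeTest.IsInCone refer to the illustrative types sketched after Figure 5.10.

// Simplified sketch of strengthened sequencing set generation (not the thesis code).
using System.Collections.Generic;
using System.Linq;

public static class StrengthenedSequencing
{
    public static Dictionary<int, List<int>> Build(
        IList<Block> blocks, IList<int> oreIds, double slopeDeg)
    {
        var result = new Dictionary<int, List<int>>();

        foreach (int ore in oreIds)
        {
            // All blocks inside this ore block's cone.
            var inCone = Enumerable.Range(0, blocks.Count)
                .Where(i => i != ore && ConeTest.IsInCone(blocks[ore], blocks[i], slopeDeg))
                .ToList();

            // Ore blocks inside the cone become direct predecessors.
            var orePreds = inCone.Where(i => oreIds.Contains(i)).ToList();

            // Blocks already covered by those ore predecessors' cones are dropped;
            // they are intermediate predecessors forced by the ore-to-ore arcs.
            var covered = new HashSet<int>(
                orePreds.SelectMany(o => inCone.Where(
                    i => ConeTest.IsInCone(blocks[o], blocks[i], slopeDeg))));

            var set = orePreds
                .Concat(inCone.Where(i => !covered.Contains(i) && !orePreds.Contains(i)))
                .Distinct()
                .ToList();
            result[ore] = set;
        }
        return result;
    }
}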
5.6. Selective Moving Cone Method
The moving cone method has been used for solving the ultimate pit limit problem
for decades. It attempts expansion of the pit by removing frustums of cones with net
positive values. Values of the cones are determined by summing the individual values of
the blocks contained in the incremental cones. Its shortcomings are missing combinations of profitable blocks, over mining, or a combination of both. The details of the moving cone method are explained elsewhere (Rachman, 1995).
To speed up the overall solution time of the ultimate pit limit problem, the selective
moving cone method is introduced as a preprocessor to the revised maximum flow
algorithm. Unlike the moving cone method, the selective moving cone method is not used
for solving the ultimate pit limit. It is used only to reduce the size of problem before the
revised maximum flow algorithm is applied.
The selective moving cone method starts from the top bench and moves to a lower
bench. In the calculation, it constructs cones upward from a positive block to the surface.
Initially, the cone value is equal to the positive block value at the cone base. The cone
value then is reduced by adding the values of the negative blocks in the cone. If the cone contains a positive block that has not previously been mined, its value is not included in the sum. If the cone value becomes less than or equal to zero, the current cone is skipped and the
next positive block is considered. If all negative blocks in the cone are included and the
cone value is still positive, all blocks in the cone are removed, i.e., they are considered to
be part of the ultimate pit. Then the next positive block is evaluated.
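The preprocessor just described can be summarized in a short C# sketch. The array and delegate names are illustrative, not the thesis implementation; coneOf is assumed to return the indices of all blocks in the cone generated upward from a given block.

// Sketch of the selective moving cone preprocessor (illustrative only).
using System.Collections.Generic;

public static class SelectiveMovingCone
{
    // values[i] : economic value of block i
    // mined[i]  : set to true when block i is fixed inside the ultimate pit
    // coneOf(i) : indices of all blocks in the cone generated upward from block i
    public static void Run(double[] values, bool[] mined,
                           IEnumerable<int> positiveTopDown,
                           System.Func<int, IEnumerable<int>> coneOf)
    {
        foreach (int p in positiveTopDown)
        {
            if (mined[p]) continue;                 // already removed by an earlier cone

            double coneValue = values[p];           // start from the positive block at the base
            var coneBlocks = new List<int> { p };

            foreach (int j in coneOf(p))
            {
                if (mined[j]) continue;             // already in the pit, its cost is already paid
                if (values[j] > 0)
                {   // unmined positive block: removed with the cone, but its value is not summed
                    coneBlocks.Add(j);
                    continue;
                }
                coneValue += values[j];             // add the negative block value
                coneBlocks.Add(j);
                if (coneValue <= 0) break;          // cone cannot pay for itself; skip it
            }

            if (coneValue > 0)                      // all negatives included and still positive
                foreach (int j in coneBlocks) mined[j] = true;   // definitely in the ultimate pit
        }
    }
}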
Since the selective moving cone method does not determine the optimum ultimate
pit, this method is very fast. The blocks that are removed by this method are definitely in
the ultimate pit limit. From the experiments, it reduces the problem size (the number of
positive blocks) by approximately 30-80%.
5.7. Computational Experience
The revised maximum flow algorithm is implemented by using Microsoft Visual
C# and IBM ILOG CPLEX 12.1. The program was tested on several case studies from
realistic mining projects (see Section 5.7.3). The ultimate pit limits obtained by the
computer program for the revised maximum flow algorithm are compared with the ultimate
pit limits obtained by two available commercial software packages that employ the Lerchs-
Grossmann algorithm.
5.7.1. Computer Programs for the Lerchs-Grossmann Algorithm
There are two commercial mining software packages that were available for solving
the ultimate pit limits in this thesis. We have access to them for academic use. Both
programs are well known and have been used in the industry for many years. They
implement the Lerchs-Grossmann algorithm and their confidential techniques to solve the
problem faster.
The first software (Software-1) was first developed in the 1980s by implementing the
Lerchs-Grossmann algorithm. The version used was released in 1993. This software is
used to find ultimate pit limits based on the Lerchs-Grossmann algorithm and can be used
for phase design using the parameterization concept (nested pits).
The other software (Software-2) is the mining software package which contains a
number of engineering tools, such as, 3-D visualization, pit design, phase design, and
production scheduling. It has two options to solve the ultimate pit limits which are the
moving cone method and the Lerchs-Grossmann algorithm. For comparison studies in this
thesis, the latest version released in 2010 was used.
5.7.2. Computer Program for the Revised Maximum Flow Algorithm
The revised formulation of the maximum flow problem was solved using Microsoft
Visual C# and IBM ILOG CPLEX 12. The program which was developed using this
solver requires input parameters such as the block dimension (cell size) and slope angle. It
reads all block models that are exported from Software-2 or any mining software packages
so that it does not need any geometry objects or topography surfaces used by those mining
software packages. The required input parameters and block model file are shown in
Figure 5.13.
Figure 5.13 Required input parameters for the revised maximum flow program
The program handles the pit slope angle by measuring the horizontal and vertical
distances from the centroid of a given block to the centroid of another block. This is
similar to what the commercial software packages use. In the case studies, a 45 degree pit
slope was used for demonstration but the program can handle any realistic pit slope angles
and can be further developed to handle more complex pit slopes.
5.7.3. Case Studies
There are three case studies that were used to demonstrate the application of the
revised maximum flow algorithm. All design parameters are shown in Table 5.1. Cell size
is the block dimensions. Number of columns is the number of blocks in the East-West
direction. Number of rows is the number of blocks in the North-South direction.
Table 5.1 Block models for three case studies
Block model 1 2 3
Name KD PH4D Model45
Number of blocks 14,153 40,947 2,140,342
Cell Size 20*20*15 m 50*50*20 ft 25*25*20 ft
Number of columns 78 62 140
Number of rows 42 69 296
Number of benches 19 65 70
Slope angle 45° 45° 45°
• Block Model-1
Block model-1 is a small model from a copper mine in Arizona which contains
14,153 blocks total. The ultimate pit obtained from the revised maximum flow program
has 12,163 blocks, 190.7M tons, and an undiscounted value of $652M. The results
obtained from the revised maximum flow program, Software-1, and Software-2 are shown
in Table 5.2.
This block model demonstrates how the 3D pit slope is handled in each program.
The results did not indicate any differences between the ultimate pits obtained by the three
programs. The visualization of the ultimate pit from Block model-1 is shown in the
• Block Model-2
Block model-2 is from a gold-copper mine in Nevada. There are 40,947 blocks in
the model. The ultimate pit obtained from the revised maximum flow program has 28,862
blocks, 121.1M tons, and a value of $292.4M. The ultimate pit obtained from Software-2
is the same as the result from the revised maximum flow program. But there are some
differences between the ultimate pits obtained from the revised maximum flow program
and Software-1. The details are shown in Table 5.3.
Table 5.3 Ultimate pit limits comparison for Block model-2
All blocks: 40,947. The Software-2 result is the same as that of the revised maximum flow program.

                 Revised maximum flow   Software-1     Difference (revised max flow - Software-1)
Total blocks     28,862                 29,002         -140
Ore blocks       17,538                 17,543         -5
Total tons       121,123,562            121,711,092    -587,530
Ore tons         74,914,143             74,935,501     -21,358
Waste tons       46,209,419             46,775,591     -566,172
Total value ($)  292,433,456            292,095,902    337,554
The revised maximum flow program and Software-2 find the same pit with the
highest value. Software-1 provides a slightly larger pit that has about 588k tons more, but its total
value is $338k less. For this case study, the revised maximum flow program and Software-
2 provide the true optimum ultimate pit while Software-1 seems to have the shortcoming of
over mining.
A group of blocks that are over mined by Software-1 is displayed in black in Figure
5.16. All 140 over mined blocks, of which 5 are ore (highlighted in different colors), are
shown in Figure 5.18. All waste blocks are displayed in gray while the ore blocks are
highlighted in red, yellow, and blue. The total value of all the over mined ore blocks is
$434,538 but the total mining cost of all waste blocks is $772,092. Instead of making a
profit, it costs $337,554 to mine those extra blocks. The top view and cross section view
are shown in Figure 5.17.
The technique that the commercial Software-1 uses to speed up its solution time
appears to be proprietary. Though this software mainly uses the Lerchs-Grossmann
algorithm, it may provide a suboptimal solution to the ultimate pit limit problem. For this
case, it obviously has the shortcoming of over mining.
Figure 5.16 3D view of Block model-2 showing the ultimate pit limit obtained by using the
revised maximum flow algorithm and a group of over mined blocks (black) obtained by
using Software-1
The group of blocks missing from the pit obtained using Software-2 is displayed in
black in Figure 5.19. All 5,598 missing blocks, of which 333 are ore (highlighted in
different colors), are shown in Figure 5.20. The top view and cross section views are
shown in Figure 5.21 and Figure 5.22.
Table 5.4 Ultimate pit limits comparison for Block model-3
All blocks: 2,140,342. Software-1 result: not applicable.

                 Revised maximum flow   Software-2      Difference (revised max flow - Software-2)
Total blocks     112,687                107,089         5,598
Ore blocks       32,707                 32,374          333
Total tons       113,001,050            107,387,449     5,613,601
Ore tons         32,621,109             32,288,983      332,126
Waste tons       80,379,941             75,098,466      5,281,475
Total value ($)  1,492,897,346          1,491,890,287   1,007,059
Figure 5.20 shows all 5,598 blocks that Software-2 does not mine. This set of
blocks contains 5.6M tons and its total value is $1M. The result was verified by the
software developer, but its shortcomings are not explained. However, it is believed that the shortcoming of missing the combination of profitable blocks in this commercial software package may come from other techniques it uses besides the Lerchs-Grossmann algorithm.
5.7.4. Solution Time Comparison
An Asus N61J laptop with an Intel i7-720QM 1.6 GHz CPU and 4 GB of RAM was used to
run the revised maximum flow algorithm program developed for this thesis and the latest
available commercial software package (Software-2) that employs the Lerchs-Grossmann
algorithm. Three different block models were solved multiple times to find the average
solution times. The average solution time comparison of both programs for each block
model is shown in Table 5.5.
Table 5.5 Average solution time comparison between the revised maximum flow
program and the commercial software
                              Average solution time (seconds)
                              Block Model-1   Block Model-2   Block Model-3
Revised Maximum Flow          1               7               46
Commercial Software           2               13              83
Revised Maximum Flow, percent faster than Software-2:
                              50%             46%             45%
The solution time of the revised maximum flow algorithm is 45-50% faster than the
Lerchs-Grossmann algorithm implemented by the latest available commercial software
(2010 version). The revised maximum flow algorithm also guarantees the optimal solution
to the ultimate pit limit problems in all case studies. Because the commercial software
package uses some techniques to decrease the solution time of its Lerchs-Grossmann
algorithm, the optimality of ultimate pit limit solution may not be achieved. One of the
techniques used by the commercial software is cross section bounding (Barnes & Johnson,
1982). This bounding technique employs the 2D Lerchs-Grossmann algorithm to find the
ultimate pit for each cross section and uses the union of 2D pits as the approximate pit
before solving the 3D pit limit problem. However, this technique may eliminate the
CHAPTER 6
NEW PHASE DESIGN ALGORITHM
6.1. Introduction
After the ultimate pit is determined by using the revised maximum flow algorithm,
phase design is determined based on some reasonable assumptions. All blocks in the
ultimate pit must be mined during the phase design stage. However, it may be found later
during the production scheduling stage that some blocks are not extracted due to some
blending requirements or lack of mining or mill capacity. Phase design is often used as a
guide for production scheduling, so it should provide a guide to the best possible mining
sequence. Although the phase design procedure described here can handle multiple
destinations, the destination of each block is predetermined by using either its maximum
value or cutoff grade. However there is no destination allowed for stockpiling. Therefore,
the stockpile option will be handled during the production scheduling stage.
Based on the above assumptions, the objective of the phase design MILP model is
to find the mining sequence that would provide the highest NPV of a given project.
Although the phase design MILP model is less constrained than the production scheduling
problem, it is still difficult to solve. Therefore, it requires a customized solution algorithm
newly developed in this thesis to solve a phase design problem in order to obtain a solution
within a reasonable amount of time. The new phase design algorithm uses the combination
of various solution strategies to reduce the solution time of a large MILP model. The
solution strategies include the following.
• Applicability of the revised maximum network flow formulation.
• Lagrangian relaxation.
• Variable reduction.
• Constraint aggregation.
• Strengthened sequencing constraints.
• Iterative algorithm based on relaxed LP formulation and solutions.
The variable reduction and constraint aggregation techniques (time period aggregation) are described as follows. Let t be the number of phases (time periods). If it is desired to find the set of blocks that are to be mined in phase t, the variables and constraints in phase 1 to phase t-1 can be aggregated. This results in a two-time period problem. Based on the fact that all blocks in the ultimate pit must be mined, the two-time period problem can be replaced by a single-time period problem. That is, once the blocks to be mined in periods 1 to t-1 are determined, the blocks to be mined in period t are also obtained (see the example in Section 6.4). Thus the multi-period problem is solved backward from the last time period to the first period.
A single-time period problem with a set of sequencing constraints and a capacity constraint becomes the ultimate pit limit problem once the capacity constraint is removed by using Lagrangian relaxation. The resulting ultimate pit limit problem is solved by using the revised maximum
flow algorithm. In order to satisfy the capacity constraint and/or blending constraints, the
Lagrange multipliers are adjusted. However, the gap problem may still exist such that the
required tonnage cannot be met. The solutions from the Lagrangian relaxation technique
are used as the revised maximum flow nested pits.
The single-time period LP model also uses variable reduction and constraint
aggregation techniques. Each of the LP models is formulated and solved backward from
the last period to the first period. The blocks to be considered for the LP model are the
blocks that lie between the last determined base pit and the last generated pit shell. For
example, if the base pit has been determined for phase f-3, all blocks to be mined in phases
t, /-I, t-2, and t-2> have been determined by method described in the previous paragraph.
The next step to determine phase f-4 blocks has resulted in a gap problem. The blocks that
are not included in phases /,/-!, t-2, t-3, and M must be delineated as their prior phase. It
should be noted here that the tonnage of blocks in the pre-determined t-4 phase is less than
the required phase tonnage; thus these blocks will be mined in phase 1, otherwise the next
iteration will be executed.
Linear Programming is used to determine the phase designation for all blocks. The
LP formulation consists of a set of strengthened sequencing constraints which is the
minimum set of the sequencing constraints required for the pit slope. The strengthened
sequencing constraints are generated by using the same structure as the revised maximum
flow algorithm, as described through its dual. The LP formulation allows for fractional
solutions which means that some blocks can be partially mined, but this condition is
resolved as stated below or in the algorithmic steps and flowcharts that follow.
6.2. Steps of Algorithm
The algorithm consists of two parts. One is the revised maximum flow nested pits
algorithm with Lagrange multipliers and the other is a Linear Programming phase
determination algorithm. The general steps of determining phases using the revised
maximum flow algorithm with Lagrange multipliers can be written as follows (a control-flow sketch in C# is given after the list):
0. Read the ultimate pit and use it as a base pit (pit 0)
1. Determine phase size based on mining or milling capacity.
2. Assign the initial Lagrange multiplier for the tonnage capacity constraint (see
Section 6.5 for the determination of the initial multiplier value) and set the
Lagrange multipliers of the blending constraints = 0.
3. Modify block values by using all the Lagrange multipliers.
4. Setup and solve the revised maximum flow formulation to find a pit shell based
on the modified block values from step 3.
5. If the difference in tonnage between the base pit and the pit shell from step 4 is less
than the required phase size or the gap problem exists (see Section 6.5), GO TO
step 6. Otherwise, adjust the Lagrange multiplier for the capacity constraint
(see Section 6.5) and GO TO step 3.
6. If all blending constraints are met, store the pit shell and GO TO step 7.
Otherwise, adjust the Lagrange multipliers for the blending constraints (see
Section 6.6) and GO TO step 3.
7. If the current pit shell is larger than the phase size, use this pit shell as a base pit
and GO TO step 2. Otherwise, EXIT TO the Linear Programming phase
determination algorithm.
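The control flow of these steps can be sketched in C# as follows. The max-flow solve, the multiplier updates, the blending checks, and the gap test are passed in as delegates because they are described in Sections 6.5 and 6.6; everything named here is illustrative, not the actual CSM Phase Design code.

// Control-flow sketch of the revised maximum flow nested pits algorithm
// with Lagrange multipliers (illustrative only).
using System;
using System.Collections.Generic;

public sealed class NestedPitLoop
{
    // A "pit" is the set of block indices it contains plus its tonnage.
    public sealed class Pit
    {
        public HashSet<int> Blocks = new HashSet<int>();
        public double Tons;
    }

    public static List<Pit> Run(
        Pit ultimatePit,                                      // step 0: base pit = ultimate pit
        double phaseTons,                                     // step 1: phase size
        double initialLambda,                                 // step 2: initial tonnage multiplier
        int blendCount,
        Func<double, double[], Pit> solveRelaxedMaxFlow,      // steps 3-4: modify values, solve max flow
        Func<double, double, double, double> adjustLambda,    // step 5: (lambda, increment, target) -> new lambda
        Func<Pit, Pit, double[], bool> blendingOk,            // step 6: check blending requirements
        Action<Pit, Pit, double[]> adjustBlending,            // step 6: update blending multipliers
        Func<Pit, bool> gapDetected)
    {
        var pits = new List<Pit>();
        var basePit = ultimatePit;

        while (true)
        {
            double lambda = initialLambda;                    // step 2 (re-initialized for each pit)
            var blendLambda = new double[blendCount];         // blending multipliers start at zero
            Pit shell;

            while (true)
            {
                shell = solveRelaxedMaxFlow(lambda, blendLambda);         // steps 3-4
                double increment = basePit.Tons - shell.Tons;

                if (increment >= phaseTons && !gapDetected(shell))
                {   // step 5: increment still too large, adjust the tonnage multiplier
                    lambda = adjustLambda(lambda, increment, phaseTons);
                    continue;
                }
                if (!blendingOk(basePit, shell, blendLambda))
                {   // step 6: adjust blending multipliers and re-solve
                    adjustBlending(basePit, shell, blendLambda);
                    continue;
                }
                break;                                                    // shell accepted and stored
            }

            pits.Add(shell);
            if (shell.Tons > phaseTons) basePit = shell;      // step 7: continue with the inner pit
            else return pits;                                 // exit to the LP phase determination stage
        }
    }
}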
From the revised maximum flow nested pits algorithm with Lagrange multipliers,
the general steps of the Linear Programming phase determination algorithm can be written
as follows (a C# sketch of the backward loop follows the list):
0. Read all previously determined pit shells
1. Determine the number of phases (time periods) and set t = this number of
phases.
2. Set up a single time period problem by aggregating variables and constraints in
phases 1 to t-1.
3. Find the lower bound and upper bound pit shells. Formulate and solve the LP
formulation (x_i = 1 if block i is mined, 0 otherwise) for unassigned blocks that
lie between the two pit shells.
4. For blocks that are not mined in the aggregate phases 1 to t-1 (x_i = 0), GO TO
step 7.
5. For blocks that are mined in the aggregate phases 1 to t-1 (x_i = 1), GO TO step
8.
6. For blocks that may be mined in the aggregate phases 1 to t-1 (0 < x_i < 1), check
the capacity constraint. If including them in phase t gets closer to the
tonnage requirement, GO TO step 7. Otherwise GO TO step 8.
7. Blocks are assigned to be mined in phase t.
8. Blocks are marked as unassigned blocks.
9. If t = 2, the unassigned blocks belong to phase 1. Otherwise, set t = t - 1 and GO
TO step 2.
10. Report phase design and END the algorithm.
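A C# sketch of the backward loop over phases is given below. The single-period LP solve (steps 2-3) and the tonnage tie-break for fractional blocks (step 6) are passed in as delegates, since they depend on the CPLEX model built for each iteration; all names are illustrative rather than the thesis implementation.

// Control-flow sketch of the LP phase determination algorithm (illustrative only).
using System;
using System.Collections.Generic;

public static class LpPhaseDetermination
{
    public static int[] Run(
        int blockCount,
        int numPhases,
        Func<int, ISet<int>, IDictionary<int, double>> solveSinglePeriodLp,   // (phase, unassigned) -> x values
        Func<int, IDictionary<int, double>, ISet<int>> pickFractionalBlocks)  // step 6 tonnage tie-break
    {
        var phaseOf = new int[blockCount];                   // 0 = not yet assigned
        var unassigned = new HashSet<int>();
        for (int b = 0; b < blockCount; b++) unassigned.Add(b);

        for (int t = numPhases; t >= 2; t--)                 // steps 1-2: backward over phases
        {
            var x = solveSinglePeriodLp(t, unassigned);      // step 3: LP between the bound pit shells

            var toPhaseT = new HashSet<int>();
            foreach (var kv in x)
            {
                if (kv.Value <= 1e-6) toPhaseT.Add(kv.Key);  // step 4: x_i = 0 -> mined in phase t
                // step 5: x_i = 1 -> stays unassigned (mined in an earlier phase)
            }
            foreach (int b in pickFractionalBlocks(t, x))    // step 6: fractional blocks, capacity check
                toPhaseT.Add(b);

            foreach (int b in toPhaseT)                      // steps 7-8
            {
                phaseOf[b] = t;
                unassigned.Remove(b);
            }
        }

        foreach (int b in unassigned) phaseOf[b] = 1;        // step 9: the remainder is phase 1
        return phaseOf;                                      // step 10: report the phase design
    }
}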
6.3. Algorithm Flowcharts
The steps of the new phase design algorithm are summarized in two flowcharts.
The flowchart of the revised maximum flow nested pits algorithm with Lagrange
multipliers is shown in Figure 6.1. The flowchart of the LP phase determination algorithm
is shown in Figure 6.2.
There are three loops in the revised maximum flow nested pits algorithm flowchart.
It is necessary to note that a loop to adjust Lagrange multipliers for blending constraints
can be infinite if a feasible solution does not exist, i.e., all blending requirements are not
satisfied. This condition can be checked prior to solving the revised maximum flow nested
pits algorithm. For example, suppose a processing plant requires the average Organic Carbon content of mill feed to be less than 0.6%, but the average Organic Carbon content of all ore in the ultimate pit is 0.9%. If nothing is left in the ground, this requirement cannot be satisfied and the problem is infeasible. To avoid an infinite loop, the blending requirements must be checked for feasibility before the algorithm enters the loop.
The flowchart of the LP phase determination algorithm has only one loop. The
algorithm determines the number of iterations in this loop. The number of iterations is
calculated as the number of phases - 1. For example, if there are 6 phases in the ultimate
pit limit, 5 iterations will be solved. The first iteration provides the solution for phase 6
then the second iteration yields the solution for phase 5, etc. Finally, the last iteration
provides the solution which is used to find both phases 1 and 2.
6.4. Example of the Algorithm
The following example demonstrates how the new phase design algorithm would
work on a small 2D block model with the block values as shown in Figure 6.3 and with a
mining capacity of 4 blocks per year. The ultimate pit limit is solved using the revised
maximum flow algorithm and contains 24 blocks.
-1  -1  -1  -1  -1  -1  -1  -1  -1
 2   2   5  -1   4   4   4
 1   3  -1  -1   8
 4   2   9
Figure 6.3 Ultimate pit limit of an example showing block values
Start the revised maximum flow nested pits algorithm with Lagrange multipliers.
For this example, the mining capacity constraint is relaxed and is moved to the objective
function. The block values are modified (see Section 6.5) by using a Lagrange multiplier (λ = 1.5). The pit obtained from the modified block values is shown in Figure 6.4.
Modified coefficients, λ = 1.5:
-2.5 -2.5 -2.5 -2.5 -2.5 -2.5 -2.5 -2.5 -2.5
 0.5  0.5  3.5 -2.5  2.5  2.5  2.5
-0.5  1.5 -2.5 -2.5  6.5
 2.5  0.5  7.5
Maximum modified objective value = 1.5
Solution (1 = mined, 0 = not mined):
0 0 0 0 1 1 1 1 1
0 0 0 0 1 1 1
0 0 0 0 1
0 0 0
Figure 6.4 Block values modified by using the Lagrange multiplier of 1.5
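The modified values in Figure 6.4 come from moving the relaxed tonnage constraint into the objective: each block value is penalized by the multiplier times the block tonnage, and in this example every block counts as one tonnage unit, so the penalty is simply 1.5 per block. A minimal sketch of that modification:

// Sketch of the Lagrangian block value modification (illustrative only).
public static class LagrangianModification
{
    // modified value of block i = original value - lambda * tonnage of block i
    public static double[] ModifiedValues(double[] blockValues, double[] blockTons, double lambda)
    {
        var modified = new double[blockValues.Length];
        for (int i = 0; i < blockValues.Length; i++)
            modified[i] = blockValues[i] - lambda * blockTons[i];
        return modified;
    }
}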
Using the modified coefficients, the most profitable pit contains 9 blocks and is the solution of the MILP problem for a mining capacity of 9 blocks. By changing the multipliers, we can get a different size pit, but we may not be able to achieve the exact pit size due to the gap problem. However, each solution of the Lagrangian relaxation problem is the optimal solution for its
relaxed constraints. For this example, the gap problem exists and two pit shells are
obtained, as shown in Figure 6.5.
Figure 6.5 Pit shells (Pit-0 and Pit-1) obtained from the revised maximum flow algorithm
Now, execute the LP phase determination algorithm. Formulate and solve the
problem as a single-period problem using the information obtained from the revised
maximum flow nested pits algorithm with Lagrange multipliers. There are 6 phases that
have to be found in the ultimate pit based on 4 blocks per period (phase). Therefore, there
are 5 iterations that need to be executed. Let Xb be a decision variable where Xb = 1 if
block b is mined and 0 otherwise. The decision variable of each block is shown in Figure
6.6.
x_w1  x_w2  x_w3  x_w4  x_w5  x_w6  x_w7  x_w8  x_w9
x_p1  x_p2  x_p3  x_w10 x_p4  x_p5  x_p6
x_p7  x_p8  x_w11 x_w12 x_p9
x_p10 x_p11 x_p12
Figure 6.6 Ultimate pit limit of an example showing the decision variable for each block
The first iteration solves the last period, which is phase 6. The region between pit-0
and pit-1 that contains 15 blocks is considered. The objective is to find a set of 11 blocks
that maximizes the profit while satisfying slope constraints, as shown in Figure 6.7. In this
iteration, the formulation is given for demonstration purposes.
Maximize  -x_w1 - x_w2 - x_w3 - x_w4 - x_w10 - x_w11 - x_w12
          + 2x_p1 + 2x_p2 + 5x_p3 + x_p7 + 3x_p8 + 4x_p10 + 2x_p11 + 9x_p12

Subject to (sequencing constraints)
    x_w1 - x_p1  >= 0
    x_w2 - x_p1  >= 0
    x_w3 - x_p1  >= 0
    x_w2 - x_p2  >= 0
    x_w3 - x_p2  >= 0
    x_w4 - x_p2  >= 0
    x_w3 - x_p3  >= 0
    x_w4 - x_p3  >= 0
    x_p1 - x_p7  >= 0
    x_p2 - x_p7  >= 0
    x_p3 - x_p7  >= 0
    ...
    x_w12 - x_p12 >= 0

(capacity constraint: at most 11 blocks)
    x_w1 + x_w2 + x_w3 + x_w4 + x_w10 + x_w11 + x_w12
    + x_p1 + x_p2 + x_p3 + x_p7 + x_p8 + x_p10 + x_p11 + x_p12 <= 11

    0 <= x <= 1

Optimum solution (objective = 15):
    x_p1 = 0, x_p2 = 1, x_p3 = 1, x_p7 = 0, x_p8 = 1, x_p10 = 0, x_p11 = 1, x_p12 = 1
    x_w1 = 0, x_w2 = 1, x_w3 = 1, x_w4 = 1, x_w10 = 1, x_w11 = 1, x_w12 = 1

Figure 6.7 The LP formulation in the first iteration of the LP phase determination algorithm
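For illustration, the following C# sketch shows how a single-period LP of this form could be set up and solved with the ILOG Concert .NET API, which the CSM Phase Design program uses through IBM CPLEX. Variable names follow Figure 6.6; only a few of the sequencing constraints are written out, and the sketch is not the program's actual code.

// Sketch (assuming the ILOG Concert .NET API) of the first-iteration LP of Figure 6.7.
using ILOG.Concert;
using ILOG.CPLEX;

public static class FirstIterationLp
{
    public static void Main()
    {
        Cplex cplex = new Cplex();

        // 0 <= x <= 1 continuous variables for the 15 blocks between pit-0 and pit-1.
        string[] names = { "xw1","xw2","xw3","xw4","xw10","xw11","xw12",
                           "xp1","xp2","xp3","xp7","xp8","xp10","xp11","xp12" };
        double[] value = { -1,-1,-1,-1,-1,-1,-1,  2, 2, 5, 1, 3, 4, 2, 9 };
        INumVar[] x = new INumVar[names.Length];
        ILinearNumExpr obj = cplex.LinearNumExpr();
        ILinearNumExpr cap = cplex.LinearNumExpr();
        for (int i = 0; i < names.Length; i++)
        {
            x[i] = cplex.NumVar(0.0, 1.0, names[i]);
            obj.AddTerm(value[i], x[i]);
            cap.AddTerm(1.0, x[i]);
        }
        cplex.AddMaximize(obj);
        cplex.AddLe(cap, 11.0);                         // capacity: at most 11 blocks

        // A few of the sequencing (slope) constraints, e.g. xw1 - xp1 >= 0.
        cplex.AddGe(cplex.Diff(x[0], x[7]), 0.0);       // xw1 >= xp1
        cplex.AddGe(cplex.Diff(x[1], x[7]), 0.0);       // xw2 >= xp1
        cplex.AddGe(cplex.Diff(x[2], x[7]), 0.0);       // xw3 >= xp1
        // ... the remaining arcs from Figure 6.7 would be added in the same way.

        if (cplex.Solve())
            for (int i = 0; i < x.Length; i++)
                System.Console.WriteLine(names[i] + " = " + cplex.GetValue(x[i]));

        cplex.End();
    }
}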
The solution of the first iteration is found using single-period LP formulation with
the tonnage constraint requiring 11 blocks. There are 4 blocks left. The corresponding
variables of the 4 blocks have values of zero, which are x_w1, x_p1, x_p7, and x_p10. They
are determined to be in phase 6 as shown in Figure 6.8.
Phase 6
Figure 6.8 Iteration-1 of the LP phase determination algorithm
Iteration-2 solves phase 5. The region between pit-0 and pit-1 is considered but
phase 6 is already found. Therefore, there are 11 blocks to be considered. The objective is
to find a set of 7 blocks that maximizes the profit; then, 4 blocks are left. They are
determined to be in phase 5 as shown in Figure 6.9.
Phase 5
Figure 6.9 Iteration-2 of the LP phase determination algorithm
Iteration-3 solves phase 4. The region between pit-0 and pit-1 is still considered but
phases 5 and 6 are removed. Therefore, there are 7 blocks to be considered. The objective
is to find a set of 3 blocks that maximizes the profit. All 4 blocks that are left must be in
phase 4 as shown in Figure 6.10.
Phase 4
Figure 6.10 Iteration-3 of the LP phase determination algorithm
Iteration-4 solves phase 3. There are 3 blocks left in pit-1, so all blocks in pit-0 and
all blocks left in pit-1 are considered. Therefore, there are 12 blocks to be considered. The
objective is to find a set of 8 blocks that maximizes the profit. All 4 blocks that are left
must be in phase 3 as shown in Figure 6.11.
Phase 3
Phase 3
Figure 6.11 Iteration-4 of the LP phase determination algorithm
The last iteration (Iteration-5) solves phase 2 but the solution for phase 1 is also
obtained in this iteration. There are 8 blocks left in pit-1. The objective is to find a set of 4
blocks that maximizes the profit. All 4 blocks that are left must be in phase 2 and a set of 4
blocks that maximizes the profit is determined to be in phase 1, as shown in Figure 6.12.
Note that multiple solutions exist in this iteration but it does not matter which solution is
used.
Using the new phase design algorithm, the cash flows and NPV (15% discount rate)
are summarized in Table 6.1. This is the optimum solution to the problem and it is the
same as the solution obtained by solving it as the multi-period problem. The maximum
NPV is $21.65. All phases obtained in the iterations are shown together in Figure 6.13.
Figure 6.15 Phases obtained using a period-by-period basis (4 blocks/period, 6 periods)
Using a period-by-period basis, the cash flows and NPV (15% discount rate) are
summarized in Table 6.2. The NPV is $19.18 which is less than the optimum NPV
obtained by the new phase design algorithm. It starts to mine at the highest profit in the
first period, so it starts at a positive block that has value of $5 which gives a net profit of
$2. But it is the wrong location and this solution will miss the opportunity to mine more
profitable blocks in the next period and delay the higher cash flows to later periods.
Table 6.2 Cash flows and NPV of a 2D example obtained by a period-by-period basis
Period Cash Flow ($) NPV ($)
1 2 1.74
2 2 1.51
3 3 1.97
4 6 3.43
5 9 4.47
6 14 6.05
SUM 36 19.18
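The NPVs in Table 6.2 follow from straightforward end-of-period discounting at 15%. A small C# check of the period-by-period schedule:

// Reproducing the discounted cash flows of Table 6.2 (illustrative only).
using System;

public static class NpvCheck
{
    public static void Main()
    {
        double[] cashFlows = { 2, 2, 3, 6, 9, 14 };   // periods 1..6, period-by-period schedule
        double rate = 0.15, npv = 0.0;
        for (int t = 0; t < cashFlows.Length; t++)
        {
            double discounted = cashFlows[t] / Math.Pow(1.0 + rate, t + 1);
            Console.WriteLine("Period {0}: {1:F2}", t + 1, discounted);
            npv += discounted;
        }
        Console.WriteLine("NPV = {0:F2}", npv);        // about 19.18, as in Table 6.2
    }
}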
There are many ways to use block aggregation techniques. Therefore, there can be
multiple solutions to the problem, i.e., blocks can be aggregated into different aggregate
units. For this example, the Fundamental Tree algorithm is used (Ramazan, 2001). The
trees (aggregate blocks) are constructed based on the priority (a block with higher value has
higher priority) of the positive block in the same bench. The algorithm selects the positive
block that has the value of $5 and the three blocks above it into the same aggregate unit
(FT-3). The aggregate blocks obtained by using the Fundamental Tree algorithm are
shown in Figure 6.16. There are a total of 12 trees in the model.
Figure 6.16 Example showing the 12 aggregate blocks (FT-1 to FT-12) obtained by the Fundamental Tree algorithm
The solution obtained by using the block aggregation technique is not optimum and is no better than the solution obtained on a period-by-period basis, because the slope requirement is enforced only between aggregate blocks; the aggregate block FT-3 contains the positive block with a value of $5 and the three blocks above it. It is not possible to mine any other
aggregate blocks if FT-3 is not previously mined, e.g., FT-2 cannot be mined before FT-3
and FT-4 cannot be mined before FT-3. Therefore, only FT-3 can be mined in the first
period. The solution of this problem obtained by using the block aggregation technique is
provided in Figure 6.17.
Using the Fundamental Tree algorithm, the cash flows and NPV (15% discount
rate) are summarized in Table 6.3. The NPV is $19.13, which is less than the NPV
obtained by the new phase design algorithm.
size is the ultimate pit limit if there is no penalty. As the penalty increases, the pit size
decreases but is not directly proportional (see Figure 6.18).
Figure 6.18 The relationship between the Lagrange multiplier and pit size (Elevli,
Dagdelen, & Salamon, 1989)
Technically, the total penalty that can result in zero pit size is equal to the highest
original economic block value because there is no block that has a positive modified
economic block value. The possible highest Lagrange multiplier that results in this
condition is the highest original economic block value divided by the block tonnage. Now
we know two points on the curve of the relationship between the Lagrange multiplier and
pit size. Unfortunately, the relationship is not linear and it is difficult to predict that
relationship with an equation. However, linear interpolation is used to determine the
Lagrange multiplier for each ultimate pit limit problem. The multipliers and pit sizes
obtained in all iterations are useful because they provide more information about the
relationship between the multipliers and pit sizes.
The desired pit size may not be obtained when a series of Lagrange multipliers are
applied. This is called the gap problem. If it does exist, e.g., the pit size does not change within three iterations, the current solution is accepted and the algorithm continues with the next desired pit size.
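A sketch of the interpolation step: given the (multiplier, ore tons) pairs collected so far, the next multiplier is linearly interpolated between the two points that bracket the target tonnage. The method below assumes the history already contains bracketing points (it is seeded with the multiplier of zero, which gives the ultimate pit, and the highest block value per ton, which gives an empty pit); it is illustrative only.

// Linear interpolation of the tonnage Lagrange multiplier (illustrative only).
using System.Collections.Generic;
using System.Linq;

public static class MultiplierInterpolation
{
    // history: multiplier -> ore tons of the pit it produced.
    public static double Next(IDictionary<double, double> history, double targetTons)
    {
        // Smallest pit that is still >= target (small multiplier side) and
        // largest pit that is < target (large multiplier side).
        var above = history.Where(kv => kv.Value >= targetTons).OrderBy(kv => kv.Value).First();
        var below = history.Where(kv => kv.Value < targetTons).OrderByDescending(kv => kv.Value).First();

        // Interpolate the multiplier between the two bracketing (tons, multiplier) points.
        double fraction = (targetTons - below.Value) / (above.Value - below.Value);
        return below.Key + fraction * (above.Key - below.Key);
    }
}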
In order to illustrate the relationship between the multipliers and pit sizes, the
example is shown in Figure 6.19. The ultimate pit contains 75M tons of ore and 121M tons
total. The mining capacity is 52.5M tons and the mill capacity is 12.5M tons. For this
case, the mill capacity is a bottleneck and is a binding constraint which controls the phase
size. Therefore, the number of phases is 6.
The first pit shell (Pit-1) should be 62.5M tons of ore (5 phases * 12.5M tons). The
multiplier that does not generate any pit (zero ton) is the highest original economic block
value divided by the block tonnage and the multiplier that provides the ultimate pit is zero.
To obtain 62.5M tons of ore, the initial multiplier is linearly interpolated and is 77.4256 in
the first iteration. However, it generates a pit that contains 4,914 tons of ore. In the next
iteration, the multiplier is linearly interpolated again using the information from the first
iteration and 138,465 tons of ore is obtained. It requires 7 iterations to get Pit-1 because of
the lack of initial information about the relationship between the Lagrange multiplier and
the pit size. Eventually, the multiplier of 1.2098 which provides the 64.8M tons pit is
obtained and this pit is considered to be Pit-1.
For the next pit shell (Pit-2), it is required to find a 50M tons pit size. To determine
the next multiplier, we can linearly interpolate between the multiplier of 2.4196 which
gives 45.6M tons of pit size (iteration 6) and the multiplier of 1.2098 which gives 64.8M
tons of pit size (iteration 7). The Lagrange multiplier of 1.8147 is used to obtain 51M tons
pit which is determined to be Pit-2.
For pits 2 to 5 (iterations 8 to 11) in this example, the information obtained from the
previous iterations (iterations 1 to 7) is used and each pit is obtained without trying too
many iterations. A total of 11 iterations is needed to get 5 different pit shells. Note
that Pit-0 is not included because it is the ultimate pit limit. Therefore, the more
information about the relationship between the Lagrange multiplier and pit size we get, the
more accurate the calculation of the multiplier will be. The relationship between the
Lagrange multiplier of the tonnage constraint and the pit size (million tons of ore) of this
example is shown in Figure 6.20.
6.6. Techniques to adjust Lagrange multipliers for blending constraints
Determining the Lagrange multipliers for the blending constraints is more difficult
than determining the multiplier for a single tonnage constraint because the relationship
between the Lagrange multipliers and their corresponding blending constraints is not
known. For the blending constraints, it is necessary to blend the material to meet the
requirements for both periods (what can be formulated as a single-time period problem)
which are the inner pit and outer pit. In addition to that, if there are minimum and
maximum average chemical content requirements, there will be two blending constraints
for each period. Therefore, it is possible to have four blending constraints for each
chemical content and four multipliers are required.
However, we can simplify this complexity by noting that all four constraints cannot be binding at the same time, e.g., if the minimum blending requirement is a
binding constraint, then the maximum blending requirement must not be a binding
constraint. The Lagrange multiplier for that constraint must be zero. Furthermore, the
average chemical content of the first period (inner pit) relates to the average chemical
content of the other period (outer pit), e.g., if the average chemical content of the first
period is higher than the overall average, then the average chemical content of the other
period is lower than the overall average. Using this relationship, all possible four blending
constraints for each of average chemical contents can be simplified by using a single
Lagrange multiplier for each of average chemical contents. This is done based on the
following techniques.
- Minimum and/or maximum blending requirements are replaced by a single average
chemical requirement, i.e., the average chemical content in the first period is equal
to the overall average chemical content which is a constant and can be removed
from the objective function.
- The Lagrange multiplier can be a positive or a negative number. A positive multiplier gives a penalty to the objective function coefficients, while a negative multiplier gives a reward to the objective function coefficients.
Now, a single Lagrange multiplier for each of average chemical contents simplifies
the problem. To adjust the multiplier, its direction (increase or decrease) and the step size
must be determined. For the direction, the multiplier increases when at least one of the
following conditions occurs.
The average chemical content of the first period is greater than the maximum
requirement.
The average chemical content of the other period is less than the minimum
requirement.
The multiplier decreases when at least one of the following conditions occurs.
The average chemical content of the first period is less than the minimum
requirement.
- The average chemical content of the other period is greater than the maximum
requirement.
We suggest that the initial step size be half of the average economic block value per ton, with the expectation that it is large enough to change the modified economic block values. Once the upper bound and lower bound of the multiplier are found, the step size becomes half of the difference between those two multipliers. However, it is possible that the step size is too small to make any changes to the modified economic block values. In this case, the step size is calculated using the difference between the previous multiplier and the multiplier in the penultimate iteration, so the step size increases in each iteration, as long as the direction does not change, until it hits either the upper bound or the lower bound.
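The direction and step-size rules above can be sketched as a small C# class. The growth of the step size when it is too small to change any modified block value is omitted for brevity, and all names are illustrative.

// Sketch of the blending multiplier update rules (illustrative only).
public sealed class BlendMultiplier
{
    public double Value;                   // current Lagrange multiplier (may be negative)
    public double? Lower, Upper;           // bounds found so far
    public double Step;                    // current step size

    public BlendMultiplier(double avgBlockValuePerTon)
    {
        Value = 0.0;
        Step = 0.5 * avgBlockValuePerTon;  // suggested initial step size
    }

    // firstAvg / otherAvg: average chemical content in the first and the other period.
    public void Update(double firstAvg, double otherAvg, double min, double max)
    {
        bool increase = firstAvg > max || otherAvg < min;   // penalize the chemical content
        bool decrease = firstAvg < min || otherAvg > max;   // reward the chemical content
        if (!increase && !decrease) return;                 // requirement met; leave the multiplier alone

        if (increase) Lower = Value;                        // current value overshoots from below
        else          Upper = Value;                        // current value overshoots from above

        if (Lower.HasValue && Upper.HasValue)
            Step = 0.5 * (Upper.Value - Lower.Value);       // bisect once the multiplier is bracketed

        Value += increase ? Step : -Step;
    }
}

Starting at zero with a step of 14.4274 and then bisecting to 7.2137 reproduces the first two moves of the CO3 multiplier in the example that follows.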
The Lagrange multipliers are updated independently in each iteration. However,
one multiplier can affect other constraints, so multiple multipliers may be adjusted at the
same time until all constraints are met. An example of adjusting the Lagrange multipliers
is shown in Figure 6.21 through Figure 6.23. The blending requirements at the Sulfide mill are
as follows.
- 3.75 < average CO3 < 5.3
- average Orgc (Organic Carbon) < 0.6
- 3 < average Ssulf (Sulfide) < 4.5
In iteration 18, after the tonnage constraint is met, the average CO3 in the first period is 4.9307 but it is 3.3962 in the other period, which violates the blending requirement. The Lagrange multiplier increases with the initial step size of 14.4274 (half of the average economic block value per ton) in iteration 19. The Lagrange multiplier is 14.4274 and is set to be an upper bound. The lower bound is zero. Then it violates the average CO3 of the first period, so the multiplier decreases to 7.2137 (the step size is half the difference between the upper bound and the lower bound) in iteration 20. By adjusting the multiplier of the average CO3, the average Ssulf also changes and violates the blending requirement. Therefore, both multipliers are adjusted iteratively.
Eventually, average CO3 values of 4.3294 for the first period and 4.1025 for the other period satisfy the requirement when the multiplier of 0.9017 is used. Average Ssulf values of 3.7762 for the
first period and 4.3563 for the other period also satisfy the Sulfide mill requirements using
the multiplier of 3.6068. Then, this pit shell is feasible and is used as one of the nested pit
shells. Note that the average Orgc constraint is not binding, so its multiplier is zero. All
Lagrange multipliers for tonnage and blending constraints are calculated automatically by a
computer program which employs the new phase design algorithm (see Chapter 7).
For multiple blending constraints, it may not be possible to obtain a desired phase
(pit) size during the revised maximum flow nested pits algorithm with the Lagrange
multipliers stage. Normally phases obtained by the revised maximum flow nested pits
algorithm with Lagrange multipliers are large. However, desired phases can be found
using the LP phase determination algorithm. Note that solutions of some variables
obtained from LP formulations are fractional, so some constraints may be slightly violated
when the solutions are interpreted as integers.
CHAPTER 7
THE APPLICATION OF THE NEW PHASE DESIGN ALGORITHM
7.1. Introduction
In this chapter, four case studies from different mining projects are used to
demonstrate the capability of the new phase design algorithm program. Since the annual
production schedules are a function of the phase designs, the NPV of the production
schedules will be used as a critical factor when comparing phase design methods. The
stockpile option is not considered in this study. The results derived from the new phase
design algorithm are compared with those of a commercially available mining software
package that employs the traditional approach. Annual production schedules were
developed from the results of each phase design method in the same manner.
For an unbiased comparison between the two phase design approaches, the same
ultimate pit must be used for both designs. Both phase design approaches are based on the
ultimate pit limit obtained by using the revised maximum flow algorithm program because
it provides the optimal solution for the ultimate pit limit problem.
7.2. A Computer Program for the Traditional Phase Design Algorithm
For the traditional phase design, the latest version of a commercially available
mining software package, MineSight Economic Planner - MSEP (2010) was chosen.
MSEP uses the traditional approach for phase design which is similar to the price
parameterization method. It finds a series of pits that approximates a series of phases. This
is accomplished by using a value factor, which decreases or increases block values, as a
design variable in conjunction with a pushback (phase) width and a minimum number of
blocks per pushback (phase). For example, a value factor of 0.2 would only use the
positive blocks with a value in the top 20% for the first pass. If not enough blocks are
found, the value factor is increased until the minimum number is exceeded, forming a pit
shell. The factor is simply multiplied by the value per block for all positive blocks. The
value for negative blocks is unchanged. When the option “Multp” in MSEP is used, the
minimum value factor of 0.2 and maximum value factor of 1 are normally used, as shown
in Figure 7.1.
Figure 7.1 MSEP user interface showing the input parameters for the traditional phase design using the "Multp" option
The phase size coming from the nested pit approach is difficult to control, so 50
nested pits (phases) are generated for each case study and some of them are combined such
that the influence of the size is minimal. Therefore, the phase design obtained from the
traditional approach is comparable in size to the phase design obtained from the new phase
design algorithm. The factors that make any differences between both phase designs would
be the location of each phase and their sequences.
7.3. A Computer Program for the New Phase Design Algorithm
The new phase design algorithm was programmed using C# language in Microsoft
Visual Studio 2008. It uses IBM CPLEX 12.1 to solve a series of LP formulations, so IBM
CPLEX must be installed prior to using this program. The program reads a block model
file which is dumped from any mining software package, e.g., Minesight3D. Note that
some mining software packages use different bench numbering systems. Some programs
count the number of benches from the top bench, e.g., MinesightSD and Whittle3D, but
some programs count it from the bottom bench, e.g., Newmont TSS. This is handled by
changing the bench direction in the program input panel. The program user interface is
shown in Figure 7.2.
Figure 7.2 CSM Phase Design user interface showing the input parameters for the new phase design algorithm
The new phase design algorithm in the CSM Phase Design program can handle a
realistic pit slope, multiple destinations, and multiple chemical contents in the block model.
It also handles mining capacity and destination capacity, e.g., mill capacity or leach pad
capacity. The minimum and maximum blending requirements for each chemical content in
each destination are available in this program. The capabilities of this program are not
offered by any commercial mining programs that use the traditional phase design method.
Therefore, the blending requirements are not considered when comparing the case studies.
After phases are generated by the program, each phase is reported by bench from
the highest bench to the lowest bench within the same phase. This report is used to develop
annual production schedules. For the production schedule, blocks with lower phase
number must be mined before or at the same time as block with the adjacent phase number
(i.e., Phase 2 must be mined ahead of or at the same time as Phase 3) and the upper bench
must be mined before the lower bench in the same phase.
NPVs and cash flows obtained from the traditional and the new phase design
approaches are compared. The new phase design algorithm is much slower than the traditional phase design method; since the solve times are not comparable, only the solve time of the new phase design algorithm is reported.
7.4. Realistic Case Studies
All case studies are from real world mining projects. One of them is completely
mined, so the full block model is available for this project. The other case studies are from
active mining projects. The new phase design method is compared to the traditional phase
design method for a copper mine in Arizona, a gold/copper mine in Nevada, and the
McLaughlin gold mine in California. The last case study is a gold mine in central Nevada
for which the original phase design was provided by a mining company. In this case, the
new phase design was compared to the original one. The details of all case studies are
listed in Table 7.1.
The ultimate pit is divided into 4 phases by using the traditional and the new phase
design approaches. The details in each phase are shown in Table 7.3 and Table 7.4. The
comparisons of both phase design methods in 3D view, top view, cross section views, and
long section view are shown in Figure 7.4 through Figure 7.8.
Using an annual milling capacity of 10M tons, the comparison of the production schedules
obtained from both phase design methods, in terms of cash flows and NPVs calculated at
a 15% discount rate, is shown in Table 7.5 and Table 7.6.
Table 7.2 Ultimate pit limit summary of a copper mine, AZ
All blocks 14,153
Total blocks in the ultimate pit 12,163
Ore blocks 5,905
Total tons 190,687,800
Ore tons 95,757,420
Waste tons 94,930,380
Total value ($) 652,084,472
Table 7.3 Traditional phase design of a copper mine, AZ showing tons, average grade,
and value in each phase
Phase Total tons Ore tons Avg.Cu Waste tons Value ($)
1 18,849,960 10,854,840 0.851 7,995,120 78,384,781
2 37,656,480 31,114,680 0.86 6,541,800 238,519,329
3 52,482,540 33,109,140 0.771 19,373,400 202,656,931
4 81,698,820 20,678,760 0.931 61,020,060 132,523,432
Sum 190,687,800 95,757,420 0.843 94,930,380 652,084,472
Table 7.19 New phase design of a gold mine, Carlin Trend, NV, showing total tons, tons
in each destination, and value in each phase
Phase Total tons Sulfide Mill tons Oxide Mill tons Oxide Leach tons Waste tons Value ($)
1 65,606,685 3,498,346 998,313 6,278,772 54,831,254 189,456,878
2 84,933,240 25,356,991 419,733 2,040,538 57,115,978 437,729,187
3 58,726,177 10,860,751 136,838 577,766 47,150,823 63,047,177
4 37,431,806 12,797,593 11,573 1,011,612 23,611,027 31,018,491
Sum 246,697,908 52,513,682 1,566,456 9,908,688 182,709,083 721,251,733
Using the original and new phase designs, the production schedules were generated
based on 68M tons of annual mining capacity, 3.61M tons of annual Sulfide mill capacity,
and 1M tons of annual Oxide mill capacity. The mine life is 15 years. The comparison of both
phase design methods in terms of cash flows and NPVs based on a 10% discount rate are
shown in Table 7.20 and Table 7.21. Note that the 10% discount rate was given by the
mining company and is different from what was used in the other projects.
The total undiscounted value of this project is $721.3M. Using a 10% discount
rate, the NPV obtained from the schedule based on the original phase design is $304.3M.
The NPV obtained from the schedule based on the new phase design is $440.7M, which is
$136.4M (45%) higher than the schedule based on the original design. Note that
the NPV difference between the new phase design and the original design increases when
the discount rate increases. The solve time using the new phase design program was 27
minutes.
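For reference, the NPVs compared in Table 7.20 and Table 7.21 follow the standard discounted cash flow relation, where CF_t denotes the cash flow of year t, d the discount rate, and N the mine life:

```latex
\mathrm{NPV} = \sum_{t=1}^{N} \frac{CF_t}{(1+d)^{t}}
```

Because the schedule based on the new phase design recovers more of the project value in earlier years, its cash flows are divided by smaller (1+d)^t factors, which is consistent with the observation above that the NPV gap widens as the discount rate increases.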
The new phase design provides a much higher NPV, but it is more difficult to use in
the actual operation than the original phase design because it contains some areas that are
too small for some equipment to operate in. However, the new phase design can serve as
a guideline for a more practical manual design. The manual design
CHAPTER 8
CONCLUSIONS AND FUTURE WORK
The purpose of this thesis is to develop a new phase design algorithm that can
generate phases to be used as the basis for obtaining production schedules that improve the
NPV of a given open pit mining project. As part of the new algorithm, the revised
maximum flow algorithm was developed to solve the ultimate pit problems. Currently, the
revised maximum flow algorithm solves the ultimate pit limit problem faster than the
algorithms in available commercial software packages and provides the true
optimal solution. The solution time of the revised maximum flow algorithm is 45-50% less
than the Lerchs-Grossmann algorithm implemented by the latest available commercial
software (2010 version). The revised maximum flow algorithm also guarantees the optimal
solution to ultimate pit limit problems in all case studies. Because the commercial software
package uses some techniques to reduce the solution time of its Lerchs-Grossmann
algorithm, an optimal ultimate pit limit solution may not be achieved. One of the case
studies indicates that the revised maximum flow algorithm finds the ultimate pit limit that
contains 5.6M tons and $1M more value than the ultimate pit limit found by the
commercial software package.
The revised maximum flow algorithm is included in the new phase design
algorithm and is used to solve each of the sub problems decomposed by using the
Lagrangian relaxation technique. Furthermore, the duality of the revised maximum flow
formulation provides the strengthened sequencing constraint set which is used in a MILP
model for solving the phase design problem. The strengthened sequencing constraint set
reduces the size of the MILP formulation and also reduces its solution time.
The new phase design algorithm was successfully developed. It uses the
combination of various solution strategies to reduce the solution time of a large MILP
model. The solution strategies are time period aggregation, Lagrangian relaxation,
applicability of the revised maximum network flow formulation to solve the ultimate pit
limit problem, strengthened sequencing constraints, and an iterative algorithm based on
relaxed LP formulation and solutions. The new phase design algorithm also handles the
complex problems of multiple destinations and blending requirements while the traditional
algorithm cannot.
The new phase design algorithm was programmed and tested so that it could be
applied to a realistic case study to demonstrate its applicability. The NPV improvement
by using the new phase design algorithm is up to 12% in comparison with the traditional
phase design method and is up to 45% in comparison with the manual phase design based
on the traditional phase design method.
Future work should continue to focus on the Lagrange multiplier for blending
constraints introduced in this thesis, which can be either positive or negative. These
penalty (positive multiplier) or reward (negative multiplier) values are applied to block
values using the average grade (chemical content). The Lagrange multiplier for blending
constraints should also be applicable to a subset of blocks, i.e., blocks whose grades are
higher than the desired value, as sketched below.
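A minimal sketch of how such a penalty or reward could be applied to block values is given below. The Block fields, the linear adjustment, and the restriction to blocks above the target grade are illustrative assumptions, not the formulation implemented in this thesis.

```csharp
using System.Collections.Generic;

// Hypothetical block record carrying the quantities needed for the adjustment.
public class Block
{
    public double Value;  // undiscounted economic value ($)
    public double Grade;  // chemical content used in the blending constraint
    public double Tons;   // tonnage of the block
}

public static class BlendingAdjustment
{
    // Sketch of a Lagrangian adjustment for a maximum-blend constraint: a positive
    // multiplier penalizes blocks whose grade exceeds the target, while a negative
    // multiplier rewards them. Only blocks above the target are adjusted, following
    // the suggestion in the text.
    public static void Apply(IEnumerable<Block> blocks, double targetGrade, double multiplier)
    {
        foreach (var block in blocks)
        {
            if (block.Grade > targetGrade)
                block.Value -= multiplier * (block.Grade - targetGrade) * block.Tons;
        }
    }
}
```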
Although the new phase design algorithm shows significant improvement in NPV
obtained from the production schedules that follow the phase design, engineers must
consider the equipment mobilization and operational requirements, e.g., adequate working
areas for large equipment. This can be done manually by experienced mining engineers. The new
phase design with some practical adjustments still improves the NPV of a given project
compared to the traditional phase design. However, it is worthwhile to take the equipment
mobilization and other operational requirements into account for future research.
Although the new phase design techniques developed in this work may not provide
a true optimum solution, they have been demonstrated to improve NPV relative to current
commercial codes. The true optimum solution to the open pit mine production scheduling
problem remains unsolved, so work should continue toward further advances in solving this problem.
ABSTRACT
The main deciding factor on the purchase of a component brand for mining
equipment is usually the retail price or acquisition cost differential of that component
between different manufacturers. The components from different manufacturers will have
different contributions to equipment downtime depending on their individual reliabilities,
and therefore will directly influence revenue generation. The value of this downtime is
often significantly higher than the differences in retail price or acquisition cost between
the components. Compounding the situation is that mining companies do not have the
models, tools and/or protocols to evaluate different component reliabilities and their
immediate impact on the company’s cash flow. A comprehensive assessment of mining
operations shows that there are no tools readily available to quantify that impact. As a
result, this dissertation focuses on developing a methodology to establish these different
reliabilities, how they affect equipment availability, and the immediate impact on a mining
company’s cash flow. The methodology developed in this dissertation is evaluated using two
approaches: (1) deterministic and (2) stochastic, based on Monte Carlo Simulation. To
test the methodology, it is applied to two case studies involving hydraulic hoses from two
different surface mines to quantify the relationship between component reliability, mining
equipment availability and impact on the mining operation’s cash flow. The results from
the models indicate that the lower reliability data set caused lower equipment availability
and generated a larger net present value of the cash flows of the revenue loss than the
data set with higher reliability. The results also indicate that the methodology can be
successfully applied to any equipment component at any mine that uses a component
reliability centered management program. The greatest contribution of this research is the
methodology produced, which is a more viable alternative than conventional techniques
to support operationally and financially effective equipment component purchasing
decisions.
3.4 The Pattern of Failures with Time (Non-Repairable Items) 40
3.5 The Patterns of Failure with Time (Repairable Items) 41
3.6 Reliability Simulation Modeling 42
3.7 Reliability Economics and Management 43
3.8 Mining Equipment Reliability Fundamentals 48
3.8.1. Mining Equipment Reliability Measures 48
3.8.2. The Seven RCM Expectations for Equipment Components 49
3.8.3. Mining Series and Parallel Systems 49
3.8.3.1. Open Pit Mining Series System 50
3.8.3.2. Open Pit Mining Parallel System 51
3.8.4. Considerations on Mining Equipment Reliability 52
CHAPTER 4 REVENUE, COST FACTORS AND CHOICE OF DISCOUNT RATE IN NPV ANALYSIS IN MINING 54
4.1 NPV Definition and Objectives 54
4.2 Revenue and Cost Factors in NPV Analysis in Mining 55
4.3 Determining an Adequate Discount Rate 55
4.4 Introducing Risk in Discounted Cash Flow Analysis 61
CHAPTER 5 RISK AND UNCERTAINTY IN THE NPV ANALYSIS 64
5.1 Introduction 64
5.2 Discounted Cash Flow Mining Related Parameters to be Considered in the Risk/Uncertainty Analysis 67
5.3 Probabilistic Analysis of Cash Flows and Present Worth (NPV) 68
5.3.1. Probabilistic Approach 68
5.3.2. Monte Carlo Approach 73
CHAPTER 6 RESEARCH METHODOLOGY 76
6.1 Methodology 76
6.2 Input Data 77
LIST OF SYMBOLS
C(t) Maintenance/failure cost function for the complex system
Λ Partition of a system S
λ Typical element of the partition of system S
Cλ(t) Maintenance/failure cost function for the element λ of partition Λ of the complex system S
ϒλ(t) Maintenance/failure cost density of element λ of a partition Λ of system S with respect to the failure distribution Fλ(t)
ρλ(t) Failure probability density of element λ of partition Λ of system S
cλ^f(t) Cost density of failure of partition element λ per unit hazard rate of λ
cλ,i^m(t) Cost density of maintenance of partition element λ corresponding to inspection time t_i
i* Minimum Rate of Return
RCM Reliability Centered Maintenance
FMEA Failure Mode Effects Analysis
FMECA Failure Mode Effects and Criticality Analysis
PdM Predictive Maintenance
PMO Preventive Maintenance Optimization
FTA Fault Tree Analysis
MORT Management Oversight Risk Tree
TBM Time Based Maintenance
OTF Operate to Failure
CMMS Computerized Maintenance Management Systems
PM Preventive Maintenance
FAA Federal Aviation Administration
MTBF Mean Time Between Failures
MTTR Mean Time to Repair
ARCM Applied Reliability Centered Maintenance
OCM On Condition Monitoring
MCS Monte Carlo Simulation
CHAPTER 1
INTRODUCTION AND BACKGROUND
Supply chain inputs, such as consumables (hereon called inputs) and equipment
components have a significant impact on the productivity and economic viability of mining
operations. Performance differences between products, such as conveyor belts, rock
bolts, and other consumable supplies, and replacement parts deeply influence the
availability and operating efficiencies of capital equipment and facilities found in mines
and processing plants. The common thread with all these inputs is that their quality and
reliability will have an immediate and potentially profound economic impact that directly
influences mine productivity and the direct costs of production. In addition, quite often this
impact is significantly greater than initially anticipated by the mine operator in terms of
scope and magnitude. The problem is often compounded by the fact that many mining
companies see these inputs as mere commodities and fail to invest effort in quantifying
the potentially deleterious effects in choosing one product brand over another.
Furthermore, many mining operators are not equipped with the right models and internal
capabilities to effectively evaluate the performance and true cost of competing products
from different manufacturers. The suppliers of these inputs, on the other hand, usually
understand the technical nuances and general applications of their products or services,
but may not fully comprehend the intricacies of a given mining operation to effectively
demonstrate how to evaluate the impact of their products on the entire mining production
chain. Still, mine operators are usually unable to allocate the time and resources needed
to partner with suppliers to perform this evaluation. A good example is a mining company
that recently decided not to implement a proposal that would have increased its annual
revenue by $10M by choosing one brand of these inputs over another. One
supplier had a much more reliable product and offered to prove the higher reliability to the
mining company through the implementation of a component replacement management
program. The customer decided not to implement the program, citing the difference in
purchase price between this supplier’s product and the competitor’s. The company shut
the initiative down despite the real possibility of increasing the revenue due to reduced
downtime of the two hydraulic shovels they had at the mine. Therefore, the overall intent
of this research is to allow purchasing agents and operational managers to quickly make
economic decisions on choosing different components for their mobile and stationary
mining equipment. These decisions should be based on the economic impact to the firm
resulting from the different reliabilities of equal components from multiple brands and not
primarily on the acquisition price of a given component.
1.1 Problem Statement
Many consumables currently used by mining companies are considered to be
commodities (no difference in performance between brands from different
manufacturers), and therefore, purchasing decisions for these items are made largely
based on price. In most cases, mining companies do not have the models necessary to
effectively evaluate the impact of such decisions because of the challenges associated
with estimating how often an equipment component will fail, the risk posed by the failure
to the overall system, and the total cost of that failure. Current reliability based algorithms
deal mostly with understanding the details of individual cost elements and integrating
them to produce the ultimate life cycle cost. For example, a typical equipment purchase
consists of costs that are comprised of the capital, acquisition, mobilization, transport to
site, commissioning, and local options. Parts, materials and labor represent the costs
associated with all the maintenance tasks required to keep the equipment operating at its
design capacity. Unfortunately, these types of analyses are generally made after
components have already been purchased. Ideally, the analyses should start before the
equipment is even manufactured and before any replacement components are purchased
after the equipment is in operation. This, in turn, will create opportunities for better product
and service research and development which will feed back into mining operations
creating a closed loop of continuous improvement. A review of global purchasing
practices at underground and surface mines has shown that no such protocol exists,
where purchases are still largely decided on price. Another aspect of this problem is the
key challenge facing modern maintenance managers in selecting the most appropriate
techniques to deal with each type of failure process in order to fulfill the operational
expectations of equipment in the most cost-effective and consistent way (Moubray, 1977).
Reliability Centered Maintenance (RCM) is about equipment and it starts with a
comprehensive review of the maintenance requirements of an equipment operating
regime to define the actions that will minimize the scheduled and unscheduled downtime
and the risk associated with a component’s failure. Purchasing and mine managers
should also take advantage of this approach to make better equipment component
sourcing decisions that are based more on component reliability instead of on differences
in the retail price of a component from different manufacturers.
1.2 Background - Historical Application of Reliability Centered Maintenance in
Mining Companies
The following case studies demonstrate the application of RCM at three large
mining companies and illustrate industry best practices regarding how such programs can
be implemented. From the discussion of these case studies, an analysis of their benefits
and limitations can be made that justifies the objectives of this research.
1.2.1. Suncor
Suncor is a leading oil sands mining company refining 462,000 barrels of crude
per day at surface mines in Alberta, Canada. “In their view, reliability is defined as an
asset performing its intended function, under stated conditions for a specified period of
time” (Pascoli, 2014). It also focuses on the probability of failures and the
work that Suncor does to prevent failures. Suncor uses the principle of Asset Criticality
Ranking that identifies which equipment’s failure has the greatest potential impact on the
environment, health and safety, and other business goals. All maintenance significant
equipment is ranked in its current normal operating / process conditions and maintenance
programs, and this ranking will influence the level of rigor and depth of analysis
appropriate to the consequence of failure. Table 1.1 shows the ranking strategy. This
ranking strategy also motivates risk analysis where the consequence (severity) and
probability of failure need to be defined. In practice, Figure 1.1 shows how the ranking is
applied at Suncor. It is apparent that the success of this criticality ranking requires a fast,
accurate, and repeatable process, expert cross-functional representation from the
different departments at the mine or plant, an understanding of failure consequence and
risk, and pre-work that involves baselining operations as they are today, coupled with
their unreliability. The potential sources of unreliability can be expressed at a mine site as
shown in Figure 1.2. The success of equipment maintenance strategy development and
implementation is dependent upon a team approach that is aligned, where efforts are
prioritized and the maintenance philosophy is holistically considered end to end.
Table 1.1 Asset Criticality Ranking (Pascoli, 2014)
Asset Criticality Rank | Receptors | Consequence Severity (arbitrary dimensionless)
A - Safety Critical | Health and Safety | 4 to 6
B - Business Critical | Regulatory, Reputation, Economic Consequences, Environmental | 4 to 6
C - Normal | All Consequence Categories | 1 to 3
D - Non-Critical | All Consequence Categories | 1 to 3
This can be summarized in Figure 1.3. “This reliability based feedback loop is the essence
of Reliability Centered Maintenance at Suncor as it establishes the Asset Information
Baseline for components on equipment at the mine or at the plant, their Time to Repair,
Time to Failure, their Life Cycle and their criticality. In turn, these will define the
appropriate methodology producing the maintenance strategy to keep the equipment
operating at prescribed levels of performance and satisfactory levels of availability and
productivity” (Pascoli, 2014).
1.2.2. Cameco
Cameco is one of the world's largest uranium producers providing approximately
16% of the world's production from mines in Canada, the US and Kazakhstan (Arsenau,
2014). The company holds an estimated 429 million pounds of proven and probable
reserves, extensive resources and has focused its exploration programs on three
continents with land holdings estimated at 4.2 million acres. This discussion focuses on
the implementation of a Reliability Centered Maintenance Program at Cameco’s Port
Hope Conversion facility in Canada. The facility is designed to convert purified uranium
trioxide (UO3) to uranium hexafluoride (UF6) and uranium dioxide (UO2). These are
intermediate products required in the production of fuel for light water and Candu-type
heavy water nuclear reactors. A 2008 Reliability Assessment found that most of their
maintenance practices were reactive as shown in Figure 1.4. The scoring system used
an arbitrary system of numeric weights depending on whether the parameter being
assessed was being addressed in a reactive, emerging, proactive, or excellent manner.
The assessment’s final score of 0.313 showed a reactive approach to maintainability and
reliability and indicated that the company lacked a forward-looking plan that was clear,
embraced work processes, had effective key performance indicators (KPIs), and created
responsibilities and accountability. This resulted in low employee involvement in
improvement initiatives and low operating reliability of the facility. Ultimately, the low
operating reliability caused high operating costs, uncertain fulfillment of future demand,
and low confidence from regulators and the public at large. These shortcomings were
really opportunities for operational improvement, and a program was put in place in 2010
to realize the benefits of such opportunities. This program aimed at enabling consistent
production at the licensed design capacity to attain greater efficiencies and reduce unit
manufacturing costs. Cameco partnered with Life Cycle Engineering and created four
internal management teams focusing on Work Processes, Materials, Reliability, and
Operations. In addition, supplementary special teams, a site steering team, and a
corporate support team were also created. The objectives of these special teams were to
achieve quick wins, remove roadblocks, perform risk based asset management analysis,
and manage the culture change process. The organizational chart of this structure is
shown in Figure 1.5. The Reliability Engineering program adopted a newly created
Reliability Engineer role, implemented a criticality component equipment ranking, a
Failure Mode Effect Analysis (FMEA), and a Root Cause Analysis Report. This report
documented and described an incident, its timeline, probable cause, and recommended
action plan to avoid the occurrence of the incident in the first place. Another important
development was the Production Loss Tracking in real time with the production losses
being displayed on a monitor in the control room. The tracking involved the top 10 main
paretos sorted by equipment type and failure cause and was reviewed monthly with
corresponding actions developed for the top three issues for each process area (Arsenau,
2014).
[Figure 1.4 scores the facility’s maintenance and reliability practice areas (e.g., reliability engineering, work planning and scheduling, preventive and predictive maintenance, materials management) on a scale of Reactive (0.000 to 0.399), Emerging (0.400 to 0.549), Proactive (0.550 to 0.749), and Excellence (0.750 to 1.000); the final score was 0.313, i.e., Reactive.]
Figure 1.4 Cameco Reliability Assessment (Arsenau, 2014)
A weekly KPI dashboard was also created with performance measures for the four
management teams, including plant uptime and production. This improved plant reliability,
promoted change in behavior, and resulted in UO2 and UF6 production records achieved
in March 2013 and December 2014, respectively. Still, the main challenges of the
implementation of the Reliability Centered Maintenance program were the lack of an
automated KPI reporting tool and of a platform that could readily show the financial gains
derived from the application of the program.
1.2.3. Sungun Copper Mine
Sungun Copper Deposit is the second largest copper mine in Iran. Geological
reserves of the deposit are estimated to be 828 million tons with an average copper grade
of 0.62% (Mohammad and Sattarvand, 2014). The mining operation employs a fleet of
52 Komatsu HD 325 (32-ton) and 20 Komatsu HD 785 (100-ton) trucks. This
work is limited to the maintenance operation analysis of mine haul trucks and describes
the applied reliability centered maintenance analysis process. The first aim is to decrease
trucks’ sudden failures and breakdowns, improve the service lifetime, and to reduce the
maintenance costs. The reliability management program coupled with preventive
maintenance and inspection decreases the unexpected failures, enhances the service
lifetime and reduces the maintenance costs. Collection of failure data was the first phase
in this project. The resulting database was composed of operation time, age of the trucks,
and maintenance data recorded for the trucks’ components. The assumption was made
that the hardware, function, operational conditions, procedures, system structure,
location, and environment were all similar. As a consequence, 10 haul trucks with
approximately 15,000 operating hours each were chosen to support this assumption.
Figure 1.6 demonstrates how the haul truck system is broken-down into its subsystems
and components to analyze the system reliability. The failure mode and effect analysis
(FMEA) approach was conducted to assess the significant failure modes for each
component. Reliasoft’s Weibull++8 software was used to evaluate the reliability of both
repairable and non-repairable items. In the repairable items, the Monte Carlo simulation
and Maximum Likelihood Estimator method are combined for estimation of the Power
Law Process (PLP) statistical parameters. This process is implemented by generating
random variables from conditional Probability Density Function (PDF) for components’
failure times. Random variable sampling is accomplished in conditional Cumulative
Distribution Function (CDF) plots. It is assumed that the time to first failure is fitted by the
Weibull distribution.
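As a hedged illustration of how the fitted parameters are typically used afterwards (this is not Reliasoft's implementation), the C# sketch below evaluates the two-parameter Weibull reliability function and the corresponding mean life; the shape and scale values would come from the fitting step described above.

```csharp
using System;

public static class WeibullReliability
{
    // Two-parameter Weibull reliability (survival) function:
    // R(t) = exp(-(t/eta)^beta), where eta is the scale (characteristic life, hours)
    // and beta the shape parameter.
    public static double R(double t, double beta, double eta) =>
        Math.Exp(-Math.Pow(t / eta, beta));

    // Mean life for an item whose time to failure follows a Weibull distribution:
    // MTBF = eta * Gamma(1 + 1/beta). A Lanczos approximation of the Gamma
    // function keeps the sketch self-contained.
    public static double MeanLife(double beta, double eta) => eta * Gamma(1.0 + 1.0 / beta);

    private static double Gamma(double x)
    {
        // Lanczos approximation (g = 7, n = 9), adequate for x > 0.5.
        double[] p = {
            0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7 };
        x -= 1.0;
        double a = p[0];
        double t = x + 7.5;
        for (int i = 1; i < 9; i++) a += p[i] / (x + i);
        return Math.Sqrt(2 * Math.PI) * Math.Pow(t, x + 0.5) * Math.Exp(-t) * a;
    }
}
```

For example, with hypothetical parameters β = 1.3 and η = 2,000 hours, R(3000, 1.3, 2000) gives the probability that a component survives 3,000 operating hours without failure.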
For non-repairable items, the probability plot provides a means of calculating the Weibull
distribution parameters and assessing their reliability (Figure 1.7). Initially, the Time
Between Failure (TBF) data are obtained and a suitable statistical distribution in the
reliability software Weibull ++8 is fitted for each failure data set. Results of statistical
estimations for some of the haul truck components (repairable and non-repairable items)
are given in Table 1.2. The reliabilities of the haul truck system and its subsystems are
temporarily estimated by the selection of an optimally fitted distribution and by considering
their estimated parameters shown in Table 1.2. Figure 1.8 indicates that the truck’s
transmission is the most unreliable subsystem and the frame and body are the most
reliable subsystems (reliabilities for the two sub-systems at 3,000 operating hours are 3×10⁻⁶
and 0.36, respectively) in comparison to other subsystems’ reliabilities. Availability is
another metric for performance analysis of the maintainable equipment. The availability
of the truck system is a function of the subsystems and components’ reliabilities and
maintenance efficiency. The input data for calculating availability are reliability distribution
functions and the chosen maintenance policy for a specified time period. Monte Carlo
simulation is utilized for modeling the failure behavior under the maintenance policies.
The maintenance KPIs for the simulation process include time to repair, delay time for
fault detection and diagnosis, and logistic delay times. The maintenance policies include
inspection, correction, and preventive maintenance. The logistic delay times are the
required time for providing spare parts and time before the crews start the tasks. Figure
1.9 shows the availability and downtime of the various maintenance policies. The RCM
program at the Sungun Copper Mine resulted in better maintenance decisions,
establishment of condition monitoring of critical items, and efficient inventory of spare
parts, and their reorder level. Therefore, equipment productivity and availability increased
and operating costs declined.
Feature | Value | Feature | Value
Mean Availability (All Events)/% | 91.84 | Uptime/hrs | 11,021
Point Availability (All Events) at 12,000 hours/% | 93.50 | Corrective Maintenance Downtime/hrs | 766
Mean Availability (w/o PM & Inspection)/% | 93.61 | Inspection Downtime/hrs | 133
Mean Time to Failure/hrs | 95.37 | Preventive Maintenance Downtime/hrs | 80
Figure 1.9 Simulation Results for Estimating Availability and Downtime
(Mohammad and Sattarvand, 2014)
1.2.4. Benefits and Limitations of the Case Studies and Research Justification
The case studies show that the companies were thorough in their technical
treatment of the implementation of well-structured Reliability Centered Maintenance
programs. The implementation of these programs increased process efficiencies and
produced significant production gains due to an increase in availability of process and
equipment components. However, they all failed to quantify, from a financial perspective,
the changes in availability and production gains caused by changes in process or
component reliability. As discussed in one of the studies, the most important deficiencies
were the lack of an automated KPI reporting tool and a platform that could readily show
the financial gains from the application of the Reliability Centered Maintenance Program.
The KPI reporting tool is a product of the Reliability Centered Maintenance program and
this research will produce the platform protocol intended to show the relative financial
gain/loss from using equipment components with different reliabilities.
1.3 Importance of the Problem
Every day, mining companies around the world make purchasing decisions about
every equipment component that is used in their operations. Because these purchasing
decisions are based primarily on the retail pricing of the components from different
manufacturers, the opportunity to understand the financial impact of those purchasing
decisions is lost. The problem is compounded by the fact that purchasing agents have
little awareness of that financial impact because they do not have a readily available
protocol to calculate that impact nor the data to use in that protocol. That is why it is
important for this research to produce this protocol to help establish well-structured
Reliability Centered Maintenance Programs. Only through the integration of those two
tools will optimized purchasing decisions occur. Certainly, the financial impact of reliability
will not be the only factor influencing the purchasing decision, but it should be one of the
main deciding factors in most cases.
1.4 Objectives of this Research
The objectives of this research are (1) to develop a protocol that will effectively
evaluate the true economic impact of equipment components from different
manufacturers relative to a mining operation’s cash flow, and (2) to assess if the utilization
of Reliability Centered Maintenance (RCM) to analyze equipment availability is a superior
way to determine the economic impact over conventional techniques.
1.5 Originality of this Research
A detailed literature review and an industry assessment have shown that there is
no such protocol or model being used in the mining industry today that can readily
establish the impact of purchasing components from different manufacturers and
suppliers will have on a mining operation’s cash flow.
1.6 Research Methodology
The methodology adopted for this research is quantitative, and could be defined
as a Causal/Comparative analysis. The hypothesis to be tested is that components with
lower reliability will generate a higher net present value of revenue loss. The methodology
compares the impact that the independent variable, equipment availability (defined by a
relationship between the reliability KPIs Mean Time to Repair and Mean Time to Failure),
has on dependent variables, including a mining operation’s cash flow. The proposed
research methodology is a combination of methods used to evaluate reliability, mining
productivity, and value to a mining operator of mutually exclusive component alternatives.
These will all play a role in defining the best component choice for a piece of mining
equipment by identifying the key metrics that will have to be considered whenever a
component purchasing decision is needed. The key steps required to facilitate this
development are:
1) Develop the body of failure data for components of different Brands from a Reliability
Centered Maintenance or Management program,
2) Develop the KPIs for the components from the application of the RCM program,
3) Build the model (s) to calculate the economic impact of the KPIs
a. Analytical (Deterministic) Model,
b. Monte Carlo (Stochastic) Simulation,
4) Develop the KPI reporting structure,
5) Represent the revenue loss due to downtime to the mine operator in terms of a
Discounted Cash Flow Analysis,
6) Build these logical steps into a spreadsheet or a web application readily usable by
purchasing and operational managers, and
7) Select which model is most applicable to actual situations at mining operations in order
to calculate the economic impact.
Today, commercially available software deals with one or some of the steps outlined
above but not all of them simultaneously. This research will bring them altogether to
enable purchasing decisions to be made not only on price considerations but, more
importantly, on the reliability and economic impact of a component’s life cycle.
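A minimal deterministic sketch of the relationship the methodology relies on is shown below: availability follows from the reliability KPIs, and the revenue lost to the resulting downtime is discounted to a net present value. The method names, inputs, and the simple loss model are hypothetical placeholders, not the analytical or Monte Carlo models developed later in this thesis.

```csharp
using System;

public static class DowntimeEconomics
{
    // Inherent availability from the reliability KPIs.
    public static double Availability(double mtbfHours, double mttrHours) =>
        mtbfHours / (mtbfHours + mttrHours);

    // NPV of the annual revenue lost to component downtime over the evaluation
    // horizon, discounted at rate d (e.g., 0.10 for 10%).
    public static double RevenueLossNpv(double mtbfHours, double mttrHours,
                                        double scheduledHoursPerYear,
                                        double revenuePerOperatingHour,
                                        double d, int years)
    {
        double downtimeFraction = 1.0 - Availability(mtbfHours, mttrHours);
        double annualLoss = downtimeFraction * scheduledHoursPerYear * revenuePerOperatingHour;

        double npv = 0.0;
        for (int t = 1; t <= years; t++)
            npv += annualLoss / Math.Pow(1.0 + d, t);
        return npv;
    }
}
```

Evaluating this for two hypothetical hose brands that differ only in MTBF reproduces the hypothesis stated above: the lower-reliability brand yields the larger net present value of revenue loss.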
1.7 Thesis Organization
The thesis is organized in 8 chapters. A brief explanation of the content of each
chapter is presented below.
Chapter 1 - This chapter discusses the current strategies used by purchasing
departments in mining companies when buying mining equipment components and how
they can impact the cash flow at a mining operation. In addition, it describes three
applications of Reliability Centered Maintenance Programs at different mine sites,
examining their advantages and shortcomings. From these case studies, the importance
of the problem to the mining industry is discussed and research objectives are defined. It
concludes by expressing the originality of the research, demonstrates how the thesis is
organized, and sets the stage for the literature review in Chapter 2.
Chapter 2 – This chapter contains the literature review and current state in the four main
subjects that form the basis of this research. They include the maintenance policies of
reliability centered maintenance, concepts of applied reliability centered maintenance, the
application of Monte Carlo simulation, and the economic analysis of mutually exclusive
service-producing alternatives. The review shows that there is no reference or source that
brings all these subjects together to produce a platform or protocol that enables a mining
company to evaluate purchasing decisions of mining equipment components with
different reliabilities and the impact these different reliabilities will have on the cash flow.
This confirms the originality, the purpose of this chapter, and the importance of this
research. This chapter serves as the basis for building the model.
Chapter 3 – This chapter covers reliability engineering definitions, justifications, and
components. It then describes some of the mathematical concepts of reliability
engineering applied to this research, and discusses why it is important to increase
equipment reliability, particularly in mining. In addition, it presents an assessment of
reliability simulation strategies for analyzing and evaluating reliability and availability for
repairable and non-repairable systems commonly used in mining operations. Finally, it
makes the case for using Monte Carlo Simulation as a tool to be applied to build the
financial model under uncertainty dealing with the different reliabilities of equipment
components that will be described in detail in Chapters 4 and 5.
Chapter 4 - This chapter describes the Net Present Value definition and objectives and
goes on to present the revenue and cost factors associated with a Net Present Value
Analysis in Mining. It also demonstrates the development of the cash flow model with its
inputs and outputs. It then discusses how to determine an adequate discount rate for the
annual cash flows and introduces how to deal with risk in the analysis. Chapter 5 then
describes the application of these factors to the Discounted Cash Flow Analysis with
regards to risk and uncertainty and how they apply to the data analysis in this research.
Chapter 5 - This chapter discusses risk and uncertainties and their differences in capital
investment decisions. It then describes the mining related parameters to be considered
in the risk/uncertainty analysis. The chapter also includes an analytical probabilistic
analysis which is applied in the analytical model. Finally, this chapter introduces the
Monte Carlo Simulation process, which will be used in the research methodology. The
application of the Analytical and Monte Carlo approaches is discussed in Chapter 6.
Chapter 6 - This chapter discusses the data gathering methodology and considerations
about how the data is treated to represent two different brands of the same component.
It goes on to discuss the methodology, which is centered on building an Analytical Model
and a Stochastic Model based on Monte Carlo Simulation. The probabilistic approach
discussed in Chapter 5 is partially applied to the Analytical Model to show that a Maximum
Rate of Return cannot be calculated since there is no capital investment. The chapter
then discusses the application of the Monte Carlo Simulation method. Chapter 7
discusses in detail the application of both models to the two case studies obtained for
hydraulic hose assemblies and the results produced.
Chapter 7 – This chapter describes how the research data was generated and how similar
approaches can be implemented to generate reliability data for any mining equipment
component. It also discusses how to calculate an appropriate discount rate to be used in
the Net Present Value calculation. Moreover, the chapter shows that the data used in
both models must be treated differently to reflect the uncertainty inherent to some of the
parameters, which are deterministic in the Analytical Model and uncertain in the
stochastic (Monte Carlo Model). Finally, the chapter shows the results for both models
and discusses the implications on both data sets.
Chapter 8 – The chapter summarizes the methodology, results and discusses the general
model applicability for any components (not only for hydraulic hose assemblies), the
model’s limitations, and future research.
CHAPTER 2
LITERATURE REVIEW
This Chapter is a summary of a literature review performed in support of the
research objectives. The review focuses on Reliability Centered Maintenance, Applied
Reliability Centered Maintenance, Monte Carlo Simulation, and explains the evaluation
of the financial impact of different mining equipment component brands as it pertains to
maximizing the benefit of possible alternatives. This review did not find a model or
protocol that integrated these topics in a readily usable form necessary to make efficient
selection decisions between alternatives.
2.1 Mathematical Aspects of Reliability Centered Maintenance
Resnikoff (1970) attempted to produce a mathematical description of a Reliability
Centered Maintenance (RCM) program developed by United Airlines. His work allowed
greater scrutiny of its underlying principles and encouraged the broader application of
these complex systems to other industry sectors beyond commercial air fleet
maintenance. This description focuses on the equipment components and their mutual
relationships and interactions. A single-aircraft consists of tens of thousands of
interrelated parts whose integrated and harmonious operation is necessary for successful
aircraft operation. These systems and subsystems consist of sets of parts having many
elements. Harmonious operation also means that, at least for aircraft engineers, the
probability of failure, especially of critical failures, should be kept as close to zero as possible, which implies no
loss of life is acceptable. This means that maintenance policies in aircraft engineering,
seeking to eliminate the possibility of loss of life also restrict the production of data that
needs to be used for the generation of the maintenance policies in the first place. The
reason is because the aircraft design aims at preventing failures, thereby reducing the
possibility of accidents and the production of failure data. Despite the inherent uncertainty,
maintenance policy designers do have the advantage of experience gathered from the
operation of previous generations of aircraft. Although those aircraft are different, both in
the design and fabrication of many of their constituent parts and in the relationships
among those parts, many constituents are unchanged, and most changes are minor from
one design to the next. This continuity is probably the single greatest source of help and
information available to maintenance engineers while designing aircraft maintenance
policies. Kolmogorov (1941) and Wiener (1958) were among the first to recognize the
close relationship between statistics and information, particularly with regard to
communication theory as put forth by Resnikoff (1970). Shannon (1948) expanded and
developed these concepts to create a useful information theory. The application of
Shannon's theory on maintenance policies requires that both the concepts of information
and reliability consider the sets of components any equipment is made of, and the
operational functionality defined for those sets. The collection of items in a piece of
equipment is considered a set with an associated survival distribution. This set is broken
down into independent elements with its own cost function that includes the direct and
indirect estimated costs of a failure, plus all the costs associated with the maintenance
program under consideration (element or part acquisition cost, labor, etc.). From an
operations perspective, the objective is to minimize the sum of these cost functions,
where any reduction in the cost functions of one independent element will reduce the
global cost function of all elements provided there are no changes to the other elements’
cost functions. Hence, iteration of this process of local cost reduction for all elements in
a set will lead, in the limit, to a local minimum of the total cost function for that set.
Resnikoff (1970) indicated there is no way to prove that this local minimum will be the
global minimum, nor is there an analytical way to estimate or accelerate the rate of
convergence to the local minimum. Resnikoff (1970) defined the cost function for a set
of elements of a system S as:
C(t) = Σ_{λ∈Λ} Cλ(t) = Σ_{λ∈Λ} ∫_0^t γλ(t) dFλ(t)    (2.1)
Where:
C(t) = Maintenance/failure cost function for the complex system,
Λ = Set of elements of a system S,
λ = Typical element of the set of system S,
Cλ(t) = Maintenance/failure cost function for the element λ of set Λ of the complex system S,
γλ(t) = Maintenance/failure cost density of element λ of a set Λ of system S with respect to the failure distribution Fλ(t).
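As a hedged numerical illustration of Equation 2.1 (not part of Resnikoff's development), the sketch below approximates the per-element integrals on a discrete time grid; the jagged-array layout is an assumption made only for this example.

```csharp
public static class MaintenanceCost
{
    // Discretized form of Equation 2.1: C(t) is the sum over elements λ of the
    // integral of γλ(t) dFλ(t) from 0 to t. gamma[e][i] holds the cost density of
    // element e at grid point i, and F[e][i] the corresponding value of the failure
    // distribution Fλ at that grid point.
    public static double TotalCost(double[][] gamma, double[][] F)
    {
        double total = 0.0;
        for (int e = 0; e < gamma.Length; e++)
            for (int i = 1; i < gamma[e].Length; i++)
                total += gamma[e][i] * (F[e][i] - F[e][i - 1]);  // γ dF on one grid step
        return total;
    }
}
```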
|
Colorado School of Mines
|
Equation 2.1, where γλ(t) ≥ 0 and C(t) is a finite sum over all the elements of the partition
Λ, can be described as the integral
∫_0^t γλ(t) ρλ(t) dt    (2.2)
whose integrands γλ(t) and ρλ(t) are non-negative functions: the maintenance/failure cost
density and the failure probability density of an element λ, respectively. Consequently, the total cost C(t)
through time t will be reduced if one or more of the following three possibilities occur
(Resnikoff, 1970):
I. For some λ ϵ Λ there is a maintenance policy which replaces the failure cost density cλ^f(t) by a failure cost density cλ^f*(t) such that:
cλ^f*(t) ≤ cλ^f(t) for all t, and
cλ^f*(t) < cλ^f(t) for t in some open interval.
II. For some λ ϵ Λ there is a maintenance policy which replaces the maintenance cost function cλ,i^m(t) by a maintenance cost function cλ,i^m*(t) such that:
cλ,i^m*(t) ≤ cλ,i^m(t) for all t, and
cλ,i^m*(t) < cλ,i^m(t) for t in some open interval.
III. For some λ ϵ Λ there is a maintenance policy which replaces the product γλ(t)ρλ(t) by a product γλ*(t)ρλ*(t) such that:
γλ*(t)ρλ*(t) ≤ γλ(t)ρλ(t) for all t, and
γλ*(t)ρλ*(t) < γλ(t)ρλ(t) for t in some open interval.
Maintenance policies of Type I occur when a component is redesigned to incorporate
redundancy or other fail-safe design methods that reduce the cost of failure of the
component without necessarily affecting its probability of failure. This type of policy tends
to apply modifications in equipment design rather than in maintenance procedures. Type
II policies are indifferent to survival distributions, and therefore, are independent of the
properties of the equipment being maintained. They are principally managerial or
organizational policies concerned with issues such as scheduling of periodic maintenance
tasks, the location of maintenance depots, and/or provisions for storage of replacement
parts in adequate number to reduce downtime revenue loss while avoiding costs
associated with excessive replacement parts inventories. Optimal Type II policies can be
difficult to identify and implement, but managers and cost accountants have always
understood their nature and importance. Nevertheless, the large costs of critical failures
will not be counterbalanced by efficiencies from Type II policies because they will not
affect the survival distribution or the cost of failure. The most significant opportunities for
the introduction of maintenance policies that reduce C(t) are of Type III, which can be
further categorized into three subtypes. Using the notations and constraints given in (III)
above, they can be expressed as follows:
IIIA. γλ*(t) ≥ γλ(t) and ρλ*(t) ≤ ρλ(t) for all t;
IIIB. γλ*(t) ≤ γλ(t) and ρλ*(t) ≥ ρλ(t) for all t;
IIIC. Intervals where γλ*(t) ≤ γλ(t) and ρλ*(t) < ρλ(t), and intervals where ρλ*(t) > ρλ(t).
In IIIA, the reduction in the probability of failure density may result in an increase in
maintenance costs. For example, purchases of more expensive components may result
in better equipment reliability and longer life under similar operating conditions. If a failure
of the component is critical with a corresponding large cost cλ^f(t), and it fails less
frequently, the product γλ(t)ρλ(t) will generally be reduced, often by a substantial amount.
Maintenance policies of this type correspond to situations where additional investment in
more reliable components that are associated with large failure costs results in a
significant decrease in the failure density or frequency. Essential for the effective
introduction of Type IIIA maintenance policies is an evaluation of failure modes and the
consequences of failure. Based upon this information, maintenance policies of Type IIIA
can reduce C(t) by reducing unscheduled failure events. In addition, the introduction of
inspections and monitoring activities may also increase maintenance costs, but are
usually offset by the reduction in the frequency and potentially high cost of unexpected
failures. This is particularly applicable to mobile mining equipment and hydraulic hoses,
where any hydraulic hose failure causes operation of the equipment to cease. Resnikoff
(1970) explains “policies of Type IIIB are particularly effective when applied to items with
non-significant cost of failure. They decrease the cost density γλ(t) while possibly
increasing the failure density ρλ(t) in a manner that decreases the value of the product of
these two functions. If the failure of an item is not significant, this means that the failure
cost density cλ^f(t) reduces to the cost of replacing the failed item. If this is less than the
cost of maintenance over the lifetime of the item, then the cost density product is reduced
by implementing this policy.” Overhauling seat recliners on an airplane is an example of
such a policy. Periodical overhauls could become costly compared with the imputed cost
of a recliner failure. In this example, continued overhauls would increase the replacement
or failure density, so applying condition monitoring instead of replacing seat recliners
would reduce cλ(t) and thus C(t). In policies of Type IIIC, neither the cost densities nor
the failure densities exhibit linearly decreasing behavior as time increases, but the policy
does achieve an overall cost reduction over each iteration of the application of the
maintenance policy. It is clear that, for the purposes of this research, the most
representative maintenance policy is Type IIIA.
2.2 Applied Reliability Centered Maintenance
Reliability Centered Maintenance (RCM) is a maintenance paradigm that seeks an
understanding of an operator’s goals, needs and equipment to develop a maintenance
strategy that will optimize the operator’s desired outcomes (August,1999). It breaks down
silos between operations and maintenance requiring their open and constructive
collaboration. Early in the 20th century, due to the application of organizational models
such as Total Quality Management (TQM), several American companies led the world in
manufacturing efficiency and created a strong demand for American made goods. As skill
levels and standards of living changed, the workforce changed as well. Inflation in the
1970s also caused cultural and social change while reducing the availability of capital.
The energy crisis developed and global competition increased. With the rise in
manufacturing capability in other countries, such as Japan, American industries were put
on the defensive, where their industrial hegemony ended. The United States (U.S.) was
still the leader in international trade but, as the century progressed, it became clear that
existing processes would have to change in order to improve U.S. global competitiveness.
RCM appeared in this context connecting reliability engineering and the workplace,
creating production focus. At least 10 different commercially available software packages
currently allow users to perform RCM. RCM is also referred to by other names such as
Preventive Maintenance Optimization (PMO), where there are competing versions of
RCM. They all aim at continual cost reduction and the development of the ability to identify
and eliminate low value work at the system and equipment level. RCM-minded
organizations do not just operate plants/equipment; they improve plant/equipment
operations. The aforementioned work also provides many of the concise RCM terms,
such as condition monitoring, maintenance task, logic tree analysis, and failure finding.
Spin-off benefits included the development of Failure Mode and Effect Analysis (FMEA),
Fault Tree Analysis and Management System (FTA), and Management System Failure
Analysis and Management Oversight Risk Tree (MORT) (August, 1999). By the late
1950s, preventive maintenance (PM) had developed into a competitive management
philosophy. Its premise is that failures should not occur if equipment components are
properly maintained and replacement scheduling is usually based on a time frame
specified by the equipment manufacturer, hence, the name Time Based Maintenance
(TBM). Significant technology developments lowered component life-cycle costs, allowing
them to be replaced before they failed, thus leading to another important maintenance
concept called Predictive Maintenance (PdM). Most maintenance practitioners and
managers embraced PdM applications, such as vibration monitoring, oil sample analysis,
multi-channel analyzers, and remote telemetered data. Regulators also saw the appeal
of PdM applications, such that they sometimes mandated their use (August, 1999).
Conversely, a large proportion of failure modes are not age-related, so conducting
age-related maintenance (PM) for them may not make technical sense. For such failures, PdM is
the preferred solution provided that the correct diagnostic models are used and the ability
to interpret the data exists. RCM also helps define how often readings should be taken.
In this context, RCM focuses on the system of components, component failure
classification by modes, assessment of failure modes, and numerical and statistical data
evaluations of large equipment populations. In addition, Monte Carlo Simulations, in an
inexpensive way, help to generate Mean Time Between Failures (MTBF) and Mean Time
to Repair (MTTR) for different components based on lab or field data. This RCM focus
breaks silos between operations, maintenance, and engineering by establishing KPIs that
measure the performance of all three functional groups in an industrial organization. While
traditional RCM includes a rigorous task selection methodology or process that follows
detailed flow paths to document decision making, August (1999) defines Applied
RCM (ARCM) as a methodology that simplifies and summarizes the results of task
selection, supports standardization, and identifies general maintenance strategies.
Applied RCM prevents functional losses by managing failures and leads to process
improvement. The overall objective is to meet mission goals, usually measured in terms
of cost, safety, and risk.
2.3 The application of Monte Carlo Simulation (MCS) to establish Time Between
Failures
Mobile mining equipment can have many systems of components (hydraulics,
transmission, engine, etc.). Each component can be in a number of states, such as
working, failed, standby, etc. During its life, a component may move from one state to
another by a step which occurs stochastically in time and whose outcome (new state
reached) is also stochastic (Zio, 2013). The full description of the system’s random and
uncertain behavior in time is given by the probability that the system steps, at a given
time, from the current state to a new state. The states can then be numbered by an index
that sequences all the possible combinations of all the states of the components of the
system. More specifically, let k_n be the index that identifies the configuration reached by
the system at the n-th step and t_n be the time at which the step has occurred. Then
consider the generic step which has occurred at time t’ with the system entering state k’
as shown in Figure 2.1. The probabilities that govern the occurrence of the next system
step at time t which lead the system into state k are (Zio, 2013):
“T(t|t’,k’)dt which is the conditional probability that the system makes the next step
between t and t+dt, given that the previous step has occurred at time t’ and that the system
has entered state k’.
Figure 2.1 Random Step or System Transition (t’,k’)(k,t) (Zio, 2013)
C(k|k’,t) which is the conditional probability that the system enters in state k as a
result of the step occurring at time t when the system leaves state k’. The probabilities
defined above are normalized as follows:
∫_{t’}^{∞} T(t|t’,k’) dt ≤ 1    (2.3)
and
Σ_{k∈Ω} C(k|k’,t) = 1;  C(k|k,t) = 0    (2.4)
where Ω is the set of all possible states of the system. Note that T(t|t’,k’) may not be
normalized to one since, with probability 1 − ∫_{t’}^{∞} T(t|t’,k’) dt, the system may fall at t’ into an
absorbing state k’ from which it cannot exit due to the zero probability of a new step.
Equations 2.3 and 2.4 describe how the step (t’,k’) (t,k) times are stochastically
sampled for any step described as the product of the conditional probability that the
system makes the next step between t’ and t = t’+dt and the conditional probability that
the system enters in state k at time t after the system leaves state k’.
K(t,k/t’,k’) = T(t|t’,k’)C(k|k’,t) (2.5)
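To make the sampling rule in Equation 2.5 concrete, the following Python sketch draws a step time from T(t|t’,k’), assumed here to be exponential, and a new state from C(k|k’,t), assumed here to be independent of time. The three component states, the rates, and the transition probabilities are hypothetical values chosen only for illustration; they are not data from this research.

import random

# Hypothetical states of a component: 0 = working, 1 = degraded, 2 = failed.
# RATE[k'] is the parameter of the assumed exponential density T(t | t', k');
# C[k'][k] is the conditional probability C(k | k', t), taken independent of t.
RATE = {0: 1.0 / 400.0, 1: 1.0 / 80.0, 2: 1.0 / 24.0}   # steps per hour (illustrative)
C = {
    0: {1: 0.7, 2: 0.3},   # from working: degrade or fail
    1: {0: 0.4, 2: 0.6},   # from degraded: repaired or fail
    2: {0: 1.0},           # from failed: repaired back to working
}

def sample_step(t_prime, k_prime, rng=random):
    """Sample the next step (t, k) given the previous step (t', k') using
    K(t, k | t', k') = T(t | t', k') * C(k | k', t)."""
    t = t_prime + rng.expovariate(RATE[k_prime])   # step time from T(t | t', k')
    u, cum = rng.random(), 0.0                     # new state from C(k | k', t)
    for k, p in C[k_prime].items():
        cum += p
        if u <= cum:
            return t, k
    return t, k   # safety net if rounding leaves u slightly above the cumulative sum

if __name__ == "__main__":
    t, k = 0.0, 0   # start in the working state at time zero
    for _ in range(5):
        t, k = sample_step(t, k)
        print(f"step at t = {t:8.1f} h -> state {k}")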
The analysis of system reliability using MCS involves performing an experiment with
many stochastic systems, each one behaving differently from the other. These systems
are tested for a given time and their failure occurrences are recorded (Dubi, 1999). This
is the same procedure adopted in the reliability tests performed on individual components
to estimate their failure rates, mean times to failure, or other parameters characteristic of
their failure behavior. Many failure tests can be done in a laboratory at acceptable cost
and within reasonable time frames. The problem arises when it is desired to assess the
failure behavior of equipment components in field applications. Thus, instead of performing
physical tests on a component, its stochastic stepping among different states is modeled
through the conditional probabilities defined above, and many system histories are generated
by sampling the times and outcomes of the steps. Figure 2.2 shows a number of these histories
on the State vs. Time plane. The vertical axis shows the various states a system or component
can be in, and the horizontal axis shows time. In such a plane, each history takes the form of
random straight segments parallel to the time axis, with vertical random steps to new system
states occurring at random times.

For reliability analysis, a subset of the system states is identified as the set of fault states.
Whenever the system enters one such state, its failure is recorded together with its time of
occurrence τ. With reference to a given time t of interest, an estimate F̂_T(t) of the probability
of system failure before such time (i.e., the unreliability F_T(t)) can be obtained (Zio, 2013).
This estimate is computed by dividing the number of histories that register a system failure
before t by the total number of histories generated. At the time τ when the system enters a
failure state, a one is collected into all the unreliability counters CR(t) associated with times t
subsequent to the failure occurrence time, i.e., t ∈ [τ, T_M], as shown in Figure 2.3 (Zio, 2013).
After simulating a large number of histories M, an estimate of the system unreliability is
obtained by simply dividing by M the sum of the contents of the counters CR(t), t ∈ [0, T_M].
The counters CR(t) give the number of failures and, together with the time the system was in
operation during the simulation, a simple division defines the Mean Time Between Failures.
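The counting procedure described above can be sketched in the same way. The fragment below reuses the hypothetical sample_step() kernel from the previous sketch, treats state 2 as the only fault state, records only the first failure of each simulated history, and uses an arbitrary mission time T_M and number of histories M, so the printed figures are illustrative only.

# Hedged sketch of the unreliability counters CR(t) and of the MTBF estimate.
# Assumes sample_step(), RATE and C from the previous sketch are in scope.
T_M = 2000.0                                  # mission time in hours (assumed)
M = 10_000                                    # number of simulated histories
GRID = [T_M * i / 100 for i in range(101)]    # time grid on which CR(t) is kept
counters = [0] * len(GRID)                    # CR(t): failures recorded before each grid time

n_failures = 0
operating_time = 0.0

for _ in range(M):
    t, k = 0.0, 0                             # each history starts working at t = 0
    while t < T_M:
        t_next, k_next = sample_step(t, k)
        operating_time += min(t_next, T_M) - t   # time in operation before this step
        if k_next == 2 and t_next <= T_M:
            n_failures += 1
            # collect a one into every counter CR(t) with t after the failure time
            for i, tg in enumerate(GRID):
                if tg >= t_next:
                    counters[i] += 1
            break                             # record only the first failure per history
        t, k = t_next, k_next

unreliability = [c / M for c in counters]     # estimate of the unreliability F_T(t)
# With only first failures counted, the division below approximates a mean time to
# (first) failure; it stands in here for the simple MTBF estimate described above.
mtbf = operating_time / n_failures if n_failures else float("inf")
print(f"F_T(T_M) ~ {unreliability[-1]:.3f},  MTBF ~ {mtbf:.0f} h")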
This analysis is also valid when purchasing components for mining equipment. Mining
operations should buy equipment components only after a systematic performance
review of these components has been performed. This review should ultimately lead to a
financial analysis of this performance. In this context, incremental discounted cash flow
analysis is the correct model to decide which component should be purchased.
Incremental analysis defines which component provides the better Rate of Return
(ROR) and Net Present Value (NPV) from an operational perspective, that is, whether the
extra cost of one alternative will generate more or less profit (or savings) than the other.
In particular, ROR analysis for mutually exclusive alternatives
is based on two tests: (1) the rate of return on the total individual component investment
must be greater than or equal to the minimum rate of return, i*, and (2) the rate of return
on the incremental investment, compared to the last satisfactory level of investment, must
be greater than or equal to the minimum ROR, i* (Stermole, 2012). The component that
meets both tests should be the economic choice. The other test is the NPV analysis,
which requires that the net
value on total individual component investment must be positive and that the incremental
net value of the higher priced component against the less expensive component must be
positive. The alternative with the largest positive net value is the economic choice. In
addition, the less reliable component will cause a greater revenue loss than the more
reliable component, and this loss, together with the cost differential between the
components, should be considered in the analysis; a hedged numerical sketch of such an
incremental comparison is given below.
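As a hedged numerical sketch of the incremental comparison just described, the following Python fragment screens two hypothetical components by NPV at a minimum rate of return i* and by the incremental rate of return of the more expensive alternative over the cheaper one. All prices, service lives, downtime-related revenue losses, and the value of i* are assumptions chosen only for illustration; none of them are results of this research.

# Hedged sketch: NPV and incremental ROR screening of two hypothetical components.

def npv(rate, cash_flows):
    """Net present value of end-of-year cash flows; cash_flows[0] is at time zero."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

i_star = 0.12        # minimum acceptable rate of return i* (assumed)
life = 5             # service life in years (assumed)

# Annual net benefit = revenue loss avoided through higher reliability (illustrative).
cheap = [-100_000] + [40_000] * life      # less expensive, less reliable component
costly = [-150_000] + [58_000] * life     # more expensive, more reliable component

# Test 1: NPV on each total individual investment must be positive at i*.
npv_cheap, npv_costly = npv(i_star, cheap), npv(i_star, costly)

# Test 2: the incremental investment (costly - cheap) must also clear i*.
incremental = [c2 - c1 for c1, c2 in zip(cheap, costly)]
inc_npv, inc_ror = npv(i_star, incremental), irr(incremental)

print(f"NPV cheap = {npv_cheap:,.0f}   NPV costly = {npv_costly:,.0f}")
print(f"Incremental NPV = {inc_npv:,.0f}   incremental ROR = {inc_ror:.1%}")
if npv_costly > 0 and inc_npv > 0 and inc_ror >= i_star:
    print("The more expensive, more reliable component is the economic choice.")
else:
    print("The less expensive component is the economic choice.")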
The literature review indicates that no financial model exists that relates maintenance
policies, reliability, the random behavior of systems and components, and applied
concepts of reliability and financial analysis in a way that helps a mining company better
select and purchase one equipment component over another. This research aims to fill
this gap. The next chapter focuses on reliability engineering and how it is applied to
provide some of the parameters used in building the financial model.