data that takes several days to compute is not that much better than a 99% solution computed in
a few minutes in most applications. Therefore, just as optimization in general must be applied
judiciously with adequate understanding of any limitations, so too must the solution methodology,
exact or heuristic, be selected with understanding of the relevant trade-offs.
Ad-hoc heuristic methods are customized specifically for a particular real world problem and
optimization model. They are developed and implemented independently for a particular problem
with a narrow range of input differences. An ad-hoc approach can take advantage of specific
problem attributes that may admit generating feasible solutions or improving solutions efficiently.
They may incorporate more general approaches such as greedily selecting the current best known
value for a particular variable or performing a basic local search that explores ‘nearby’ feasible
solutions before selecting the best outcome. However, they are only ever usable for the problem
for which they were designed, and although they admit high quality solutions for a specific
problem, they can be difficult and costly to implement.
Meta-heuristic approaches instead rely on some higher level strategy, which may have been
inspired by some natural process or at least guided by some high level understanding of
optimization. Most meta-heuristics, and most ad-hoc approaches, fall under the general paradigm
of generating or searching through many feasible solutions and selecting the best solution as the
final answer. Where meta-heuristics differ is that they provide a strategy and criteria for guiding
that search to be as effective as possible.
Simulated annealing is a meta-heuristic inspired by the natural process of annealing whereby
particles arrange themselves into high strength configurations during a slow cooling process
[126, 131, 132]. The general idea is to take a current solution, modify it through some appropriate
means, and compute the change in objective value. If the change is an improvement, for example
the objective value increases when maximizing, the change is accepted and incorporated into the
solution for the next iteration. However, if the change would not improve the objective it is only
accepted with some decreasing probability which corresponds with the ‘temperature’ analogue in
the overall process. Incorporating non-improving moves into the optimization process allows for
more solutions to be explored, potentially incorporating solutions that ‘break out’ of local optima.
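To make the acceptance rule concrete, the following is a minimal sketch of a simulated annealing loop for a maximization problem, assuming the caller supplies the solution type, the neighbor move, and the objective function; the geometric cooling schedule shown is one common choice among many.

```cpp
#include <cmath>
#include <random>

// A minimal simulated annealing sketch for a maximization problem. The
// 'Solution', neighbor move, and objective are caller-supplied placeholders.
template <typename Solution, typename Neighbor, typename Objective>
Solution anneal(Solution current, Neighbor neighbor, Objective value,
                double temperature, double coolingRate, int iterations) {
    std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double currentValue = value(current);
    for (int i = 0; i < iterations; ++i) {
        Solution candidate = neighbor(current);  // modify the current solution
        double candidateValue = value(candidate);
        double delta = candidateValue - currentValue;
        // Improving moves are always accepted; non-improving moves are
        // accepted with a probability that shrinks as the temperature drops.
        if (delta > 0.0 || uniform(rng) < std::exp(delta / temperature)) {
            current = candidate;
            currentValue = candidateValue;
        }
        temperature *= coolingRate;  // the slow 'cooling' analogue
    }
    return current;
}
```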
Genetic algorithms fall within the broader group of evolutionary meta-heuristics which
maintain a larger population of possible solutions and provide operations for combining and
CHAPTER 3
AN IMPROVED ULTIMATE PIT SOLVER – MINEFLOW
A fundamental component of open-pit mine planning is the ultimate pit problem, where the
undiscounted value of an open-pit is maximized subject to precedence constraints. It is used for
many different purposes, as discussed in Section 2.2, including as a subproblem in the
optimization procedures described in Chapters 4 and 5. In all circumstances it is preferable to
compute the provably optimal results as quickly as possible, and in many circumstances it is
advantageous to be able to modify block values and recompute the ultimate pit without having to
start everything from the beginning. For these reasons, and to facilitate future research efforts in
open-pit mine planning, a fast, extensible, open-source, ultimate pit solver named MineFlow is
developed in this chapter. MineFlow, at its core, is a specialized and customized implementation
of Hochbaum’s pseudoflow algorithm specifically for use with mining problems.
Section 3.1 describes the pseudoflow algorithm in detail and customizes it specifically to the
ultimate pit problem. Hochbaum’s pseudoflow algorithm solves the more general max-flow
min-cut problem and must contend with a few complexities that are not present in the ultimate
pit problem. Removing that unnecessary complexity from the algorithm and taking advantage of
the ultimate pit problem’s special structure allows for a faster implementation. Additionally, this
section describes a novel notation for the pseudoflow algorithm which helps to make it easier to
understand and communicate to new researchers and practitioners.
Section 3.2 expands on several of the important implementation details which serve to make
this implementation much more performant than available alternatives. Specifically, the
importance of lazily generating precedence constraints, the use of the minimum search patterns
from Caccetta and Giannini, and other details are discussed.
Finally, Section 3.3 presents a computational comparison which highlights the tangible
benefits of the theoretical and practical improvements developed in this chapter. This approach
uses less memory and less computer time than currently available commercial implementations of
the pseudoflow algorithm and computes identical results.
3.1.1 Network Preliminaries
A network, or graph, is a mathematical structure which models pairwise relationships between
objects. Networks are used to represent things which are both abstract and concrete. Many
problems become simpler when thought of through the lens of networks because turning a real
world problem into a network requires one to think cogently about which of the components of
the problem can be combined and represented as nodes and then how best to model the
relationships between those components with directed or undirected arcs [49].
Arcs are used to represent pairwise relationships between nodes. A directed arc between two
nodes indicates that the arc has a special orientation. The beginning and ending nodes of a
directed arc are called the tail and head respectively. An undirected arc between two nodes does
not have an order and only indicates that a relationship exists between the nodes.
A sequence of arcs traversed in any direction between two different nodes is a path. Paths are
defined such that they only go through nodes and arcs at most once. If a network is constructed
such that any two nodes are connected by exactly one path it is called a tree; an example tree is
shown on the left in Figure 3.1. Often trees have a special node designated the root node from
which there may be many subtrees or branches.
In the ultimate pit problem, blocks are represented as nodes and precedence constraints as
directed arcs. This is a special kind of network called a directed acyclic graph. Acyclic means that
the network does not contain any directed cycles. Acyclicity is inherent in precedence graphs
because each block only depends on blocks that are above it in elevation.
A closure of a directed network is a set of nodes such that there are no arcs with their tails
inside the closure and their heads outside. In the ultimate pit problem all closures of the network
are valid pits because the restriction on directed arcs ensures there are no precedence violations.
An example closure is shown on the right in Figure 3.1. The ultimate pit, therefore, is the
smallest maximum valued closure of the precedence graph.
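The closure property can be checked directly from the arc list, as in the following sketch; the integer node identifiers and function name are illustrative only.

```cpp
#include <set>
#include <utility>
#include <vector>

// Verify the closure property: no directed arc may have its tail inside the
// candidate set and its head outside. In the ultimate pit setting each arc
// is a precedence constraint (block -> overlying block), so a closure is
// exactly a pit with no precedence violations.
bool isClosure(const std::vector<std::pair<int, int>>& arcs,  // (tail, head)
               const std::set<int>& candidate) {
    for (const auto& arc : arcs) {
        if (candidate.count(arc.first) && !candidate.count(arc.second))
            return false;
    }
    return true;
}
```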
Networks are used to model many different real-world problems; of special interest here
are network flow problems. In a network flow each arc has a maximum allowed capacity and an
associated flow. Two special nodes are identified as the source, denoted with a S, and the sink
with a T. Flow originates at the source node and terminates at the sink node. Usually, every
other node must satisfy a flow-balance constraint which requires that the amount of flow into the
node be equal to the amount of flow leaving. Network models can be used to model fluids in
pipes, power in an electrical grid, traffic on roads, and other similar things [49]. In the context of
ultimate pit analysis the flows on arcs can actually be thought of as flowing money. The flow
corresponds to money moving around and paying for the extraction of necessary blocks. This
analogy is expanded upon and justified in future chapters. A small example network flow model is
shown in Figure 3.2.
Figure 3.1 Left: a network which is a tree with associated terminology. Right: a network which is
a directed acyclic graph. Figure adapted with permission from Deutsch, Dağdelen, and Johnson
2022 [56]
In Figure 3.2 the current flow from the source to the sink is four units; however, it is possible
to route additional flow through this network. The bolded arcs through the middle of the network
can carry one additional unit of flow, and the path along the top could take an additional two
units. If both paths were saturated the flow for this network would be seven, which is the
maximum flow.
In a network there are many ways to cut the network into two pieces. If the partitions are
organized such that one side contains the source and the other side contains the sink then these
two sets are called an s-t cut. In an arbitrary cut the arcs that cross from one partition to the
other are said to be a part of the cut-set. However, in s-t cuts only arcs going from the source
side to the sink side are included.
Figure 3.2 An example network flow model. Source S and sink T nodes are labeled. Numbers on
arcs indicate 'flow' / 'capacity': S→a 2/4, a→T 2/6, S→b 2/5, b→T 2/2, and b→a 0/1. The
bolded arcs show a possible augmenting path. Figure adapted with permission from Deutsch,
Dağdelen, and Johnson 2022 [56]
For the network in Figure 3.2 there are four possible s-t cuts as shown in Figure 3.3. Note
that the cut-set corresponding to cut 4 only consists of two arcs despite appearing to go through
three. This is because the middle arc, from b to a, goes from the sink side to the source side.
Figure 3.3 The four different possible s-t cuts for the network in Figure 3.2 (Cut 1: 9, Cut 2: 8,
Cut 3: 7, Cut 4: 11). Numbers on arcs are the arc's capacity. The cut-set arcs are bolded and the
total cut capacity is shown below each cut. Figure adapted with permission from Deutsch,
Dağdelen, and Johnson 2022 [56]
The capacity of an s-t cut is the sum of the capacities of the arcs in its cut-set. The capacity for
each possible cut in Figure 3.3 is given in Table 3.1.
Table 3.1 The source set, sink set, cut-set, and capacity of the four possible cuts for the graph in
Figure 3.2.
S-T cut   X            X̄            Cut-set                  Capacity
1         {S}          {a, b, T}    {(S,a), (S,b)}           9
2         {S, a, b}    {T}          {(a,T), (b,T)}           8
3         {S, b}       {a, T}       {(S,a), (b,a), (b,T)}    7
4         {S, a}       {b, T}       {(S,b), (a,T)}           11
It so happens that the maximum possible flow through a network is equal to the capacity of
the minimum cut. This is formalized in the max-flow min-cut theorem. Intuitively, the arcs in the
cut-set of the minimum cut correspond to the ‘bottle-neck’ of the network flow model. There is
no way to fit more flow through the cut-set without increasing its capacity, and if there was some
other path around the bottle-neck then it would not be a valid s-t cut.
Most formal proofs of this theorem confirm the above intuition by showing that if the max
flow did not equal the minimum cut there would be a contradiction. A very early discussion of
this theorem appears in Ford and Fulkerson 1962 [50].
3.1.2 Pseudoflow Preliminaries
Hochbaum devised the pseudoflow algorithm to solve the general max-flow min-cut problem.
In order to use the pseudoflow algorithm for the ultimate pit problem, the ultimate pit problem
must first be transformed into the source-sink form described in Section 2.2.3. In brief: a directed
arc with capacity equal to the economic block value is connected from the source to each
positive-valued block, and a directed arc with capacity equal to the absolute value of the block’s
economic block value is connected from each negative block to the sink. Finally, arcs with infinite
capacity are connected for every precedence constraint. Applying a suitable max-flow min-cut
algorithm to this network identifies the ultimate pit as the source set.
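A sketch of this transformation follows; the arc structure and the sentinel source and sink identifiers are illustrative and are not MineFlow's actual interface.

```cpp
#include <cstdint>
#include <limits>
#include <utility>
#include <vector>

constexpr int SOURCE = -1;
constexpr int SINK = -2;
constexpr int64_t INFINITE_CAPACITY = std::numeric_limits<int64_t>::max();

struct FlowArc { int tail; int head; int64_t capacity; };

// Build the source-sink network for the ultimate pit problem: positive
// blocks are fed from the source, negative blocks drain to the sink, and
// each precedence constraint (base, antecedent) becomes an arc with
// effectively infinite capacity.
std::vector<FlowArc> buildNetwork(
        const std::vector<int64_t>& blockValues,
        const std::vector<std::pair<int, int>>& precedence) {
    std::vector<FlowArc> arcs;
    for (int b = 0; b < static_cast<int>(blockValues.size()); ++b) {
        if (blockValues[b] > 0)
            arcs.push_back({SOURCE, b, blockValues[b]});
        else if (blockValues[b] < 0)
            arcs.push_back({b, SINK, -blockValues[b]});
    }
    for (const auto& p : precedence)
        arcs.push_back({p.first, p.second, INFINITE_CAPACITY});
    return arcs;
}
```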
The pseudoflow algorithm is very similar to the Lerchs and Grossmann algorithm and was, in
part, inspired by it [46]. Both algorithms operate iteratively by selecting a violated precedence
constraint, enforcing it, and then adjusting necessary information for the next iteration. Where
the pseudoflow algorithm deviates from the Lerchs and Grossmann algorithm is that it leverages
the idea of flow instead of mass, it provides some machinery for selecting which precedence
constraint to introduce, and maintains different structures which are easier to update efficiently.
At each iteration the pseudoflow algorithm maintains a pseudoflow on the network and a
special structure called a normalized tree. A pseudoflow is a relaxed flow where nodes are not
required to satisfy the flow balance constraints described in Section 3.1.1. Nodes which have more
inflow than outflow are said to have an excess and nodes with more outflow than inflow have a
deficit.
The normalized tree is a subset of arcs from the network such that there is exactly one unique
path from each node to either the source or the sink node. The tree remains a tree for the entire
algorithm so any changes to the tree require adding and dropping arcs simultaneously.
‘Normalized,’ in this context, requires that only nodes which are immediately adjacent to the
source or sink nodes are permitted to carry excesses or deficits.
Here a departure from the conventional description of the pseudoflow algorithm is taken
regarding the so-called main root of the normalized tree and the source and sink nodes.
Hochbaum often combines the source and sink nodes into a single node called the main root,
whereas here they are left separate as two distinct S and T nodes. This is done for several
reasons. Firstly, it reinforces the max flow nature of the problem where one can imagine flow, in
this case ‘money’ or ‘value’, traveling from the source to the sink. Secondly, it is much easier to
draw and keep the arcs associated with positive and negative blocks from crossing and getting in
the way of one another. In most real applications of the ultimate pit problem the negative valued
blocks (waste) are at higher elevations than positive valued blocks (ore), so it is beneficial to
imagine the source at the bottom and the sink at the top. This does have the unfortunate effect
of making the normalized tree appear to be disconnected and not much like a tree; however, when
reasoning about the tree, either consider both the source and sink nodes as valid main roots or
imagine an extra tree arc between the source and sink nodes, making it a true tree.
The nodes which are immediately adjacent to either the source or the sink are the only nodes
that can have some excess or deficit and are called roots (not to be confused with the main root
terminology used by Hochbaum). Roots with excesses are said to be strong, and all of the nodes
within their respective subtrees are also strong. Roots which satisfy the flow balance constraint,
or ones that have a deficit, are said to be weak, and all of the nodes within their subtrees are also
weak.
Finally, each node in the pseudoflow algorithm has an associated label, which is a non-negative
integer used to preclude certain sequences of merging operations that can negatively impact
performance. Labeling is not strictly necessary for the operation or correctness of the pseudoflow
algorithm but was included as a performance optimization. Therefore labels are not included in
the notation nor the algorithm description in Sections 3.1.3 to 3.1.5, but are discussed separately
in Section 3.1.7.
3.1.3 Pseudoflow Notation
Figure 3.4 introduces the new notation developed for the pseudoflow algorithm. This notation
makes it far easier to understand how the pseudoflow algorithm operates and to keep track of
progress when completing iterations by hand. It is true that experienced practitioners will
generally not perform pseudoflow steps manually and this notation will never be applied to
problems that are even approaching a realistic size. However, similar to the way that the tableau
method is useful to novice researchers learning about the simplex algorithm, this notation is
useful to novice researchers learning about the pseudoflow algorithm.
On the left in Figure 3.4 the numbers on arcs are the current flow of the arc. If the arc does
not have a number, then the flow is zero. The numbers inside nodes are the current excesses
(when positive) and deficits (when negative). If a number is omitted inside a node then it is zero
and this means that the node satisfies the flow balance constraint.
In this notation the capacity is not indicated on any arc. This is because the capacities of all
precedence arcs, arcs which do not connect to either the source or the sink, are infinite, and the
flow along the value arcs, arcs which are connected to either the source or the sink, is always
kept at maximum capacity.
Figure 3.4 Left: The maintained pseudoflow on the flow network is notated with numbers on arcs
for the flow, and numbers within nodes for excesses or deficits. Right: Thick arcs are a part of the
normalized tree, and dotted arcs are not. Gray nodes are strong, white nodes are weak. Figure
adapted with permission from Deutsch, Dağdelen, and Johnson 2022 [56]
This is one of the main departures from the conventional pseudoflow algorithm incorporated
into this description. The conventional approach must keep track of, and continuously check,
capacity information on all arcs because they are generally not the same value nor are they all
infinite. Additionally, in the conventional pseudoflow algorithm the flow on source and sink
adjacent arcs is modified during the second stage of the algorithm, where the maximum flow is
recovered from the minimum cut; however, this is not required. Once the minimum cut is
identified so too is the ultimate pit, and no additional work is necessary because the flow values
are not the main goal in this application.
The normalized tree is notated by making its member arcs bold, and the arcs which are not a
part of the normalized tree dotted or dashed. This is shown on the right in Figure 3.4. Nodes
which are roots are notated with a double circle and contain either an excess or a deficit. Strong
nodes are shaded gray and weak nodes are left unshaded. These two pieces of notation are then
superimposed on top of one another during the execution of the algorithm.
3.1.4 Initialization
The first step in the pseudoflow algorithm is to construct an initial normalized tree and an
initial pseudoflow. It is possible to start the pseudoflow algorithm from any normalized tree with
a valid pseudoflow, but the simplest starting point is to fully saturate all source and sink adjacent
arcs, and include all source and sink adjacent arcs in the normalized tree as in Figure 3.5. This
creates an excess on each positive valued block, and a deficit on each negative valued block.
Figure 3.5 Left: The input ultimate pit problem, with block values a = 7, b = 3, c = −2, d = −2,
e = −2, f = −4. Right: The initial normalized tree; letters near nodes are the node names and not
a part of the notation.
The slight modification to how capacity is handled in this implementation requires the
pseudoflow to be initialized with maximum flow on the source and sink adjacent arcs. The nodes
must also be initialized with a valid label; it is sufficient to set the label of the strong nodes to one
and the label of the weak nodes to zero.
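The initialization is simple enough to state as code; the sketch below uses an illustrative per-node record rather than MineFlow's actual layout. Saturating the source and sink adjacent arcs leaves each block's value as its initial excess or deficit.

```cpp
#include <cstdint>
#include <vector>

// Illustrative per-node state for the initial normalized tree.
struct InitNode {
    int64_t excess = 0;  // positive: excess (strong), negative: deficit (weak)
    int label = 0;
    bool strong = false;
};

// Saturate all source and sink adjacent arcs: each positive block becomes a
// strong root carrying its value as an excess, each negative block a weak
// root carrying a deficit, with labels one (strong) and zero (weak).
std::vector<InitNode> initialize(const std::vector<int64_t>& blockValues) {
    std::vector<InitNode> nodes(blockValues.size());
    for (std::size_t i = 0; i < blockValues.size(); ++i) {
        nodes[i].excess = blockValues[i];
        nodes[i].strong = blockValues[i] > 0;
        nodes[i].label = nodes[i].strong ? 1 : 0;
    }
    return nodes;
}
```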
3.1.5 Algorithm Steps
At each iteration of the pseudoflow algorithm an arc corresponding to a precedence constraint
between a strong node and a weak node is selected to be the merger arc. This merger arc is then
introduced into the normalized tree and the arc between the strong root and the source or sink is
removed. The pseudoflow is then adjusted so that the tree remains normalized. This requires
adjusting flows along the path from the strong root to the weak root, and is called a merge.
During this merging process there may be tree arcs which must be removed from the normalized
tree resulting in new branches with new associated roots, which is called a split. Finally, when
there are no more precedence arcs between strong and weak nodes the algorithm terminates and
all the remaining strong nodes constitute the ultimate pit.
There is no need to continue with the flow recovery step present in the original pseudoflow
algorithm, because the ultimate pit is the sole goal of this approach. Recovering the max flow also
does not have any side effects which could aid in re-initialization or solving similar problems.
In practice choosing which merger arc to use has a large impact on the performance of the
algorithm. The pseudoflow algorithm uses a labeling scheme, discussed further in Section 3.1.7.
These labels help guarantee reasonable performance and avoid several of the problems which
plague the Lerchs and Grossmann algorithm. In this section, however, the labels are not used so
that the small example demonstrates all of the necessary components of the pseudoflow algorithm.
Once the appropriate merger arc is chosen the algorithm identifies the strong root associated
with the underlying strong node, and the weak root with the overlying weak node. The strong
root will, by definition, have some positive excess. The normalized tree must then be updated as
in Algorithm 2.
One of the departures from the conventional pseudoflow algorithm occurs during the walk
from the strong root to the weak root. If the arc is directed in line with this path the flow along
the arc must be increased by the current δ which intuitively corresponds to using currently
available funds to pay for overlying negative valued blocks. These funds will then be ‘spent’ by
the negative valued blocks by directing flow to the sink. In the more general case it would be
necessary to ensure that this increase in flow does not lead to a capacity violation, however in the
context of solving solely for the ultimate pit this check is not necessary because all of these arcs
are precedence constraints with infinite capacity.
Algorithm 2: The merge procedure in the modified pseudoflow algorithm, adapted from
Hochbaum 2001 [46]
// Update the normalized tree
Remove the tree arc connected to the strong root;
Add the merger arc;
δ ← the excess of the strong root;
for all of the arcs along the path from the strong root to the weak root do
    if the arc is directed in line with the path then
        Increase the flow on the arc by δ;
    else
        // Try to decrease the flow on the arc by δ
        if the flow is greater than δ then
            Set the flow to the current flow less δ;
        else
            Split flow on this arc;
            // See Algorithm 3 for details on splitting
During the merge operation if an arc is oriented opposite to the direction of the path from the
strong root to the weak root and the current flow on that arc is less than the currently available
excess, δ, a split is required. If this happens it means that at some stage earlier in the algorithm a
strong root became weak after supporting some of the nodes within the current strong root’s cone
of influence. Therefore the weak nodes which are currently being merged with the current strong
root had, at one point, an underlying positive block supporting them and they may be connected
inappropriately for this new step. The tree must be split into two subtrees at this location so that
we avoid configurations where unnecessary negative valued blocks are included in a strong
subtree. This allows for the two valid outcomes: the subtree remains weak and is correctly
excluded from the ultimate pit, or the positive valued blocks within the subtree are sufficient to
support the negative valued blocks above them with the new 'help' following the merge.
At later stages of the algorithm, after many merge operations, there may be several splitting
operations during the course of a single merge operation. The splitting operation follows in
Algorithm 3.
When there are no longer any precedence arcs between strong nodes and weak nodes the
algorithm terminates. Any nodes which are still classified as strong are then the ultimate pit, and
the sum of their excesses is the ultimate pit value.
Algorithm 3: The split procedure in the modified pseudoflow algorithm, adapted from
Hochbaum 2001 [46]
δ ← the flow along the splitting arc;
The flow along the splitting arc ← 0;
// Note that this leaves a positive excess at the head node
// Update the normalized tree
Remove the split arc;
Add the arc from the head node to the root;
Continue with the merging operation using the new δ;
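The flow arithmetic shared by Algorithms 2 and 3 can be isolated in a few lines. The following self-contained sketch walks the root-to-root path pushing δ units and performs the split bookkeeping; it illustrates the arithmetic only and omits the accompanying tree surgery of adding and removing tree arcs.

```cpp
#include <cstdint>
#include <vector>

struct PathArc {
    int64_t flow = 0;
    bool alongPath = false;  // directed in line with the strong-to-weak path?
};

struct PushResult {
    int64_t arrivedAtWeakRoot;  // flow delivered to the weak root
    int64_t strandedExcess;     // excess left at newly created strong roots
};

// Push 'delta' units along the path from the strong root to the weak root.
// Arcs in line with the path absorb the flow outright (infinite capacity);
// opposing arcs are canceled, splitting whenever they cannot absorb delta.
PushResult pushAlongPath(std::vector<PathArc>& path, int64_t delta) {
    int64_t stranded = 0;
    for (PathArc& arc : path) {
        if (arc.alongPath) {
            arc.flow += delta;   // no capacity check is needed
        } else if (arc.flow > delta) {
            arc.flow -= delta;   // cancel part of the opposing flow
        } else {
            // Split: only arc.flow units can continue past this arc. The
            // remainder stays behind as an excess at the arc's head, which
            // becomes a new strong root.
            stranded += delta - arc.flow;
            delta = arc.flow;
            arc.flow = 0;
        }
    }
    return {delta, stranded};
}
```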
3.1.6 Example
This example serves to illustrate both the algorithm and the notation developed herein. The
dataset for the example is very small, consisting of only six blocks. In order to force a splitting
operation a specific sequence of merge steps is required, as shown on the left in Figure 3.6.
Figure 3.6 Left: The starting network; circled numbers indicate the order of merging arcs in this
example. Right: The result of the first merge between nodes b and e. Figure adapted with
permission from Deutsch, Dağdelen, and Johnson 2022 [56]
The result of the first merge, between b and e, is shown on the right in Figure 3.6. The strong
root, b, remains strong following the first merge and the weak root, e, becomes strong.
The result of the second merge, between b and f, is shown on the left in Figure 3.7. This
merge operation has the consequence of leaving the new root, f, with a deficit which reclassifies
the entire subtree as weak. The third merge, between a and c, and the fourth merge, between a
and d, are similar to the first merge and proceed with no complications. The result is included on
the left in Figure 3.8.
Figure 3.7 Left: The result of the second merge between nodes b and f. Right: The result of the
third merge between nodes a and c. Figure adapted with permission from Deutsch, Dağdelen, and
Johnson 2022 [56]
The final merge operation between nodes a and e begins by pushing d's excess along the path
towards the weak root. The flow between a and d is reduced by three, the flow between a and e is
increased by three, but then there is a problem. The flow between b and e should be reduced by
three (which is the current value of δ), but this would lead to a negative flow which is not allowed.
The flow, therefore, is reduced to zero, leaving one unit of excess flow on e and splitting the tree.
Figure 3.8 Left: The result of the fourth merge between nodes a and d. Right: The result of the
fifth merge between nodes a and e, which requires splitting on the arc between b and e. Figure
adapted with permission from Deutsch, Dağdelen, and Johnson 2022 [56]
At this point there are no longer any strong nodes with overlying weak nodes and the
algorithm terminates with the ultimate pit indicated by all remaining strong nodes. The sum of
the excess across all of the strong roots is the value of the ultimate pit.
3.1.7 Labeling
Hochbaum describes a labeling scheme to improve the performance of the pseudoflow
algorithm. Intuitively, the labeling scheme forces the algorithm to carefully choose the merger arc
at each iteration, avoiding certain sequences of merger arcs which would require more iterations
than necessary. Specifically, the labeling scheme makes it such that the merger arc between nodes
s and w cannot be used again until the labels on both s and w have increased by at least one.
The original labeling scheme in [54] is as follows: All nodes are initially assigned a label of
one. When selecting a merger arc between a strong node s and a weak node w the algorithm must
select an arc such that the label of w is as low as possible. Once the merger arc is selected, the
label of all strong nodes is set to the maximum of its current label and $l_w + 1$. These
straightforward steps are enough to improve both the theoretical complexity of the pseudoflow
algorithm and the practical performance.
One aspect of this labeling scheme which is not ideal is that once a merger arc is selected this
may trigger a relabeling operation across all strong nodes, of which there could be many hundreds
of thousands. This is not strictly necessary. Chandran and Hochbaum [71] provide an alternate
labeling scheme that delays relabeling and improves the computational efficiency. MineFlow uses
this more performant delayed labeling scheme, which is also present in Hochbaum's
implementation and allows strong nodes to have different labels throughout the execution of the
algorithm.
The labeling scheme primarily becomes important in large problems because it can vastly
limit the number of possible merger arcs available at any iteration while simultaneously ensuring
that those merger arcs will have a substantial effect. This is because labeling encourages
connecting strong nodes to weak nodes that have not been considered yet. In the conventional
Lerchs and Grossmann algorithm there is no guidance on which precedence constraint should be
considered, so there is no protection from undesirable sequences. Many Lerchs and Grossmann
implementations continuously loop over all precedence constraints testing if they are between a
strong and weak node until a full loop is completed with no changes. Only then is the algorithm
terminated. However, with labeling incorporated this is not necessary and more efficient stopping
criteria are available.
MineFlow, and Hochbaum's implementation, divide strong roots into a set of buckets which
are differentiated by their label number. At each step a strong node is selected from the bucket
with the highest label number, and considered for possible merger arcs. It is possible to select a
strong root with the lowest label number instead but experimentally this yields poorer
performance. Once no more strong roots are available the algorithm terminates.
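A sketch of this bucket structure follows; the names are illustrative rather than MineFlow's exact internals. Strong roots are pushed with their label and popped from the highest non-empty bucket.

```cpp
#include <deque>
#include <vector>

// Buckets of strong roots keyed by label. Taking roots from the highest
// non-empty bucket experimentally outperforms taking the lowest label.
class StrongRootBuckets {
public:
    void push(int rootId, int label) {
        if (label >= static_cast<int>(buckets_.size()))
            buckets_.resize(label + 1);
        buckets_[label].push_back(rootId);
    }

    // Fills 'rootId' from the highest labeled bucket; returns false once no
    // strong roots remain, at which point the algorithm terminates.
    bool popHighest(int& rootId) {
        for (int l = static_cast<int>(buckets_.size()) - 1; l >= 0; --l) {
            if (!buckets_[l].empty()) {
                rootId = buckets_[l].front();
                buckets_[l].pop_front();
                return true;
            }
        }
        return false;
    }

private:
    std::vector<std::deque<int>> buckets_;
};
```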
If at any stage during the algorithm a strong root is evaluated and no overlying weak nodes
exist then this strong root and its entire subtree are removed from future consideration. Because
the tree is normalized, and there can be no ‘hanging’ weak nodes either, this is a valid and
important practical optimization.
An example showing how labeling works to preclude undesirable merges is included in
Figure 3.9. The value of block c is reduced from −2 to −4 compared to the example in Section
3.1.6 and the sequence of merger arcs is modified as shown on the left. Following the first merge
operation between nodes a and e the labels of a and b are increased to two. The state of the
network after the first three merges is shown on the right in Figure 3.9. At this point nodes a, b, d,
and e all have a label of 2 and the only allowed merge is between b and f, because f has the
lowest available label. This merge will lead to the immediate termination of the algorithm as all
blocks will become weak and no strong nodes will remain. If the labeling scheme were not used
then the next merger arc could be between b and e or b and f. Either of these choices would
require several additional iterations before the correct answer is identified.
Figure 3.9 Left: The modified example with a different sequence of merges as numbers on arcs.
Right: The network after three merges; labels are given as numbers next to the node names.
Figure adapted with permission from Deutsch, Dağdelen, and Johnson 2022 [56]
3.1.8 Pseudoflow Complexity
The modifications developed in the preceding sections do not alter the computational
complexity of the pseudoflow algorithm as originally developed by Hochbaum. They do, however,
have an impact on the practical performance (Section 3.3). The computational complexity is
important, as it dictates how the algorithm is expected to behave as the problem size grows. Mine
planning engineers are often using larger and larger block models with more sophisticated
geometrical requirements in order to represent as much of reality as possible in their mine models
and facilitate improved decision making. Therefore, it is useful to understand how the algorithm
might respond to larger models.
Hochbaum, in 2008, showed that the labeling pseudoflow algorithm, with integer block values,
has a complexity of $O(mn \log n)$ where m is the number of arcs and n is the number of nodes [54].
The complexity can be improved to $O(n^3)$ and even $O(nm \log \frac{n^2}{m})$ by incorporating specific data
structures [55]. Some of these data structures can be tricky to implement in a manner such that
their higher constant time requirements are outweighed by their lower computational complexity.
The applicability of these more sophisticated data structures was not evaluated.
3.2 The MineFlow Implementation
MineFlow is implemented as a C++ library which exposes a few straightforward classes for
defining precedence graphs and computing ultimate pit limits with the modified pseudoflow
algorithm. Additionally, a simple command line executable is provided for smaller explicit
datasets or for simpler datasets with regular block models.
There are three main components to an ultimate pit optimizer: the block values, the
precedence constraints, and the solver itself. Each of these components was carefully designed to
obtain the highest degree of performance and lowest memory requirements.
3.2.1 Block Values
The block values used for the ultimate pit problem are calculated from a wide range of input
variables and parameters as in Section 2.1.3.3. In MineFlow this calculation is assumed to have
been done prior to invoking the library or executable and is considered outside of its scope.
The arithmetic required while solving the ultimate pit problem is very simple; block values are
only added, subtracted, and compared with one another or zero. No multiplication, division, or
other more sophisticated operations are required but a careless implementation can still lead to
issues. The core arithmetic must be precise.
Floating point arithmetic, as typically implemented on modern computers, is inadvisable for
the max flow problem. When using floating point numbers there is the potential for loss of
precision and even entering an infinite loop. A very simple network which exhibits this behavior is
given in Althaus and Mehlhorn 1998 [134]. Additionally, Hochbaum’s original developments
regarding the computational complexity of the generic pseudoflow algorithm rely on integer
capacities to ensure either that the total excess of the strong nodes is strictly reduced or at least
one weak node becomes strong [54]. This is used to show that the algorithm always terminates in
a finite number of steps.
Therefore block values in MineFlow must be provided as integer values. This is not typically a
concern for mining engineers, practitioners, and other end users of the library because input
values can be multiplied by a positive constant and rounded to an integer value. One practical
consideration is that the integer values should not be so large as to potentially lead to overflow.
The default data type for block values in MineFlow is a signed 64-bit integer; however, the GNU
Multiple Precision Arithmetic Library can also be used [135]. This library provides a datatype
which is an arbitrarily large integer limited only by the computer’s available memory. This
precludes overflow but has a negative impact on performance as each individual operation is
slower.
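In practice the conversion is a single scale-and-round pass over the model, as in the sketch below; the scale factor of 100 (cent precision for dollar values) is only an example, and the overflow guard is deliberately coarse.

```cpp
#include <cmath>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Scale floating point block values by a positive constant and round to
// 64-bit integers, guarding against values near the int64_t limit.
std::vector<int64_t> toIntegerValues(const std::vector<double>& values,
                                     double scale = 100.0) {
    std::vector<int64_t> result;
    result.reserve(values.size());
    for (double v : values) {
        double scaled = std::round(v * scale);
        if (std::fabs(scaled) > 9.0e18)  // close to the int64_t limit
            throw std::overflow_error("block value too large after scaling");
        result.push_back(static_cast<int64_t>(scaled));
    }
    return result;
}
```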
A fundamental tenet of MineFlow, which is further developed in Section 3.2.2, is to avoid
expending effort on steps which are not necessary. A more conventional ultimate pit optimizer
would generally collect all of the block values, define all of the precedence constraints, and then
begin solving for the ultimate pit. However, in most real world datasets only a small fraction of
the block values and precedence constraints are ever used, so it is inefficient to be so exhaustive
in the implementation.
the implementation. MineFlow must only know all of the positive valued blocks at the onset of
the optimization procedure as it is only the positive valued blocks, and their antecedents, that
could be included in the final answer.
The block values are defined as a list of positive block identifiers, their values, and a function
which can be queried for other block values as necessary. This even allows the caller to use
MineFlow in cases where the entire block model is not fully defined and avoid the potential for
inappropriate edge effects when the pit extends beyond the original block model limits. If the
total number of possible blocks is known then it can be provided to MineFlow to avoid having to
use a hash map between block identifiers and nodes within the precedence graph. This can lead
to a decrease in runtime.
3.2.2 Precedence Constraints
Precedence constraints are defined on a per block basis as a list of other block identifiers. It is
very important not to generate all precedence constraints because they are not all necessary and
it wastes a substantial amount of time. In MineFlow precedence constraints are defined via a user
provided callback function that when given a base block identifier returns a, possibly empty, list
of antecedent block identifiers. This allows the solver to only request precedence constraints as
necessary and is responsible for much of the speed improvements in this library over the
commercial implementations which often generate all of the precedence constraints upfront.
Because the solver only considers the user provided block identifiers to define precedence
constraints it naturally supports subblocks or other irregular block models. The only restriction is
that those user provided precedence constraints do not form a cycle. This restriction is not
enforced by the library because it would take extra time to check for cycles and is unlikely to
occur in real applications. The higher level routines which are used to define precedence
constraints will generally prevent cycles.
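The shape of this lazy interface can be sketched with standard function objects; the names below are illustrative rather than MineFlow's exact API.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

using BlockIndex = std::int64_t;

// The solver is seeded with the positive blocks only; values and antecedent
// lists are pulled through the callbacks for blocks it actually reaches.
struct LazyPitProblem {
    std::vector<BlockIndex> positiveBlocks;                 // known up front
    std::function<std::int64_t(BlockIndex)> value;          // queried on demand
    std::function<std::vector<BlockIndex>(BlockIndex)> antecedents;  // may be empty
};
```

A regular block model front end would implement the antecedents callback by applying a precomputed search pattern to the block's grid coordinates, so the solver itself never needs to see the model geometry.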
Users do not expect to provide precedence constraints as lists of other blocks, but instead
prefer to use their geometrical information directly. In practice precedence constraints vary by
both location and direction following geometrical constraints and are often specified by a list of
azimuth slope pairs on a per block basis (Figure 3.10). Many blocks often share the same azimuth
slope pair list because they are considered to be members of the same geotechnical zone. The pit
slope between the given azimuths must be interpolated. MineFlow provides linear and cubic
interpolation for this purpose.
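A linear version of that interpolation is sketched below, assuming the azimuth/slope pairs are given in degrees, sorted by azimuth, and number at least two; the wrap-around segment between the last and first azimuths is handled explicitly. Cubic interpolation follows the same structure with a different blending function.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Linearly interpolate the pit slope for 'azimuth' from sorted
// (azimuth, slope) pairs in degrees, wrapping around through north.
double interpolateSlope(const std::vector<std::pair<double, double>>& pairs,
                        double azimuth) {
    auto it = std::lower_bound(pairs.begin(), pairs.end(),
                               std::make_pair(azimuth, 0.0));
    if (it == pairs.begin() || it == pairs.end()) {
        // Between the last and first pairs, crossing 0°/360°.
        const auto& lo = pairs.back();
        const auto& hi = pairs.front();
        double span = 360.0 - lo.first + hi.first;
        double dist = azimuth >= lo.first ? azimuth - lo.first
                                          : azimuth + 360.0 - lo.first;
        return lo.second + (dist / span) * (hi.second - lo.second);
    }
    const auto& lo = *(it - 1);
    const auto& hi = *it;
    double t = (azimuth - lo.first) / (hi.first - lo.first);
    return lo.second + t * (hi.second - lo.second);
}
```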
Figure 3.10 An example slope definition with six azimuth slope pairs. Linear and cubic
interpolation for unspecified directions is shown.
If a slope definition, as a list of azimuth slope pairs, is used with an irregular block model then
the list of antecedent blocks for any given base block is not necessarily easy to determine. Sorting
the block coordinates or using an appropriate acceleration structure, such as an R-tree, may be
appropriate in these circumstances. However, most of the time ultimate pits are calculated using
regular block models and pre-computing the set of antecedent blocks for a given slope definition is
warranted. MineFlow implements the minimum search pattern paradigm from Caccetta and
Giannini, introduced in Section 2.1.3.4.
To define a minimum search pattern the slope definition is required along with the block
dimensions and a maximum offset in the z direction. The routine will then determine the smallest
set of offsets that accurately recreates that slope definition for that block model relying on the
transitive nature of precedence constraints in order to include all necessary blocks.
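For orientation, the sketch below generates the naive 'true' antecedent pattern for a constant slope on a regular block model: every overlying offset whose centroid lies within the slope cone, out to the maximum z offset. A minimum search pattern is the far smaller subset of such offsets that reproduces the rest through transitivity; computing that subset is more involved and is not shown here.

```cpp
#include <cmath>
#include <vector>

struct Offset { int dx, dy, dz; };

// Naive (non-minimal) antecedent offsets for a constant pit slope, given
// block dimensions and a maximum vertical offset. Assumes 0 < slope < 90.
std::vector<Offset> naivePattern(double slopeDegrees,
                                 double sizeX, double sizeY, double sizeZ,
                                 int maxDz) {
    const double pi = 3.14159265358979323846;
    const double tanSlope = std::tan(slopeDegrees * pi / 180.0);
    std::vector<Offset> offsets;
    for (int dz = 1; dz <= maxDz; ++dz) {
        const double reach = dz * sizeZ / tanSlope;  // horizontal reach
        const int nx = static_cast<int>(reach / sizeX);
        const int ny = static_cast<int>(reach / sizeY);
        for (int dy = -ny; dy <= ny; ++dy)
            for (int dx = -nx; dx <= nx; ++dx)
                if (std::hypot(dx * sizeX, dy * sizeY) <= reach)
                    offsets.push_back({dx, dy, dz});
    }
    return offsets;
}
```

For 45° slopes and isometric blocks this reproduces the five block cross at a z offset of one and the full cone above it, which is exactly the redundancy that the minimum search pattern eliminates.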
An example minimum search pattern is included in Figure 3.11. This is a plan view of a
regular block model with cubical blocks where the numbers inside blocks indicate the z offset.
The central square is connected to the five blocks immediately above in a cross pattern, but to
recreate 45° pit slopes it is necessary to include additional blocks at higher elevations. The
important feature of this pattern is that it has the fewest number of blocks possible which
corresponds to far fewer precedence constraints.
Figure 3.11 An example minimum search pattern for 45° slopes and a maximum vertical offset of
9 blocks. Numbers in cells are the z offset.
Determining the appropriate maximum z offset has historically been a point of concern. If the
maximum offset is too high then there will be more precedence constraints and the optimization
will take longer. However, if the maximum z offset is too low then the precedence constraints will
not be accurately represented in the ultimate pit which could even have safety implications. The
library developed in this chapter takes three steps to address this concern:
• Using a performant pseudoflow based solver with lazily generated precedence constraints and
all the theoretical and practical improvements developed in this chapter makes it so that the
solver is far less sensitive to the number of precedence constraints. Certainly the number of
precedence constraints should be minimized, but solvers with less efficient implementations
will have a multiplicative slowdown as more precedence constraints are used.
• Minimum search patterns also minimize the impact of a higher maximum z offset. For
example, a 45° minimum search pattern with isometric blocks only includes new precedence
constraints at z offsets of 1, 3, 5, 9, 13, 17, 19, and 25 (up to a maximum z offset of 25).
That is, several maximum z offsets are 'free' because the minimum search pattern does not
need any additional blocks to accurately capture all required blocks.
• Finally, a method for evaluating the accuracy of a given precedence pattern for a given
block model is developed.
The accuracy of a given precedence pattern is a measure of how close the pattern comes to
achieving the true set of precedence constraints, and efficiency is a measure of how wasteful the
pattern is. A practitioner must decide on how to trade off computational effort versus accuracy,
and always wants the most efficient precedence pattern. Efficiency, in this context, is already
maximized by virtue of using minimum search patterns, but accuracy requires additional effort to
define and measure. Note that the true set of precedence constraints is determined by simply
connecting the base block to every overlying block that is within the provided slope constraints.
The accuracy of a precedence pattern is dependent on the size of the block model on which it
will be used. Continuing with the isometric block model and 45° slopes, if the number of
benches is unrealistically low then it may be acceptable to connect each base block to only the
five blocks above in a cross pattern, the 1:5 pattern. However, this very quickly becomes
unacceptable as the number of benches increases, because when this pattern is extended over
several benches the resulting pit looks comically unrealistic (Figure 3.12).
Figure 3.12 Left: The 'one-five' precedence pattern extended 30 blocks vertically. Right: the true
set of antecedent blocks for a single base block.
A numerical method of measuring accuracy is preferred. For this purpose the Matthews
correlation coefficient is a useful measure [136, 137]. The Matthews correlation coefficient is
defined as follows in Equation 3.1.
$$\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} \times \frac{1}{2} + \frac{1}{2} \qquad (3.1)$$
where MCC is the Matthews correlation coefficient and TP, TN, FP, and FN are the counts of
the true positives, true negatives, false positives, and false negatives respectively. This quantity
ranges from 0 to 1, with 1 representing perfect agreement between observed and predicted results
and 0 perfect disagreement. This measure is preferred over the more conventional accuracy
measure in many applications because the relative sizes of the confusion matrix categories are
taken into account. That is, in this context, the number of true negatives can be very large and
should not be given as much weight as the false negatives or false positives.
To generate the confusion matrix an empty block model of the appropriate size is constructed
and all blocks are classified as true negatives. The lowest, central-most block is identified and
classified as a true positive. From this block the true antecedents are identified by evaluating all
overlying blocks against the given azimuth slope pairs, initially classifying all of these blocks as
false negatives. Then the pattern is repeatedly applied starting from the initial block, reclassifying
false negatives as true positives and true negatives as false positives. Finally, the counts of each
type of block are tallied and the Matthews correlation coefficient is determined.
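The coefficient itself is then a few lines of arithmetic; this sketch rescales it to [0, 1] as in Equation 3.1, using doubles because the intermediate products can overflow 64-bit integers on large block models.

```cpp
#include <cmath>

// Matthews correlation coefficient rescaled onto [0, 1] (Equation 3.1),
// computed from the confusion matrix counts described above.
double slopeAccuracy(double tp, double tn, double fp, double fn) {
    const double denom =
        std::sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn));
    if (denom == 0.0) return 0.5;  // degenerate confusion matrix
    const double mcc = (tp * tn - fp * fn) / denom;
    return 0.5 * mcc + 0.5;        // map [-1, 1] onto [0, 1]
}
```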
With this approach it is possible to numerically quantify the impact of a given maximum z
offset for block models of a specific size, and also to compare the minimum search patterns with
other precedence patterns. The results of a simple evaluation of this nature are included in
Figure 3.13. This evaluation continues with a regular isometric block model with 45° slopes, and
considers the Knight's move pattern discussed in Section 2.1.3.4. Along the y axis is the
calculated Matthews correlation coefficient for a block model with the number of benches given
on the x axis. Each line corresponds to a particular precedence scheme; the bolded line
corresponds to the Knight's move pattern and the others are all minimum search patterns of
various maximum z offsets.
Figure 3.13 The slope accuracy (MCC) of several minimum search patterns (maximum z offsets
of 1, 3, 5, 9, 17, and 25) and the Knight's move pattern when used for block models with the
indicated number of benches.
The performance of the Knight's move pattern is fundamentally different from the minimum
search patterns because it connects the base block to some blocks with less than 45° slopes. This
causes some false positives, whereas the minimum search patterns only ever have false negatives.
Therefore, the minimum search patterns will only ever get worse as the number of benches
increases, although only very slightly.
Most realistic block models have around fifty to seventy benches and the pits very rarely reach
the bottom of the block model. So it seems appropriate to use 17 as a starting max z offset which
connects each base block to 45 overlying blocks, and potentially increase the maximum z offset to
25 (using 61 overlying blocks) in some circumstances.
3.2.3 Pseudoflow Solver
The pseudoflow algorithm as described in Section 3.1.5 translates relatively straightforwardly
into an actual C++ implementation, although some care must be taken. There are two main data
types, nodes and arcs, which are stored in two containers called the node pool and the arc pool.
The pseudoflow graph itself contains both pools and some bookkeeping information which keeps
track of how many nodes exist with a given label and the buckets of strong roots differentiated
by label.
Each node contains its value, a pointer to the arc which leads towards the source or sink (this
forms the normalized tree), the node's current label, pointers which form a linked list of
descendants, information regarding that node's precedence constraints, the original block index,
and a pointer to the original root adjacent arc. This is a relatively large amount of information
for each node, so nodes need only be generated as necessary in order to minimize memory use;
additionally, nodes are only ever created during the course of the algorithm (as new antecedents
are required) and never removed. However, during testing it was found that preallocating all of
the nodes had a positive impact on performance despite the higher memory use. This
preallocation is preferred unless the library user desires the lighter implementation.
Arcs contain two pointers to their head and tail (null if this would go to the sink or source
respectively), and the current flow along the arc. The capacity is not included because it is not
necessary. Arcs are kept within a large object pool which also maintains a free list of available
arcs that can be re-used. Arcs are created when merging a weak node with a strong root and
removed during the split procedure.
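An illustrative rendering of this state follows; the field names and exact types are placeholders rather than MineFlow's actual declarations.

```cpp
#include <cstdint>

struct Arc;  // forward declaration

// Per-node state: the parent pointer encodes the normalized tree, and the
// child/sibling pointers form the intrusive linked list of descendants used
// when reclassifying whole subtrees as strong or weak.
struct Node {
    std::int64_t excess;      // current excess (positive) or deficit (negative)
    Arc* parent;              // tree arc toward the source or sink
    int label;
    Node* firstChild;         // intrusive list of descendants
    Node* nextSibling;
    std::int64_t blockIndex;  // identifier of the original block
    Arc* rootArc;             // the original source/sink adjacent arc
};

// Per-arc state: no capacity field is required (Section 3.1.3).
struct Arc {
    Node* tail;               // nullptr when the tail is the source
    Node* head;               // nullptr when the head is the sink
    std::int64_t flow;
};
```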
Besides the node and arc pools, the graph maintains the buckets of strong roots as an array of
queues and the label counts within another array. These structures are relatively small and,
although vital to determining the next strong root to process, are not hugely important to
performance. In general most components of the pseudoflow solver are kept as simple as possible
in order to be as fast as possible. Facilities are included for reporting various statistics such as
the complete elapsed time and how many merge and split operations occurred.
3.3 Computational Comparison
This chapter has focused on the ideas and implementation details behind an improved
ultimate pit solver that professes to be both correct and more computationally efficient than
commercially available alternatives. This claim must be supported by evidence. Five block
models were collected ranging in size from 374,400 blocks to 16,244,739 blocks and imported into
five different commercial software packages with ultimate pit optimizers. Each problem was also
transformed into the pure max flow format in order to compare with Hochbaum’s original
implementation. In all cases MineFlow computed the ultimate pits in less time.
For each package and each problem five runs were completed. The solution times reported in
Table 3.2 are given as the average of the middle three times, excluding the fastest and the slowest
in an effort to minimize the effect of other processes. All computations were completed on a
Windows 10 machine with a 3.70GHz Intel Xeon CPU E3-1245 v6 processor with 32 GB of
available RAM. The times were measured as the complete time as reported by the package. This
necessarily includes the time to read the input and write the output which can vary between
packages, especially because most commercial packages use some proprietary binary format for
their block models.
The precise number of mined blocks and total pit value varies by less than 1% between the
commercial packages due to what appear to be discrepancies in exactly how precedence
constraints are handled. The reported number of precedence constraints in the table is the total
number as generated by MineFlow from an exhaustive search across all blocks. All of these
precedence constraints are given to Hochbaum’s pseudo fifo program, because it is a general
utility for solving the max flow min cut problem and does not generate precedence constraints.
Notably package C also computed identical results to MineFlow for all datasets even though it
generated its own precedence constraints. The commercial packages did not report the number of
precedence constraints that they used.
The commercial packages are anonymized to respect the wishes of some of the software
vendors. All five packages use a network flow based implementation however the precise details
are not published or publicly available. One package uses the Push-Relabel algorithm and the
other four report to use some variant of Pseudoflow.
For the smallest dataset MineFlow reports a solution time of zero seconds. This is because the
entire problem solves in less than 500 milliseconds.
The 'Copper Pipe' dataset is interesting because it is the same size as the McLaughlin dataset
(approximately 3 million blocks) but often solves much more slowly in the commercial packages.
This is most notable with package D, where the McLaughlin dataset solves in 34 seconds while
the Copper Pipe dataset takes four minutes and fifty seconds. This dataset, as the name suggests,
consists
of a large vertical porphyry which seems to pose problems for many solvers. This is potentially
because many more nodes need to be explored to classify branches as weak compared to other
datasets.
3.3.1 MineLib Results
MineLib is a library of eleven publicly available test problem instances for three classical open
pit mining problems including the ultimate pit limit problem [138]. A small wrapper around
MineFlow was developed to adapt the MineLib format and compute the ultimate pit solution
using MineFlow.
The high performance server, isengard, at the Colorado School of Mines was used to solve all
eleven problem instances. The server has 48 Intel(R) Xeon(R) E5-2690 v3 @ 2.60GHz processors
and 377 Gigabytes of RAM, although not all of this computing power was used exclusively for
this comparison. In all instances the results as computed by MineFlow were identical to those
reported by Espinoza et al. The problem instances and elapsed solving time, in milliseconds, are
tabulated in Table 3.3.
Table 3.3 Summary information applying MineFlow to the MineLib 'upit' problem instances.
Name               Number of Blocks   Precedence Constraints   Elapsed solving time
newman1            1,060              3,922                    2 ms
zuck small         9,400              145,640                  17 ms
kd                 14,153             219,778                  20 ms
zuck medium        29,277             1,271,207                95 ms
p4hd               40,947             738,609                  59 ms
marvin             53,271             650,631                  18 ms
w23                74,260             764,786                  106 ms
zuck large         96,821             1,053,105                190 ms
sm2                99,014             96,642                   40 ms
mclaughlin limit   112,687            3,035,483                291 ms
mclaughlin         2,140,342          73,143,770               489 ms
The precedence constraints for the MineLib problem instances are fully specified in a flat text
file and therefore MineFlow is unable to take advantage of its ability to lazily generate precedence
constraints. This does not seem to greatly increase computation time on these smaller problem
instances.
3.4 Discussion
This chapter has focused on the development of a fast, memory efficient implementation of
Hochbaum's pseudoflow algorithm. The algorithm has been slightly specialized for the ultimate
CHAPTER 4
THE ULTIMATE PIT PROBLEM WITH MINIMUM MINING WIDTH CONSTRAINTS
A series of viable approaches to the ultimate pit problem with minimum mining width
constraints are presented in this chapter. The minimum mining width is a fundamental
operational constraint and is unfortunately commonly left as an afterthought for the design
engineer drafting the polygonal pit designs. Incorporating it directly into the ultimate pit problem
is of utmost importance in ensuring that pit designs accurately reflect how they will eventually be
developed and produced. Disregarding operational constraints early in the mine planning process
leads to overestimating the value of a mineral deposit and can cause costly suboptimal decisions.
Section 4.1 introduces the concept of minimum mining width constraints and addresses two
flawed approaches for incorporating them into open-pit designs. These methods are natural and
easy to conceptualize, but suffer from drawbacks when considered carefully.
Section 4.2 describes the formulation and the framework within which all of the subsequent
approaches are developed. The precise form of minimum mining width constraints considered in
this dissertation is documented along with relevant extensions and modifications.
Section 4.3 is a brief diversion which considers a two-dimensional simplified version of the
ultimate pit problem with minimum mining width constraints and develops an extension to
Lerchs and Grossmann’s original dynamic programming algorithm. This extension incorporates
minimum mining width constraints and addresses a minor error in the original description.
Section 4.4 presents the different solvers which take in an input problem and return a pit that
satisfies minimum mining width constraints. This section describes straightforward solvers which
are geometric, and do not consider economic block values, alongside solvers which are full
optimization based approaches.
Section 4.5 discusses methods to reduce the problem size by identifying both inner and outer
bounding pits. An inner bounding pit serves as a baseline solution with a positive contained value
such that these blocks will be contained within the final pit limits. An outer bounding pit is a set
of blocks that necessarily contains the optimal answer. Practically these methods are important
when full 3D models from real mines are used.
The methods considered in Section 4.4 vary considerably in terms of solution quality and
required time to compute the solution. Section 4.6 presents a comparison of the methods and
identifies the current best method, which is most applicable to large models with tens of
millions of blocks and hundreds of millions of implicit constraints.
4.1 Minimum Mining Width Preliminaries
Modern open-pit mines are developed using large machinery that requires a minimum amount
of operating area to perform its tasks in a safe and effective manner. This operating area
should be incorporated into the design process at an early stage as a constraint so that any
resulting designs are usable with only minor modifications. Ignoring these operational constraints
will necessarily lead to designs which overestimate the value of the deposit and do not provide
adequate information for downstream decision making and design.
There are two natural approaches to incorporating minimum mining width constraints that
are often considered as potential solutions. The most common approach is to ignore minimum
mining width constraints, solve for the ultimate pit, and then modify the result manually.
Another approach is to try to reblock the model to use larger blocks that inherently satisfy the
minimum mining width constraints. Neither of these approaches are ideal.
4.1.1 Manually Enforcing Minimum Mining Width Constraints
Many practitioners faced with creating an open-pit mine design that is operational do not
have the tools to do so directly. However, they generally do have a conventional ultimate pit
optimizer. It is natural to solve for the ultimate pit and then try to manually modify that result
into an operational design.
The most common method of doing this is to display the ultimate pit on planar sections and
then draw polygons around the mined blocks which are of an adequate size. Most CAD packages
have some additional tooling for this purpose and will often add extra elements on screen, such as
a user defined circle with a radius equal to the operating width that follows the cursor. This
process, in cartoon form, can be seen in Figure 4.1.
The bench section displayed in Figure 4.1 is at a low elevation in the ultimate pit, which is
where minimum mining width violations occur. The area currently being contoured on the left
side of the figure is of sufficient size so very few modifications are necessary, however the northern
collection of blocks is likely to be excluded from the manually cleaned design regardless if that is
the correct decision or not. This small spur of blocks could be included in the final design by
mining additional blocks to the east or west, or some combination of expansion or contraction to
achieve the highest value. Most practitioners manually incorporating minimum mining width
constraints will not evaluate all of the possible alternatives and instead use their best judgment.
Similarly, the collection of mined blocks in the northeast of the section, which currently consists of
three small clusters of blocks, is problematic. Some of the blocks could be included in a
mineable polygon but it is unclear how best to proceed from only reviewing the display.
Figure 4.1 Manually contouring a block section. Shaded blocks are included in the ultimate pit.
Many open-pit designers will lean towards removing unmineable collections of blocks instead
of incorporating additional waste. This is because when that freshly included waste is near the
bottom of the pit it will often require extracting more waste higher up due to precedence
constraints. However those blocks which are mined at the bottom of the pit are exactly the
positive blocks which pay for the blocks above them. Excluding them will necessarily reduce
profit which may have been better used to pay for additional waste blocks. Removing unmineable
blocks at the bottom of the pit is not always the wrong thing to do. However, trying to
incorporate minimum mining width constraints manually is unlikely to yield the optimal result.
Another issue with manually enforcing minimum mining width constraints is that it is very
easy to miss non-obvious improvements to contained value. The example two-dimensional
ultimate pit model with a two block minimum mining width in Figure 4.7 demonstrates this issue.
In this example a practitioner might remove the lowest block (of value three) to satisfy a two
block minimum mining width constraint, but this is overly conservative and misses out on
expanding the pit at a higher elevation towards the right-hand side of the section. By expanding
the pit an additional block, of value two, near the top right can also be mined for profit and help
pay for the mining width. In realistic 3D models these sorts of possibilities for recouping value
used to enforce minimum mining width constraints will almost certainly be missed unless more
formal optimization methods are used.
Finally, this manual process takes a long time and must be repeated for each and every pit
that is to be evaluated. If a series of calculated pits is to be used in the context of uncertainty
analysis or as part of a sensitivity study, this manual process is unusable. A computerized
optimization approach to enforcing minimum mining width constraints is desired.
One advantage of enforcing minimum mining width constraints manually is that the process
can be combined with enforcing several other objectives that are hard to quantify or hard to
model. An open-pit mine design is not complete without incorporating bench access into the
design by adding necessary ramps and access roads. Additionally, other operational or geometrical
constraints might be challenging to capture in the optimization model but straightforward for an
experienced designer to incorporate. In these situations enforcing minimum mining width
constraints might be one of a myriad of other manual constraints that must be considered, and
when considered in conjunction with these other objectives might not be a great concern.
However, even in these circumstances, having minimum mining width constraints already satisfied
will necessarily improve the subsequent manual design process.
4.1.2 Re-blocking
Block models, as discussed in Section 2.1.3.1, are the primary means of modeling open-pit
mine designs and form the basis for the ultimate pit problem with or without minimum mining
width constraints. Geologists, geotechnical engineers, and mining engineers will generally work
together to balance the block sizes in order to accurately represent the deposit geology and
facilitate open-pit mine planning. These block sizes are invariably smaller than the required
operating width, which can be quite large, and when solving with conventional methods that
consider each block independently, the results will not satisfy minimum mining width constraints.
A natural suggestion for addressing this issue is to use larger blocks that, when mined
independently, would form an operational design.
These larger blocks could be used by the geologists and geostatisticians in the earlier modeling
phase as well, however due to volume variance and dilution concerns this might not be desired.
An alternative is to take the block model with smaller blocks and re-block it to a model with
larger blocks that combines blocks together using the appropriate means. In a re-blocking
procedure the economic block value variables should generally be summed together to form the
larger block, whereas any grade variables may need to be averaged. However there are several
downsides to using a re-blocked model.
Minimum mining width constraints are planar in nature and rely on ensuring that equipment
and operations have enough space laterally to perform their tasks. There is no vertical component
to a minimum mining width constraint, but re-blocking an input block model to contain blocks
that are laterally expansive but still only as tall as the bench height will cause two problems.
First, it is very difficult to correctly represent precedence constraints on blocks that have abnormal
aspect ratios. A bench may be 10 to 20 meters tall, but a typical operating width could be on the
order of 50 meters. While it is possible to create precedence constraints between blocks of this
shape, recreating the usual pit slopes of 40-50 degrees will not lead to desirable pits. Base blocks
will be connected to blocks that are immediately above them, but blocks that are offset in the x
and y directions can only be included after several benches without enforcing pit slopes that are
far too shallow.
Additionally, it is not clear exactly where to start the re-blocking in order to achieve the best
result. There is no innate reason to start re-blocking from the frontmost, leftmost block, and this
may not be the best location. A different origin for the re-blocked model might lead to a better
result, and this choice should be evaluated. In computing, the term 'aliasing' is often used to
describe the jagged appearance of curved or diagonal lines on low-resolution displays, which is
similar to this problem.
Re-blocking to blocks that are larger than the bench height in the z direction is possible and
somewhat avoids the issue with representing precedence constraints. However the aliasing issues
remain, now extended to the z dimension. Finally, block models with larger blocks can
incorporate undesirable dilution, and are limited to rectangular operating areas.
If at all possible it is better to avoid re-blocking to satisfy operational constraints. An
approach that instead enforces small collections of blocks in operational shapes is preferred. In this
fashion precedence constraints remain unchanged, there is no unnecessary dilution, and the
operating areas can be constructed in the most appropriate shape for any given mine.
4.2 The Ultimate Pit Problem with Minimum Mining Width Constraints
A suitable formulation of the ultimate pit problem with minimum mining width constraints
must satisfy three required criteria. These criteria are:
1. The minimum mining width constraints must not interfere with the aims of the original
ultimate pit optimization. The results must still satisfy precedence constraints and must
still maximize the total value of the mined blocks.
2. The minimum mining width constraints must adequately capture and model the real world
concept of ensuring adequate operating area for heavy machinery in all areas of the open-pit
mine. They should be flexible enough to accommodate the typical operating widths that are
encountered in real world problems.
3. And finally, the minimum mining width constraints must be suitable for problems that are
of a real world size. That is, they should be applicable to situations where the total number
of blocks is in the tens or even hundreds of millions and there may be hundreds of millions of
precedence constraints, and now hundreds of millions of minimum mining width constraints.
These criteria guide the formulation towards being as simple as possible and to being an
extension to the original ultimate pit problem. The main formulation in this chapter is the
maximum satisfiability formulation in Deutsch 2019 [121], but is presented as an integer
programming problem in order to facilitate certain extensions and several of the solution methods
in the subsequent sections.
The main idea is to represent minimum mining width constraints as arbitrary sets of blocks
that when mined together form a valid operational area. This allows for complete flexibility in
specifying operational areas, that can even vary throughout the mined area. Additionally, as
these mining width sets are generally of a very similar form they can be defined implicitly, and do
not need to be explicitly generated. This is similar to how precedence constraints are handled.
Typical operational areas consist of small contiguous collections of blocks. It is generally more
efficient to specify minimum mining width constraints with a ‘template’ of blocks instead of on an
individual per-block basis. Several example templates are shown in Figure 4.2. These templates
collect several blocks in the x and y dimensions of the block model into small mineable groups
that, when all mined together, represent a valid operational area.
Figure 4.2 Example minimum mining width constraint templates, arranged from more suitable (left) to less suitable (right).
As indicated in Figure 4.2 the templates corresponding to smaller operational areas are
generally more suitable. This is because incorporating minimum mining width constraints has a
substantial impact on runtime. Larger mining width sets, on the order of 50 or more blocks per
set, are more difficult to satisfy and should generally be avoided. This is still in keeping with the
third original criterion, as most open-pit mines use blocks of a middling size - for example a mine
modeled with 10 by 10 meter blocks may have an associated operating area diameter of 40 or 50
meters. The theory and algorithms developed herein do not enforce any arbitrary limit, though;
this is primarily a practical consideration.
Figure 4.3 shows two ultimate pit limits. The pit on the left is the conventional ultimate pit
which does not satisfy minimum mining width constraints. The pit on the right satisfies a roughly
5x5 minimum mining width (the fourth template from the left in Figure 4.2). The outlined areas
are shown in more detail in Figure 4.4.
4.2.1 The Main Integer Programming Formulation
The foundational formulation for the ultimate pit problem with minimum mining width
constraints is as follows. Auxiliary variables are used, one for each mining width set, in order to
ensure the resulting ultimate pit is operationally feasible.
Sets:
• $b \in \mathcal{B}$, the set of all blocks.
• $\hat{b} \in \hat{\mathcal{B}}_b$, the set of antecedent blocks that must be mined if block $b$ is to be mined.
• $w \in \mathcal{W}$, the set of all mining widths.
• $\bar{b} \in \bar{\mathcal{B}}_w$, the set of blocks that are within mining width $w$.
• $\bar{w} \in \bar{\mathcal{W}}_b$, the set of mining widths of which block $b$ is a member.

Parameters:
• $v_b$, the economic block value of block $b$.

Variables:
• $X_b$, 1 if block $b$ is mined, 0 otherwise.
• $M_w$, 1 if mining width $w$ is satisfied, 0 otherwise.

The Ultimate Pit Problem with Minimum Mining Width Constraints:

$$\text{maximize} \quad \sum_{b \in \mathcal{B}} v_b X_b \tag{4.1}$$
$$\text{s.t.} \quad X_b - X_{\hat{b}} \le 0 \qquad \forall b \in \mathcal{B},\ \hat{b} \in \hat{\mathcal{B}}_b \tag{4.2}$$
$$M_w - X_{\bar{b}} \le 0 \qquad \forall w \in \mathcal{W},\ \bar{b} \in \bar{\mathcal{B}}_w \tag{4.3}$$
$$X_b - \sum_{\bar{w} \in \bar{\mathcal{W}}_b} M_{\bar{w}} \le 0 \qquad \forall b \in \mathcal{B} \tag{4.4}$$
$$0 \le X_b, M_w \le 1 \qquad \forall b \in \mathcal{B},\ \forall w \in \mathcal{W} \tag{4.5}$$
$$X_b \text{ integer} \qquad \forall b \in \mathcal{B} \tag{4.6}$$
Function 4.1 is the objective function which is the same as for the conventional ultimate pit
problem. The aim is to maximize the undiscounted value of the mined blocks. Inequality 4.2
specifies the precedence constraints which are again the same as the conventional ultimate pit
problem. The purpose of these constraints is to enforce geotechnically stable designs and preclude
underground mining.

Inequality 4.3 contains the first set of new constraints, which are called the assignment
constraints. These constraints restrict the value of $M_w$ to be zero for a given width $w$ if not all of
the blocks within that width are mined. That is, $M_w$ can only be one if all of its associated blocks
are also one. By themselves the assignment constraints don't do anything to modify the results
from the original ultimate pit problem. Inequality 4.4 then specifies the enforcement constraints
which, when combined with the assignment constraints, enforce operational areas in the resulting
pit model. The enforcement constraints are on a per block basis, and are interpreted as: block $X_b$
can be one if and only if at least one of its associated auxiliary variables is also one.
Inequality 4.5 precludes unreasonable variable values, and Inequality 4.6 enforces integrality to
prevent partially mining blocks.
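To make the formulation concrete, the following is a minimal sketch of how Formulation 4.1-4.6 could be assembled with the open-source PuLP modeller. Only the $X_b$ variables are declared integral, matching Inequalities 4.5 and 4.6; the container names (values, antecedents, widths_of, blocks_of) are illustrative assumptions of the sketch, and, as discussed in Section 4.4.5, a general-purpose solver will not scale to realistic instances, so this is for illustration only.

```python
import pulp

def build_mmw_model(values, antecedents, widths_of, blocks_of):
    """Assemble Formulation 4.1-4.6 with the PuLP modeller.

    values[b]      -> v_b, the economic block value of block b
    antecedents[b] -> the antecedent blocks of block b
    blocks_of[w]   -> the blocks within mining width w
    widths_of[b]   -> the mining widths containing block b
    (All container names are illustrative.)"""
    prob = pulp.LpProblem("mmw_ultimate_pit", pulp.LpMaximize)
    X = {b: pulp.LpVariable(f"X_{b}", cat="Binary") for b in values}
    M = {w: pulp.LpVariable(f"M_{w}", lowBound=0, upBound=1)
         for w in blocks_of}

    # Objective (4.1): maximize the undiscounted value of mined blocks.
    prob += pulp.lpSum(values[b] * X[b] for b in values)

    # Precedence constraints (4.2).
    for b, ants in antecedents.items():
        for a in ants:
            prob += X[b] - X[a] <= 0

    # Assignment constraints (4.3): M_w can only be 1 if every block
    # within width w is mined.
    for w, blocks in blocks_of.items():
        for b in blocks:
            prob += M[w] - X[b] <= 0

    # Enforcement constraints (4.4): a mined block must belong to at
    # least one fully mined width.
    for b, widths in widths_of.items():
        prob += X[b] - pulp.lpSum(M[w] for w in widths) <= 0

    return prob, X, M
```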
4.2.1.1 Mathematical Nature of the Enforcement Constraints
Even with the added complexity of the enforcement constraints (Inequality 4.4) it would be
very convenient if the model retained a nice mathematical structure similar to the original
ultimate pit problem. That is, it would be nice if the constraint matrix were still totally
unimodular or if a network optimization approach was still applicable.
The values within the constraint matrix are contained within $\{-1, 0, 1\}$, but even very simple
systems are not totally unimodular. The tiny model with three blocks and three mining widths
illustrated in Figure 4.5 demonstrates this.
The expanded enforcement constraints for this model follow in Inequalities 4.7 to 4.9.
$$X_1 - M_1 - M_2 \le 0 \tag{4.7}$$
$$X_2 - M_1 - M_3 \le 0 \tag{4.8}$$
$$X_3 - M_2 - M_3 \le 0 \tag{4.9}$$
Figure 4.5 Small example mining width configuration with three variables and three minimum
mining width constraints consisting of two variables each.
The submatrix corresponding to the $M$ variables is isolated in Equation 4.10 and has a
determinant of 2.

$$\begin{vmatrix} -1 & -1 & 0 \\ -1 & 0 & -1 \\ 0 & -1 & -1 \end{vmatrix} = 2 \tag{4.10}$$
Thus this formulation does not contain a totally unimodular constraint matrix. The addition
of the enforcement constraints, which are necessary to enforce operational mining areas, has
ruined the nice mathematical structure of the ultimate pit problem, and additional effort to
mitigate this is warranted.
The enforcement constraints are covering constraints which are essentially toggled on and off
by their associated block variable. In a logical context they can be considered as or constraints,
which is how they were developed in the original maximum satisfiability formulation. The
constraint is satisfied if the block is not mined, $X_b \leftarrow 0$, or one of the associated mining width
auxiliary variables is mined, $M_w \leftarrow 1$.
4.2.1.2 Reducing the Number of Enforcement Constraints and Auxiliary Variables
There are a lot of enforcement constraints. In Inequality 4.4 there is one enforcement
constraint for every block in the original model which is on the order of tens of millions of
constraints for a realistic model. Each of these constraints contains several non zero columns,
although realistically most models should contain on the order of a few dozen up to maybe fifty.
However, that is still far too many for a conventional solver to handle.
The first simplification is that enforcement constraints and associated minimum mining widths
are only required on positive valued blocks in the original model. Non positive valued blocks are
only mined when a positive block below them is mined. If those positive valued blocks satisfy a
minimum mining width then typically the non positive blocks above them will also.
Finally, the enforcement constraints can be implemented in a deferred or lazy fashion as in a
realistic model there may be on the order of several thousand violating blocks in the ultimate pit
to begin with. Depending on the approach used this can vastly reduce the actual number of
enforcement constraints and auxiliary variables required.
4.2.1.3 Mathematical Nature of the Assignment Constraints
The assignment constraints (Inequality 4.3) are identical to the precedence constraints. These
constraints form a totally unimodular substructure similar to the precedence constraints in the
original ultimate pit problem. In Sections 4.4.6 and 4.4.7 this property is exploited.
4.2.1.4 Alternative Assignment Constraints
During development an alternative version of assignment constraints was considered. This
alternative version is documented in Inequality 4.11 which could be used to replace Inequality 4.3
in the original formulation.
$$M_w + X_b - \frac{1}{|\bar{\mathcal{B}}_w| - 1} \sum_{\substack{b' \in \bar{\mathcal{B}}_w \\ b' \neq b}} X_{b'} \le 1 \qquad \forall w \in \mathcal{W},\ b \in \bar{\mathcal{B}}_w \tag{4.11}$$

Where $|\bar{\mathcal{B}}_w|$ denotes the number of blocks in mining width set $w$. One interpretation of this
constraint is that the only way $M_w$ and $X_b$ are both 1 is if all $X_{b'}$ are also 1. This second type
of assignment constraint is more complicated, and requires values in the constraint matrix that
are not in $\{-1, 0, 1\}$. There is no change in the number of constraints with this method. There
are more non zeros in the constraint matrix, however the outcome is the same and integer
solutions are identical.
The differences arise when considering a linear relaxation of both types of assignment
constraints. This second form, Inequality 4.11, leads to a looser linear relaxation with a higher
economic value that is calculated much more quickly. In practice a tighter linear relaxation is preferred
during a typical branch and bound approach to solving the integer programming problems, but
the increase in speed can be substantial.
One example problem with 13 million rows and 724 thousand columns contained 38 million
non zero entries with the original formulation, and 185 million non zero entries with this
secondary formulation. The objective value achieved in the linear relaxations was 28.295 million
and 28.417 million respectively. The total time required to achieve these results was 7 hours and
41 minutes with the original formulation compared to 21 minutes with this alternative.
Note that fractional values in the constraint matrix can be avoided by multiplying each
constraint by $|\bar{\mathcal{B}}_w| - 1$. This is recommended to avoid any numerical instability issues associated
with adding fractions together.
4.2.1.5 Precluding Other Inoperable Block Configurations
This formulation for the ultimate pit problem with minimum mining width constraints does
not preclude all unmineable configurations of blocks. For example the single unmined block on
the left in Figure 4.6, or the non ideal ‘peanut’ combination of two mining widths on the right of
Figure 4.6.
Figure 4.6 Left: An inoperable configuration of blocks permitted by the original formulation.
Right: Another inoperable configuration.
Neither of these block configurations are realistic, and should potentially be excluded from a
truly operational ultimate pit design. One method of excluding these configurations would be to
not only require the mined blocks to satisfy minimum mining widths - but also the unmined
blocks. This somewhat unconventional approach would require that any unmined blocks are a
part of a suitable collection of other unmined blocks that is large enough.
Another approach would be to add additional constraints on the $M_w$ variables themselves that
preclude certain undesirable configurations. For example two nearby minimum mining width sets
could only be mined if all of the minimum mining width sets between them were also mined.
4.2.1.6 Hierarchical Minimum Mining Width Encoding
Especially with rectangular minimum mining width sets there is a strong element of
self-similarity in this problem. As an example, if a $4 \times 4$ minimum mining width area was desired
for a regular block model this could be encoded hierarchically. Minimum mining width sets could
be constructed as per usual for $2 \times 2$ collections of blocks, with each associated auxiliary variable
given four assignment constraints for the four relevant blocks. Then a second layer of minimum
mining width constraints could be defined for $3 \times 3$ collections of blocks, that are assigned if the
four overlapping $2 \times 2$ collections of blocks were mined. Finally, a third layer of minimum mining
width constraints for the desired $4 \times 4$ groups of blocks could be assigned in terms of the four
contained $3 \times 3$ collections of blocks.

In such an encoding there are more auxiliary variables than the conventional formulation,
however for large sections there will be fewer constraints. The number of assignment constraints,
$N_D$, with direct $4 \times 4$ mining width sets is given in Equation 4.12.

$$N_D = (n_x - 3) \times (n_y - 3) \times 16 \tag{4.12}$$

This is for a single bench with $n_x$ blocks in the x direction and $n_y$ blocks in the y direction.
With the hierarchical encoding the number of assignment constraints, $N_H$, is given in Equation 4.13.

$$N_H = (n_x - 1)(n_y - 1) \times 4 + (n_x - 2)(n_y - 2) \times 4 + (n_x - 3)(n_y - 3) \times 4 \tag{4.13}$$
In practice the complexity and added auxiliary variables overshadow the very slight reduction
in the number of assignment constraints, and this hierarchical encoding is not currently
recommended.
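As a quick arithmetic check of Equations 4.12 and 4.13, the two counts can be compared for a hypothetical single bench of 100 by 100 blocks:

```python
def direct_count(nx, ny):
    # Eq. 4.12: direct 4x4 width sets, 16 assignment constraints each.
    return (nx - 3) * (ny - 3) * 16

def hierarchical_count(nx, ny):
    # Eq. 4.13: 2x2, 3x3, and 4x4 layers with 4 constraints each.
    return ((nx - 1) * (ny - 1) + (nx - 2) * (ny - 2)
            + (nx - 3) * (ny - 3)) * 4

print(direct_count(100, 100))        # 150544 assignment constraints
print(hierarchical_count(100, 100))  # 115256 assignment constraints
```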
corresponds to the value of mining the entire column of blocks above each block. This algorithm
uses depth, indexed with d, instead of the z coordinate because this simplifies the description of
the algorithm and is a more efficient memory order in this application. An example input
transformation is shown in Figure 4.8.
Figure 4.8 Example input transformation from an economic block value model (left) to the
cumulative value model (right). Note the extra row of 'air' blocks and the change of indices.
The size of the cumulative value model and the mining width, $n_w$, are then used to define a
3D volume for the dynamic programming iterations denoted $V_{x,d,w}$. This volume is indexed with
$x \in \{0, 1, \ldots, n_x - 1\}$ for the x coordinate along the section, $d \in \{-1, 0, 1, \ldots, n_z - 1\}$ for the depth
with $-1$ being the special row of air, and $w \in \{0, 1, \ldots, n_w - 1\}$. The w dimension of this volume
serves as a counter which prevents the pit contour from going 'up' unless the minimum mining
width requirement is met. A w index of 0 indicates that no blocks have yet been mined
horizontally. Only when $w = n_w - 1$ is the growing pit permitted to go up a bench.
The two starting cases are:

$$V_{0,-1,0} = 0 \tag{4.16}$$
$$V_{0,0,0} = C_{0,0} \tag{4.17}$$

Where Equation 4.16 corresponds to not mining the first column, and Equation 4.17
corresponds to beginning the pit in the top left and starting to mine immediately. From this point
forward the iteration proceeds down along the depth, d, fastest, then 'across' the mining widths
w, and finally along columns x slowest. The special air row is handled as follows:

$$V_{x,-1,0} = \max\left\{ V_{x-1,-1,0},\ V_{x-1,0,n_w-1} \right\} \tag{4.18}$$
Read Equation 4.18 as: the value is the maximum of the value if the pit continues not mining
($V_{x-1,-1,0}$), or if the pit stops mining provided that $w$ is $n_w - 1$. Note that it is important to
handle the air row in this fashion where it is clearly delineated between not mining (value stays
the same), and stopping mining (where the value becomes that of the best pit to the left). This
idea was missed in the original description from Lerchs and Grossmann, without minimum mining
width constraints, and could lead to incorrect results when the dataset consists of two disparate
pits. The remaining iteration is split between three cases for the different possible values of w.
$$V_{x,d,0} = C_{x,d} + \max_{w'=0}^{n_w-1} \left\{ V_{x-1,d-1,w'} \right\} \qquad d \neq -1 \tag{4.19}$$
Read Equation 4.19 as: we can drop down one bench from any width, provided we reset our
width counter to zero. This is the only way to go down a bench.
$$V_{x,d,w} = C_{x,d} + V_{x-1,d,w-1} \qquad w \ge 1,\ w < n_w - 1 \tag{4.20}$$
Equation 4.20 simply assigns the value over one block on the same bench and increases the
width counter potentially allowing the pit to go upwards by one bench later via Equation 4.21.
$$V_{x,d,n_w-1} = C_{x,d} + \max\begin{cases} V_{x-1,d,n_w-2} \\ V_{x-1,d,n_w-1} \\ V_{x-1,d+1,n_w-1}, & d < n_z - 1 \end{cases} \tag{4.21}$$
The three cases in Equation 4.21 can be understood as follows:
• Continue mining horizontally along the current bench and ‘finish’ the mining width
constraint.
• Continue mining horizontally along the current bench, even though the mining width has
already been satisfied.
• Decrease the depth of the pit by one bench (by considering the value of the block at depth
d+1), and maintain the completed mining width constraint.
The approach requires just under $n_x \times (n_z + 1) \times n_w$ iterations, as some cells of the full
volume are invalid: either because they are unnecessary (for example when $d = -1$ and $w > 0$), or
correspond to unreachable blocks ($d > x - w$), and a few other edge cases. Each iteration looks
at a constant number of previous cells, which is typically less than or equal to three, except
when $w = 0$ where it is required to look at $n_w$ other cells. This requirement to look at $n_w$ other
cells when $w = 0$ does not increase the overall algorithmic complexity. This implies an
algorithmic complexity of $O(n_x (n_z + 1) n_w)$, essentially $O(n^3)$ where $n$ is $\max\{n_x, n_z + 1, n_w\}$. If
one considers the size of the mining width $n_w$ as an input to the problem then the algorithmic
complexity of this approach is technically pseudo-polynomial, as it depends on the value of the
input mining width constraint.
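For reference, the recurrences in Equations 4.16 through 4.21 can be transcribed directly. The following Python sketch assumes $n_w \ge 2$, stores the air row at depth index 0, and closes the pit at the right boundary by applying Equation 4.18 once more; these indexing and boundary choices are assumptions of the sketch rather than part of the original description.

```python
import math

def width_constrained_2d_pit(values, nw):
    """DP sketch of Equations 4.16-4.21 for a 2D section.

    values[x][d] is the economic value of the block in column x at
    depth d (d = 0 is the top bench). Assumes nw >= 2. Returns the
    optimal pit value."""
    nx, nz = len(values), len(values[0])
    NEG = -math.inf

    # Cumulative value model, as in Figure 4.8: C[x][d] is the value
    # of mining column x down to and including depth d.
    C = [[0.0] * nz for _ in range(nx)]
    for x in range(nx):
        total = 0.0
        for d in range(nz):
            total += values[x][d]
            C[x][d] = total

    # V[x][d + 1][w]: depth shifted by one so the air row (d = -1)
    # lives at index 0. Invalid cells stay at -infinity.
    V = [[[NEG] * nw for _ in range(nz + 1)] for _ in range(nx)]
    V[0][0][0] = 0.0      # Eq. 4.16: do not mine the first column
    V[0][1][0] = C[0][0]  # Eq. 4.17: start mining in the top left

    for x in range(1, nx):
        # Eq. 4.18: stay in the air, or surface from the top bench
        # with a completed mining width.
        V[x][0][0] = max(V[x - 1][0][0], V[x - 1][1][nw - 1])
        for d in range(nz):
            # Eq. 4.19: drop one bench from any width, reset w to 0.
            V[x][d + 1][0] = C[x][d] + max(V[x - 1][d][wp]
                                           for wp in range(nw))
            # Eq. 4.20: continue along the bench, counting width.
            for w in range(1, nw - 1):
                V[x][d + 1][w] = C[x][d] + V[x - 1][d + 1][w - 1]
            # Eq. 4.21: finish the width, keep a finished width, or
            # come up one bench with the width already satisfied.
            best = max(V[x - 1][d + 1][nw - 2], V[x - 1][d + 1][nw - 1])
            if d < nz - 1:
                best = max(best, V[x - 1][d + 2][nw - 1])
            V[x][d + 1][nw - 1] = C[x][d] + best

    # Close the pit: apply Eq. 4.18 once more past the last column.
    return max(V[nx - 1][0][0], V[nx - 1][1][nw - 1])
```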
Practically this algorithm has only marginally more value than the original 2D dynamic
programming algorithm discussed in Section 2.2.7. It could ostensibly be applied in the same
manner: on sections of a full 3D block model and subsequently smoothed as described in Johnson,
1971 [73]. Realistically the full 3D approaches described in the subsequent sections are to be
preferred.
4.4 Solution Methods
The primary objective of this chapter is to provide solution strategies which are practical and
applicable to very large block models and realistic datasets. It is insufficient to propose a single
technique or solution methodology and expect it to work well across all possible inputs and
scenarios. Therefore a wide ensemble of techniques is evaluated.
4.4.1 The Ultimate Pit
One possible strategy is to relax Constraints 4.3 and 4.4 and just solve for the ultimate pit
without any mining width constraints at all. If the resulting pit happens to satisfy the mining
width constraints already, either by mining nothing at all, or because the deposit is tabular in
nature, then no additional work is necessary. The addition of minimum mining width constraints
necessarily reduces the value by excluding certain configurations of blocks, but if those
configurations do not occur then the optimal answer will not change.
The conventional ultimate pit is defined to be the smallest maximum valued closure of the
precedence graph, but there may be larger pits with the same value. One option would be to
compute the largest ultimate pit as well, and see if that one has better luck at satisfying minimum
mining width constraints. The pseudoflow algorithm as implemented in Chapter 3 computes the
smallest closure, and it is not very straightforward to modify it to compute the largest - unlike in
the Lerchs and Grossmann algorithm where it suffices to modify a few strict inequalities in the
move toward feasibility procedure. One way to compute the largest ultimate pit is as follows:
• Count the number of nonnegative blocks; call this q.
• Multiply each block value by q, and if the value is nonnegative add 1.
This procedure amounts to increasing all values by some very small epsilon that is large
enough to mine blocks that were previously zero, but not so large as to change the results. It may
be important to take necessary steps to avoid overflow with this technique as the flow values
become very large. The minimum cut of this modified dataset is then the largest minimum cut of
the original problem.
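A minimal sketch of this scaling step, assuming integer block values held in a simple list (Python's arbitrary-precision integers sidestep the overflow concern noted above), follows:

```python
def scale_for_largest_pit(values):
    """Scale block values so the smallest maximum-valued closure of
    the scaled problem is the largest maximum-valued closure of the
    original problem."""
    q = sum(1 for v in values if v >= 0)  # number of nonnegative blocks
    # Each nonnegative block gains a tiny epsilon (the +1 after
    # scaling by q) that is large enough to pull in previously
    # zero-valued blocks but too small to change the optimal closure.
    return [v * q + (1 if v >= 0 else 0) for v in values]
```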
Of course these approaches are naïve, and are not expected to find the optimal result except in
very unlikely situations. These approaches are excluded from the computational comparison
because they are not guaranteed to generate feasible pits, although the original Ultimate-Pit is
documented for comparison.
4.4.2 Floating ‘Fat’ Cones
The floating cone procedure originally described by Pana in 1965 is easily extended to handle
minimum mining width constraints [58, 108]. The floating cone algorithm was described in
Section 2.2.5.
To contend with minimum mining width constraints one can use the operating width as the
base element of each cone instead of a single block. All minimum mining width sets which contain
a positive valued block are a potential cone bottom and are considered in turn. If the full mining
width set and all of its antecedent blocks have a net positive value they are extracted, and the
floating process is continued. Once all of the mining width sets have been considered the process
is repeated until a full scan is completed with no positive valued cones identified and no change to
the overall pit.
The result necessarily satisfies the minimum mining width constraints - it is the union of
satisfying pits. And the result necessarily has a positive value - only positive valued subsets were
incorporated. However this approach does nothing to address the two fundamental flaws of
floating cone methods: overmining and undermining. The optimal results are not expected with
this approach even for simple datasets, but it is included in the computational comparison.
One area of latitude within the floating cone algorithm is the order in which the cone bottoms
are considered. For the evaluation the five following orderings were evaluated:
• In the Float-Bottom-Up solver the cone bottoms are sorted, by their coordinates, ascending
through x, y, z.
• In the Float-Top-Down solver the cone bottoms are sorted, by their coordinates, descending
through x, y, z.
• In the Float-Random solver the cone bottoms are shuffled randomly.
• In the Float-Value-Ascending solver the cone bottoms are sorted by their contained
economic value from lowest to highest.
• In the Float-Value-Descending solver the cone bottoms are sorted by their contained
economic value from highest to lowest.
All of the floating cone solvers suffer from poor runtime and are unable to find the optimal
solution in many scenarios. The poor runtime is generally because for each cone bottom a large
portion of the block model has to be scanned, and even if the cone is not removed in the first pass
it will have to be re-examined again if any other cones are removed that may impact it. A lot of
time is wasted checking negative valued cones with many blocks to see if they are still negative.
One minor wrinkle with the ‘value ascending’ and ‘value descending’ solvers is that simply
reordering the cone bottoms by value once at the beginning of each pass is not totally accurate.
When a pit is removed the value of many other cone bottoms may change which invalidates the
initial ordering. One approach is to store the potential cone bottoms in a heap which is sorted by
contained value. As cones are removed any affected cones can have their new value calculated and
pushed into the heap as well. When a cone comes up for possible extraction its value is first
checked to confirm it is current, and only then possibly extracted. In practice this modification is overkill
for a slow, heuristic approach with explicit downsides.
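For reference, the bookkeeping described above is the classic lazy-update pattern on a binary heap; a Python sketch (with cones represented by any comparable key, and all names illustrative) might look like the following. A min-heap yields the value-ascending order; pushing negated values gives the descending variant.

```python
import heapq

def next_cone(heap, current_value):
    """Pop cone bottoms in value order, tolerating stale entries.

    heap holds (value, cone) pairs; current_value(cone) recomputes
    the cone's contained value after earlier removals."""
    while heap:
        stored, cone = heapq.heappop(heap)
        now = current_value(cone)
        if stored == now:
            return cone  # the stored value is still current
        # Stale entry: an earlier extraction changed this cone's
        # value. Re-insert with the fresh value and keep popping.
        heapq.heappush(heap, (now, cone))
    return None
```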
4.4.3 Geometric Methods
Geometric methods begin with the ultimate pit, and then heuristically work to modify that
pit to satisfy minimum mining width constraints. These methods do not consider the economic
block value of any blocks, and instead are essentially operating with a new objective function that
minimizes the number of blocks changed from the conventional ultimate pit to some new
satisfying pit. The idea being that minimizing the number of blocks changed is a good analogue
for reducing the value as little as possible.
The Subset solver computes the ultimate pit and then creates a new pit consisting of only
those mining widths that were satisfied. As previously mentioned this has the unfortunate
consequence of removing precisely those blocks with a net positive value that support the blocks
above them. However this approach is very straightforward to implement and efficient to
calculate, requiring only a simple linear pass over the end results following pit optimization. In
some respects this approach is similar to the approach adopted by many pit designers faced with
computing a satisfying pit manually.
Similar to how the Subset solver only removes blocks from the ultimate pit in order to create a
feasible solution, the superset solvers only add blocks to the ultimate pit. In this way the ultimate
pit is contained entirely within the final pit. Unlike the subset approach, however, there is no
straightforward way to ensure that the superset pit is as small as possible without resorting to
expensive enumeration or another optimization problem. The Superset-Width solver evaluates
each possible width that could be incorporated into the solution that has at least one block
already mined. Those widths are sorted ascending by how many blocks are required and greedily
incorporated in sequence until the result fully satisfies the constraints. The Superset-Cone solver
performs the same process but counts not only the blocks in the width but all of the
blocks in the entire cone of extraction. This takes longer but generally includes fewer blocks.
The next class of geometric solvers are based on ideas borrowed from mathematical
morphology [117]. There are two fundamental operators in mathematical morphology which have
straightforward analogues for this problem.
• Erosion operates by retaining only the blocks of the current solution such that all of the
widths of which they are a member are fully mined. This is essentially a complete
contraction of the entire pit, even along the pit walls in the higher benches. The resulting
pit following erosion does not necessarily satisfy minimum mining width constraints.
• Dilation operates by incorporating all blocks into the current solution such that at least one
of the blocks within a containing width is mined. This is a complete expansion of the entire
pit, and necessarily satisfies minimum mining width constraints.
These operators can be combined into a cleaning heuristic. In mathematical morphology the
closing is the result from applying dilation followed by erosion. This tends to ‘close’ holes within
the input because the dilation will incorporate new blocks that may not be removed by the
subsequent erosion. The opening is the result from applying erosion followed by dilation. This
operation removes small groups of blocks that are not large enough to survive the erosion
operation. Both of these can be applied to ultimate pit models and yield new ‘cleaned’ pits that
will have fewer (with closing), or no (with opening) minimum mining width violations.
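A set-based sketch of the four operations follows; it assumes explicit width sets on every block (the drawback noted below) and leaves any precedence repair after dilation or erosion to the caller. Here pit and each blocks_of[w] are sets of block identifiers, and widths_of[b] lists the widths containing block b.

```python
def erosion(pit, widths_of, blocks_of):
    """Keep a mined block only if every width containing it is
    fully mined."""
    return {b for b in pit
            if all(blocks_of[w] <= pit for w in widths_of[b])}

def dilation(pit, widths_of, blocks_of):
    """Add every block that shares a width with a mined block."""
    grown = set(pit)
    for b in pit:
        for w in widths_of[b]:
            grown |= blocks_of[w]
    return grown

def opening(pit, widths_of, blocks_of):
    # Erosion then dilation: removes undersized clusters of blocks.
    return dilation(erosion(pit, widths_of, blocks_of),
                    widths_of, blocks_of)

def closing(pit, widths_of, blocks_of):
    # Dilation then erosion: fills small holes within the pit.
    return erosion(dilation(pit, widths_of, blocks_of),
                   widths_of, blocks_of)
```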
These operators are very quick, requiring only a handful of linear scans over the blocks and
widths. However, the closing is not guaranteed to satisfy minimum mining width constraints and
like all other purely geometric approaches does not consider block value when deciding which
blocks to remove or incorporate into the solution. An additional drawback of the mathematical
morphology approaches is that they are not able to take advantage of the optimization described
in Section 4.2.1.2 where the total number of enforcement constraints is reduced by only
incorporating them on the positive blocks. Both erosion and dilation require that all of the
mining width sets, even on non-positive blocks, are explicitly specified.
For these reasons the mathematical morphology based heuristics are not incorporated into the
computational comparison in Section 4.6, however they were implemented. For the unreduced
bauxite dataset on the left in Figure 4.3 the ultimate pit contains 74,412 blocks of which 608 do
not satisfy minimum mining width constraints. The pit following closing contains 75,627 blocks
(an increase of 1,215) of which 485 are unsatisfying (a decrease of 123 blocks). This increase in
mining decreases the contained value from $28,416,592 to $27,673,437, by 2.6%. The pit following
opening contains 72,629 blocks (a decrease of 1,783) and fully satisfies minimum mining width
constraints because opening ends with a dilation operation. This reduction in pit size decreases
the contained value to $26,913,569 which is a reduction of 5.3%.
4.4.4 Meta-heuristic Methods
The appeal of meta-heuristic methods for solving the ultimate pit problem with minimum
mining width constraints is that they can generally obtain reasonable solutions in a relatively
short time. There is, unfortunately, no guarantee that the result is optimal. However, for the very
largest problems, optimality likely remains out of reach for all methods considered in this dissertation.
Meta-heuristics are discussed in more detail in Section 2.5.5.
Many meta-heuristics require subroutines for the following operations: generating random
satisfying solutions, efficiently modifying a given solution, and combining two solutions together
into a third solution. Additionally it may be useful to be able to perform a ‘hill-climbing’ step on
a solution to obtain a very similar solution with a better objective value. Each of these operations
for the ultimate pit problem with minimum mining width constraints are discussed in this section.
Random satisfying pits are generated by randomly assigning values to the width variables and
then applying a ‘flood fill’ or breadth first search to satisfy precedence constraints. The resulting
pits satisfy both minimum mining width constraints and precedence constraints by construction.
The optimal solution could be generated with this procedure although it is unlikely. Most real
world problems have tens of thousands of width variables that are zero or one for a large solution
space. Experience suggests that the optimal solution consists of fewer ones than zeros, so the
random distribution used to initialize the width variables should be skewed.
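A sketch of this generation procedure, with the skewed inclusion probability exposed as p_mine (all names are illustrative assumptions), follows:

```python
import random
from collections import deque

def random_satisfying_pit(width_ids, blocks_of, antecedents,
                          p_mine=0.02, seed=None):
    """Pick width sets at random, then close the result under
    precedence with a breadth-first 'flood fill'."""
    rng = random.Random(seed)
    # Randomly assign width variables; a low p_mine skews the
    # distribution toward mostly-zero assignments.
    frontier = deque(b for w in width_ids if rng.random() < p_mine
                     for b in blocks_of[w])
    pit = set()
    while frontier:
        b = frontier.popleft()
        if b in pit:
            continue
        pit.add(b)
        frontier.extend(antecedents[b])  # blocks above must be mined
    return pit
```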
For completeness, and to verify that the more sophisticated solutions considered in this
dissertation are worthwhile, several of these random solvers are included in the computational
comparison. Each random solver is given 20 minutes to generate and evaluate as many random
pits as possible with the given proportion of zeros. The final pit is just the best out of those
generated within the time limit. In practice this approach is primarily used within any other
meta-heuristics to generate initial solutions.
One method for slightly modifying or mutating given solutions is to select a nearby width, for
example one such that at least one of its contained blocks is mined, and then mine the rest of it
along with any necessary antecedents. This grows the pit a small amount by incorporating
additional blocks, and is useful within the context of simulated annealing or a genetic algorithm.
One advantage of this modification is that it does not require regenerating the entire pit
and all constraints remain satisfied. The opposite form of mutation, where blocks are removed, is
more difficult. Removing blocks, even if done by removing an entire width at a time, may yield a
solution that no longer satisfies precedence or minimum mining width constraints. Alternatively a
‘shrinking’ approach could assign the value of a width variable to zero - but this requires
completely regenerating the pit from the remaining widths.
A simple crossover operation is to stochastically take width assignments from each parent
solution and generate the resulting pit. This is a uniform crossover. Again, the main variables of
interest are the auxiliary width variables, and not the block variables which are harder to work
with directly and can instead be derived from any given width assignment.
Hill-climbing is achieved by evaluating very nearby solutions in a manner similar to the floating
cone algorithm. Any width variables which are currently assigned zero can be evaluated with a
linear search to identify the contained value of the unmined blocks. If that value is positive then
they are included in the solution. This suffers from the same overmining and undermining issues
present in all floating cone approaches but does yield a higher valued solution, and can be used to
improve a solution within its local neighborhood.
These techniques are implemented in the Evolutionary solver. This genetic algorithm
maintains a population of elites (high valued solutions from previous generations), immigrants
(fresh randomly generated solutions), cross-overs (combinations of two elites), and mutants (small
changes to elites). At each iteration the candidate solutions, which are pits formed from width
assignments, are partially sorted and the lower quality solutions are discarded. Then the
appropriate number of cross-overs, mutants, and immigrants are generated.
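The generational loop of the Evolutionary solver can be summarized with the following skeleton; the operator functions and the population split are illustrative placeholders rather than the tuned values used in the comparison.

```python
import random

def evolutionary_solver(evaluate, random_solution, crossover, mutate,
                        pop_size=40, n_elite=10, n_mutant=10,
                        n_immigrant=5, generations=100):
    """Generational loop with elites, cross-overs, mutants, and
    immigrants (parameter values illustrative)."""
    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best solutions; discard the lower quality ones.
        pop.sort(key=evaluate, reverse=True)
        elites = pop[:n_elite]
        n_cross = pop_size - n_elite - n_mutant - n_immigrant
        crosses = [crossover(*random.sample(elites, 2))
                   for _ in range(n_cross)]
        mutants = [mutate(random.choice(elites))
                   for _ in range(n_mutant)]
        immigrants = [random_solution() for _ in range(n_immigrant)]
        pop = elites + crosses + mutants + immigrants
    return max(pop, key=evaluate)
```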
Meta-heuristics generally suffer from having a large parameter space which all have a
non-negligible impact on performance and the resulting solution quality. A genetic algorithm
requires the user to specify the appropriate population size, crossover rate, number of generations,
and several other parameters. Simulated annealing requires an appropriate cooling schedule which
is non-obvious and difficult to get correct. For the computational comparison in Section 4.6
reasonable parameters were sought, but the solutions were found to be of lower quality than the
optimization based techniques which follow. Additional effort may be warranted to determine
better parameters to use, higher quality unit operations, or more advanced techniques for
population control and improved convergence.
4.4.5 Commercial Solvers
For completeness the entire problem can be given to a commercial branch and bound integer
programming solver such as CPLEX or Gurobi [139, 140]. These solvers first apply a 'presolve'
operation to transform the model into an equivalent model such that the presolved model has the
same feasibility and bounded properties, and such that the objective value of the presolved model
is identical to the original model. Then the solution to the linear programming relaxation is
sought typically with primal simplex, dual simplex, or by an interior point method such as the
barrier algorithm with crossover. Finally branch and bound is used to identify the optimal
solution by fixing certain variables to integer values and solving many more linear relaxations.
These solvers are the products of extensive work and research into the general integer
programming paradigm and perform well across all manner of inputs. However, in this context,
they are not designed to handle this specific problem and, unless the problem is very small, do
not perform well. The Gurobi solver is present in the computational comparison with a time limit
of 20 minutes.
4.4.6 Lagrangian Relaxation Guided Search
The primary integer programming formulation developed in Section 4.2.1 is very similar to the
original ultimate pit problem by construction. The main complication stems from the addition of
the enforcement constraints in Constraint 4.4. It seems natural, therefore, to remove those
constraints by dualizing them into the objective and penalizing any transgressions following the
tenets of Lagrangian relaxation introduced in Section 2.5.3. One notable aspect of this approach
is that the second half of the minimum mining width constraints (the assignment constraints –
Constraint 4.3) do not need to be dualized because they are exactly the same as the precedence
constraints.
This approach removes, in some sense, the troublesome enforcement constraints but introduces
several other problems. It is now necessary to determine appropriate dual multipliers, or penalties,
on the dualized enforcement constraints which can prove difficult and computationally expensive.
Additionally, this approach takes aim at the linear relaxation of the original problem and is not
guaranteed to determine the optimal integer solution. However when carefully implemented with
necessary extensions this approach proves to be quite effective even for very large problems.
4.4.6.1 Lagrangian Formulation
The original formulation and Lagrangian relaxation with the enforcement constraints dualized
into the objective function are as follows.
Sets:
• $b \in \mathcal{B}$, the set of all blocks.
• $\hat{b} \in \hat{\mathcal{B}}_b$, the set of antecedent blocks that must be mined if block $b$ is to be mined.
• $w \in \mathcal{W}$, the set of all mining widths.
• $\bar{b} \in \bar{\mathcal{B}}_w$, the set of blocks that are within mining width $w$.
• $\bar{w} \in \bar{\mathcal{W}}_b$, the set of mining widths of which block $b$ is a member.

Parameters:
• $v_b$, the economic block value of block $b$.
• $\lambda_b$, the dual multiplier for the enforcement constraint associated with block $b$. The best
values for these 'parameters' are sought. Note $\lambda_b \ge 0\ \forall b \in \mathcal{B}$.

Variables:
• $X_b$, 1 if block $b$ is mined, 0 otherwise.
• $M_w$, 1 if mining width $w$ is satisfied, 0 otherwise.

Original Formulation:

$$\text{maximize} \quad \sum_{b \in \mathcal{B}} v_b X_b \tag{4.22}$$
$$\text{s.t.} \quad X_b - X_{\hat{b}} \le 0 \qquad \forall b \in \mathcal{B},\ \hat{b} \in \hat{\mathcal{B}}_b \tag{4.23}$$
$$M_w - X_{\bar{b}} \le 0 \qquad \forall w \in \mathcal{W},\ \bar{b} \in \bar{\mathcal{B}}_w \tag{4.24}$$
$$X_b - \sum_{\bar{w} \in \bar{\mathcal{W}}_b} M_{\bar{w}} \le 0 \qquad \forall b \in \mathcal{B} \quad [\lambda_b] \tag{4.25}$$
$$0 \le X_b, M_w \le 1 \qquad \forall b \in \mathcal{B},\ \forall w \in \mathcal{W} \tag{4.26}$$
Lagrangian Relaxation:
$$\text{maximize} \quad \sum_{b \in \mathcal{B}} (v_b - \lambda_b) X_b + \sum_{w \in \mathcal{W}} \sum_{\bar{b} \in \bar{\mathcal{B}}_w} \lambda_{\bar{b}} M_w \tag{4.27}$$
$$\text{s.t.} \quad X_b - X_{\hat{b}} \le 0 \qquad \forall b \in \mathcal{B},\ \hat{b} \in \hat{\mathcal{B}}_b \tag{4.28}$$
$$M_w - X_{\bar{b}} \le 0 \qquad \forall w \in \mathcal{W},\ \bar{b} \in \bar{\mathcal{B}}_w \tag{4.29}$$
$$0 \le X_b, M_w \le 1 \qquad \forall b \in \mathcal{B},\ \forall w \in \mathcal{W} \tag{4.30}$$
Function 4.27 is the objective function which contains the simplified and collected dualized
enforcement constraints. Inequality 4.28 specifies the unchanged precedence constraints, and
Inequality 4.29 the unchanged assignment constraints. Finally Inequality 4.30 specifies the
unchanged variable bounds. Note that this problem was, and remains, a maximization problem
and because the enforcement constraints were of the form $AX \le 0$, their dual multipliers, $\lambda_b$, are
restricted to be non-negative.
In order to better facilitate the following steps the objective, Function 4.27, slightly changes
the indices and range on the summation in Inequality 4.25. This is to collect the $\lambda_b$ variables on a
per mining width basis, which is more clear. This small algebraic manipulation does not change
the results.
4.4.6.2 Interpretation
First, recognize that the resulting Lagrangian relaxation model, when the λs are fixed, is of
precisely the same structure as the original ultimate pit problem without minimum mining width
constraints. We can take the dual of the relaxed Lagrangian which can be solved with the
pseudoflow algorithm, and the highly efficient MineFlow implementation developed in Chapter 3.
In the new network the M variables become nodes, and the assignment constraints become
additional precedence constraints.
This relaxation has a nice interpretation. The original enforcement constraints are on a per
block basis, and require each mined block to be a part of a completely mined out width. When an
enforcement constraint in the original formulation is binding then its associated dual has a
positive value, now dubbed $\lambda_b$. This value has units which are equivalent to those of the objective
function and can be understood to be the 'price' of any given enforcement constraint. That is, it
is exactly the cost associated with satisfying minimum mining width constraints for that block.
This cost must be paid by the original block value. So the new ‘block values’ in our relaxation
are $v_b - \lambda_b$. If $\lambda_b$ is too large, that is, the price of satisfying enforcement constraints exceeds some
unknown threshold, then the block will not be mined and $X_b$ will be zero.
This payment goes towards the width variables of which that particular block is a member.
This is the second part of the new objective function (Function 4.27). Mining widths originally
contain no inherent value, but they adopt the value that is provided by any of their contained
blocks in order to pay for any of the negative blocks which they contain.
Thus positive valued blocks are able to pay to extract some of their neighboring blocks in order
to satisfy minimum mining width constraints, albeit indirectly. An example helps clarify this.
4.4.6.3 Example
Consider the very small ultimate pit problem with minimum mining width constraints in
Figure 4.9. This example has seven blocks, six precedence constraints, two mining width sets
(each consisting of two blocks), and four assignment constraints. For this example consider only a
single enforcement constraint on block $X_2$, which is $X_2 - M_1 - M_2 \le 0$.
Figure 4.9 A very small example ultimate pit problem with minimum mining width constraints.
Left: the seven block variables and two auxiliary variables. Right: The block values.
Applying the Lagrangian relaxation to the enforcement constraint and dualizing the resulting
model yields the flow models in Figure 4.10. On the left the dual multiplier, $\lambda_2$, for block $X_2$ is
zero, and for the right $\lambda_2 = 15$. When $\lambda_2 = 0$ the result is the original ultimate pit, and when
$\lambda_2 = 15$ the 'value' of the central block is reduced by 15 and that value is routed 'back' to the
mining widths. Solving this flow model for the ultimate pit now mines the block to the left and
the necessary overlying block ($X_1$ and $X_4$) which is the optimal solution.
Figure 4.10 The small example's corresponding flow problems. Left: With a dual multiplier of 0.
Right: With a dual multiplier of 15.
The value of 15 for the dual multiplier was somewhat arbitrarily chosen, and deserves more
careful attention. In this example there are two breaking points. When $\lambda_2 \le 10$ the solution to
the flow problem is the original ultimate pit, and does not satisfy minimum mining width
constraints. When $\lambda_2 > 20$ the solution to the flow problem is too large, corresponding to mining
all seven blocks, which is not the best solution either. This corresponds to not using enough funds
from $X_2$ to pay for minimum mining width constraints and using too many funds respectively.
This problem is compounded for larger and more complicated models where determining the
correct dual multipliers is a serious challenge.
Another concern made clear through this example is an apparent imbalance in the change in
value. The block value associated with $X_2$ is reduced by 15, but the total available value increases
by 30: 15 for each mining width. This is because the enforcement constraint ($X_2 - M_1 - M_2 \le 0$)
when dualized into the objective function has a coefficient of 1 on both $M$ variables. Theoretically
this is the correct thing to do, but it seems slightly off. This is a consequence of having or
constraints, where no particular minimum mining width constraint is preferred over another.
Finally, when the full linear relaxation of this model is solved and the dual on the enforcement
constraint is calculated it has a value of 10. This is the most correct dual multiplier to use, as it
corresponds to the exact cost of the enforcement constraint at optimality. If the constraint were
not in place, then no M variables would be 1 and the left hand side of the example (the blocks
with values 20 and -30) would not be mined. 10 dollars is ‘lost’ to create the more operational pit
design with a mining width of two blocks. However, if this dual multiplier is plugged into the
relaxation precisely, the identified pit will be too small (because pseudoflow always identifies the
smallest maximum valued closure), and will not satisfy the minimum mining width constraints.
4.4.6.4 The Lagrangian Relaxation Guided Search
Applying Lagrangian relaxation to the enforcement constraints inspires a new approach to
solving the ultimate pit problem with minimum mining width constraints. The Lagrangian
relaxation guided search follows in Algorithm 4.
Following the developments in Chapter 3 it is now possible to solve very large ultimate pit
problems rapidly, especially when only a few block values are changed between iterations. This is
necessary for this Lagrangian relaxation inspired approach because the number of ‘blocks’ has
increased substantially (each mining width set becomes another ‘block’) and the number of
precedence constraints has also increased (each assignment constraint is another precedence
constraint).
At each iteration, if the computed pit already satisfies the minimum mining width constraints it
might not be the optimal solution, because the dual values might be too high and certain blocks
might oversatisfy. In this case the associated dual multipliers on the oversatisfying blocks should
be decreased to guide the solution towards the situation where each minimum mining width
constraint is only just satisfied. If the dual multiplier is already zero, no action is necessary as this
corresponds to the situation where the minimum mining width constraints are satisfied ‘for free’.
If the computed pit does not satisfy the minimum mining width constraints then there are some
blocks that are mined, but not enough of the blocks near them are mined. Therefore the dual
multiplier on these blocks should be increased in order to reduce their value and use that value to
fund neighboring blocks and satisfy the minimum mining width constraints. So that this iteration
is not wasted it is prudent to evaluate nearby solutions that do satisfy the minimum mining width
constraints, either by computing subsets, supersets, or other nearby solutions as discussed in the
preceding sections. If any of these nearby feasible solutions are satisfying and better than the best
so far, they are recorded as the current best solution.
Algorithm 4: Algorithm to compute an ultimate pit which satisfies minimum mining
width constraints.

Initialize a new flow network where every block and every mining width set is a node;
for All positive blocks do
    Connect the source to this block with an arc whose flow and capacity are equal to the block value;
for All negative blocks do
    Connect this block to the sink with an arc whose flow and capacity are equal to the absolute value of the block value;
for All mining width sets do
    Connect the source to this node with an arc whose initial flow and capacity are equal to zero;
Initialize all dual multipliers, one for each positive block, to zero;
Initialize current best solution to the empty set;
while Maximum iterations have not been completed do
    Solve for the minimum cut using pseudoflow;
    if Minimum cut satisfies minimum mining width constraints then
        if Minimum cut is better than current best solution then
            Set the current best solution to this cut;
    else
        for Several nearby satisfying pits do
            if This pit is better than current best solution then
                Set the current best solution to this pit;
    for Each positive block do
        if The block is mined, but does not satisfy minimum mining width constraints then
            Increase the dual multiplier;
            Decrease the capacity on the arc from the source to this block by the increase;
            Increase the capacity on the arcs from the source to all containing mining width sets by the increase;
        if The block is mined, and oversatisfies minimum mining width constraints then
            Reduce the dual multiplier;
            Increase the capacity on the arc from the source to this block by the decrease;
            Decrease the capacity on the arcs from the source to all containing mining width sets by the decrease;
    if No change in dual multipliers then
        Break;
Return the best solution;
4.4.6.5 Updating the Dual Multipliers
Each dual multiplier, on each positive block, is bounded theoretically from below by zero and
heuristically from above by the original block value. It is natural to begin with dual multipliers of
zero, because the problem in this case is the same as the original ultimate pit problem and if this
satisfies no further work is required. Unsatisfying blocks must have their duals increased, but by
how much is unclear. Each dual multiplier does not have an independent effect on the solution.
The point of this approach is that blocks can work together to satisfy minimum mining width
constraints and keep duals as low as possible. Working through the interrelationships between
mining widths to identify the best possible solution is what makes this problem non-trivial.
The subgradient optimization approach, discussed in Section 2.5.3, tends to work well in
practice and can be simplified for this specific problem. In Equation 2.17, b is zero, so first
compute the helper vector C, which is AX^t. The helper vector encodes the mining width
satisfaction on a per block basis, that is: when C_b is zero the block is either not mined or perfectly
satisfied with exactly one containing width mined. When C_b is one, then the block does not
satisfy mining widths, and finally when C_b is negative then the block oversatisfies.

    C_b = X_b − Σ_{w̄ ∈ W̄_b} M_w̄                                          (4.31)

The new dual multiplier is then:

    λ_b^{t+1} ← min{ max{ λ_b^t + (v_b × s_t × C_b) / ||C||, 0 }, v_b }    (4.32)

where s_t is the step size for iteration t. By construction 0 < s_t ≤ 1. In practice starting s_t
around 0.4 and setting s_{t+1} ← s_t × 0.8 every few iterations has given good results, although it is a
parameter to consider and potentially investigate for specific problems.
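To make the update concrete, the following minimal C++ sketch implements the per-block update of Equation 4.32; the Block structure and function names are illustrative assumptions, not code from the actual solvers.

#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative stand-in for a positive block in the Lagrangian guided search.
struct Block {
    double value;   // v_b, the original positive block value
    double lambda;  // current dual multiplier lambda_b
    double C;       // mining width satisfaction C_b (Equation 4.31)
};

// One subgradient step over all positive blocks (Equation 4.32).
void updateDuals(std::vector<Block>& blocks, double stepSize) {
    double normC = 0.0;  // ||C||, the Euclidean norm of the satisfaction vector
    for (const Block& b : blocks) normC += b.C * b.C;
    normC = std::sqrt(normC);
    if (normC == 0.0) return;  // every block satisfied exactly; nothing to do

    for (Block& b : blocks) {
        double delta = b.value * stepSize * b.C / normC;
        // Clamp between the theoretical lower bound (0) and the
        // heuristic upper bound (the original block value v_b).
        b.lambda = std::min(std::max(b.lambda + delta, 0.0), b.value);
    }
}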
4.4.6.6 Discussion
The Lagrangian relaxation guided search is motivated by duality theory and has a strong
intuitive justification. It also works well in practice. The Lagrangian-Subgradient solver in the
computational comparison implements this approach, and is very successful.
The efforts to improve the pseudoflow algorithm and develop a fast, flexible solver in Chapter
3 are highly relevant to the practical success of this approach. Many problems require solving
hundreds or even thousands of large constructed ultimate pit problems, and a fast ultimate pit
solver is essential. However, there are additional avenues to explore that could potentially
improve the practical performance of this method.
Subgradient optimization is a well suited approach for determining and updating the dual
multipliers; however, in this problem it can be prone to oscillation. When a block does not satisfy
minimum mining width constraints its dual is increased – but if this increase is too large then it
is likely to oversatisfy on the next iteration, and the dual must then be reduced. To a certain extent
this is mitigated by using a steadily reducing step size, but it may be possible to smooth the dual
multipliers more aggressively, potentially by averaging the dual multipliers over the last few
iterations.
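As a rough, untested illustration of that averaging idea, the duals could be smoothed over a short history; the circular-buffer design and the history length of four below are arbitrary assumptions, and this smoothing was not part of the evaluated solvers.

#include <array>
#include <numeric>

// Hypothetical smoother: returns the mean of the last kHistory raw duals.
// Early iterations average against the zero-initialized slots, which further
// damps the multipliers while the search warms up.
constexpr int kHistory = 4;

struct SmoothedDual {
    std::array<double, kHistory> history{};  // circular buffer of raw duals
    int next = 0;

    double push(double rawLambda) {
        history[next] = rawLambda;
        next = (next + 1) % kHistory;
        return std::accumulate(history.begin(), history.end(), 0.0) / kHistory;
    }
};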
The subgradient optimization approach elects to update all of the dual multipliers for every
block that is currently unsatisfying or oversatisfying on each iteration. In certain datasets this
may be contributing to the oscillation and slower convergence. It may be worthwhile to update a
subset of the dual multipliers on some steps, and instead elect to leave certain dual multipliers
fixed for a while.
One idea that was not evaluated is to use a meta-heuristic to ‘optimize’ the dual multipliers.
A population of dual multipliers could be maintained, and combined through, for example, a
particle swarm optimization methodology that may yield higher quality duals faster. Particle
swarm optimization could take advantage of the subgradient in addition to the objective value.
This is a potential area for future research.
4.4.7 The Bienstock Zuckerberg Algorithm
In Chapter 5 the Bienstock-Zuckerberg algorithm is used to solve the general block scheduling
problem with a wide range of constraints. In the block scheduling problem each block can
generally be assigned to one of several different destinations, and mined in one of several different
time periods. Additionally there are different classes of constraints to enforce capacity
requirements, blending, and handle various other concerns. However, the block scheduling
problem is a superset of the ultimate pit problem in terms of formulation complexity. So by
4.5.1 Inner Bound
An algorithm to compute an inner bounding pit for the ultimate pit problem with minimum
mining width constraints follows in Algorithm 5.
Algorithm 5: Algorithm to compute an inner bounding pit which satisfies minimum
mining width constraints.

Initialize the set of current positive blocks to be the set of all positive blocks;
while The set of current positive blocks is not empty do
    Solve for the minimum cut using pseudoflow;
    Remove all positive blocks which do not satisfy minimum mining width constraints;
Return the last minimum cut;
In practice this algorithm very rarely terminates with the empty set for realistic models.
There generally exists some set of positive valued blocks that satisfy the minimum mining width
constraints and are themselves a valid self-supporting ultimate pit. These blocks necessarily are
within the final solution and can be removed to reduce the problem size substantially.
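A minimal C++ sketch of Algorithm 5 follows; the PitSolver interface is a hypothetical wrapper around the pseudoflow solver, and the explicit fixed-point check simply makes the implicit termination condition concrete.

#include <set>
#include <utility>

// Assumed interface over the pseudoflow solver; names are stand-ins.
struct PitSolver {
    virtual std::set<int> solveMinimumCut(const std::set<int>& positives) = 0;
    virtual bool satisfiesWidth(int block, const std::set<int>& pit) = 0;
    virtual ~PitSolver() = default;
};

std::set<int> innerBound(PitSolver& solver, std::set<int> positiveBlocks) {
    std::set<int> lastCut;
    while (!positiveBlocks.empty()) {
        lastCut = solver.solveMinimumCut(positiveBlocks);
        std::set<int> kept;
        for (int b : positiveBlocks)
            if (solver.satisfiesWidth(b, lastCut)) kept.insert(b);  // keep satisfying blocks
        if (kept.size() == positiveBlocks.size()) break;  // fixed point: nothing removed
        positiveBlocks = std::move(kept);
    }
    return lastCut;
}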
4.5.2 Outer Bound
The outer bound can be computed by solving a single constructed ultimate pit instance
following the Lagrangian relaxation approach discussed in Section 4.4.6. Simply set each dual
multiplier to the maximum possible value for each block (λ_b ← v_b), and solve. The resulting pit
necessarily satisfies minimum mining width constraints, and could never be any larger. This
substantially reduces the problem size in all case studies considered.
Note that the original ultimate pit is not necessarily contained within this bound. Only the
ultimate pit which satisfies minimum mining width constraints is.
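The corresponding outer bound is a single solve; in this sketch the pseudoflow call is abstracted behind an assumed callback that accepts per-block dual multipliers.

#include <functional>
#include <set>
#include <vector>

// solvePit is a hypothetical hook that builds and solves the constructed
// ultimate pit problem of Section 4.4.6 for the given duals.
std::set<int> outerBound(
    const std::vector<double>& blockValues,
    const std::function<std::set<int>(const std::vector<double>&)>& solvePit) {
    std::vector<double> duals(blockValues.size(), 0.0);
    for (std::size_t b = 0; b < blockValues.size(); ++b)
        if (blockValues[b] > 0.0) duals[b] = blockValues[b];  // lambda_b <- v_b
    return solvePit(duals);
}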
4.6 Solution Comparison
Several different approaches to the ultimate pit problem with minimum mining width
constraints were proposed in this chapter. A computational comparison was executed in order to
draw broad initial conclusions about the proposed approaches' effectiveness and suitability in real
world applications. This comparison is not comprehensive, and there are several areas where it is
very difficult to make a proper and fair evaluation of the solvers. It is important to note that the
comparison presented herein might speak more to the quality of the author’s implementations
than anything else. As demonstrated in Chapter 3, it is generally possible to apply additional
software engineering effort and some theoretical knowledge in order to tremendously decrease the
runtime of any given solver.
The solvers considered in this comparison are summarized in Table 4.1. All of the solvers,
except for Gurobi, were implemented by the author in approximately 9,000 lines of C++ code.
Reasonable effort has been expended to develop the solvers with an emphasis on speed and
minimizing memory use while accurately implementing the relevant ideas. Where appropriate,
reasonable parameter values were sought through experimentation, such as the step size schedule
in the Lagrangian search or the population sizes and crossover rate for the evolutionary solver. In
solvers where convergence is not guaranteed a time limit of 20 minutes was imposed.
Table 4.1 The solvers used in the computational comparison
Solver Name Section Brief Description
Ultimate-Pit Section 4.4.1 Solve for the ultimate pit, and hope it satisfies.
Float-Bottom-Up Section 4.4.2 Floating cone heuristic from the bottom up.
Float-Top-Down Section 4.4.2 Floating cone heuristic from the top down.
Float-Random Section 4.4.2 Floating cone heuristic in a random order.
Float-Value-Ascending Section 4.4.2 Floating cone heuristic by width value ascending.
Float-Value-Descending Section 4.4.2 Floating cone heuristic by width value descending.
Subset Section 4.4.3 Largest subset of ultimate pit.
Superset-Width Section 4.4.3 Superset of ultimate pit, by evaluating widths.
Superset-Cone Section 4.4.3 Superset of ultimate pit, by evaluating cones.
Random-3 Section 4.4.4 Random search solver, 75% zeros.
Random-15 Section 4.4.4 Random search solver, 94% zeros.
Random-31 Section 4.4.4 Random search solver, 97% zeros.
Evolutionary Section 4.4.4 Evolutionary algorithm solver.
Gurobi Section 4.4.5 The Gurobi Version 9.5.2 commercial IP solver.
Lagrangian-Subgradient Section 4.4.6 The Lagrangian relaxation guided solver.
Bienstock-Zuckerberg Section 4.4.7 The Bienstock-Zuckerberg algorithm.
Combined Section 4.4.8 The Combined Lagrangian and BZ algorithm.
Six datasets were collected ranging from a very small 2D synthetic dataset to a large realistic
3D block model from a real proposed gold mine, Table 4.2. The same dataset can be used with
many different minimum mining width templates or precedence constraints. However, the
primary focus of this chapter is on the minimum mining width constraints - so precedence
constraints are held constant at 45° with nine benches of connections. In the computational
comparison the dataset name has a suffix appended to indicate the minimum mining width size.
That is, bauxitemed 2x2 is the BauxiteMed dataset with 2 × 2 minimum mining width
constraints. Similarly biggold 5x5c is the BigGold dataset with a 5 × 5 minimum mining width
template with the corners removed. The five different mining width templates considered are the
suitable templates shown in Figure 4.2.
Table 4.2 The datasets collected for the computational comparison
Dataset Name    Block Model Size    Brief Description
Sim2D76         75 × 1 × 40         Small synthetic 2D dataset simulated with sequential Gaussian simulation.
BauxiteMed      120 × 120 × 26      3D bauxite dataset estimated with ordinary Kriging.
MclaughlinGeo   140 × 296 × 68      Well known Mclaughlin Dataset.
CuCase          170 × 215 × 50      A laterally expansive copper dataset simulated with sequential Gaussian simulation.
CuPipe          180 × 180 × 85      A steeply dipping porphyritic copper dataset.
BigGold         483 × 333 × 101     A large finely estimated gold dataset.
The high performance server used to run all experiments is called isengard, and is a large
Linux machine made available remotely for all Colorado School of Mines students. The server has
48 Intel(R) Xeon(R) E5-2690 v3 @ 2.60GHz processors and 377 Gigabytes of RAM, although not
all of this computing power was used exclusively for this comparison. All of the solvers, excluding
Gurobi, are single-threaded; multi-threading them is a useful potential avenue for future research. Even the
Lagrangian guided solver could be multi-threaded by evaluating different paths and delegating the
local search subroutine to another thread.
4.6.1 Problem Bounding
Inner bounds and outer bounds were computed for the raw datasets following the procedures
described in Section 4.5. Relevant statistics are tabulated in Table 4.3, including the compute
time required to prepare the smaller equivalent problem. For the original ultimate pit problem
without minimum mining width constraints bounding is less necessary because solving for the
ultimate pit is already very fast. These steps are highly effective at making the problems more
manageable, and are a necessary part of the overall process for realistic datasets.
The effectiveness of the bounding procedure necessarily decreases as the size of the minimum
mining width template increases. It is much more difficult to identify a large inner bound when
the subset step in the bounding procedure removes many more blocks.
The large reduction in problem size is expected for realistic datasets constructed with regular
block models. Many practitioners use large models that extend beyond the ultimate pit limits
with predominately negative valued blocks. These blocks are identified during the outer bounding
process and quickly removed.
The bounding procedure described in Section 4.5 is also generally very quick. Even with the
largest model considered with the largest minimum mining width constraint template the process
completes in less than two minutes. This is because the bounding procedure leverages the
improvements to ultimate pit optimization developed in Chapter 3, and the bounds are computed
by solving several specially constructed ultimate pit problems.
Recall that the number of enforcement constraints is the same as the number of positive
blocks, which is a useful metric for the size of the problem, and can be used to evaluate the
effectiveness of the bounding procedure.
4.6.2 Solvers Performance vs Gurobi
Gurobi was able to solve eight of the 23 datasets to optimality within 20 minutes each. The
largest of these, bauxitemed 4x4c, contains 1,104,542 rows, 92,805 columns, and 2,335,746
nonzeros in the constraint matrix. These datasets are of interest in evaluating the proposed
solution methods because the optimal answer is known.
The small 2D datasets (sim2d76 3x1, sim2d76 5x1, and sim2d76 8x1) are so small that all solvers,
even the general purpose solver Gurobi, finish in under a second, so the only relevant metric for
comparison is the solution quality, or objective function value, which is summarized in Table 4.4.
The ultimate pit value is included, even though the ultimate pit does not satisfy minimum mining
width constraints, in order to provide additional information.
The most relevant takeaway from this comparison is that for very small models the
Lagrangian guided search solver, the evolutionary meta-heuristic approach, and the BZ solver are
all able to find the optimal answer, whereas the other primarily geometric approaches often
perform quite poorly. One interesting outcome was that for sim2d76 8x1 the floating cone
approach was able to identify the optimal solution.
Table 4.4 Value achieved on the three very small 2D datasets by each solver.
Solver sim2d76 3x1 sim2d76 5x1 sim2d76 8x1
Ultimate-Pit 691 2,791 6,301
Gurobi 482 2,424 5,114
Lagrangian-Subgradient 482 2,424 5,114
Evolutionary 482 2,424 5,114
Bienstock-Zuckerberg 482 2,424 5,114
Combined 482 2,424 5,114
Float-Bottom-Up 234 2,262 5,114
Float-Top-Down 234 2,215 2,237
Float-Random 234 2,262 5,114
Float-Value-Ascending 234 2,262 1,632
Float-Value-Descending 234 2,262 1,632
Subset 90 671 4,181
Superset-Width 455 0 0
Superset-Cone 238 2,188 668
Random-3 0 0 0
Random-15 152 1,837 4,471
Random-31 152 1,053 4,471
In these instances the time taken and objective function value achieved are relevant; however, to
save space several of the solvers are combined together and only the best result is reported.
There are a few important takeaways from these larger datasets that are still small enough for
branch and bound. In all cases the Lagrangian relaxation guided solver, the Bienstock-Zuckerberg
algorithm, and the combined approach are better than the geometric and other meta-heuristic
approaches developed in this chapter. The evolutionary solvers effectiveness has decreased rapidly
as the problem size has increased, in all cases even under performing relative to the floating cone
based algorithm. This is expected, because the evolutionary solver does not fundamentally take
advantage of the structure of the problem in the way that the Lagrangian relaxation guided
solver, BZ algorithm, or the floating cone algorithm does.
The Lagrangian relaxation guided solver is able to achieve within 8% of optimal for all
problems in terms of objective function value. It is also generally faster, especially for the larger
datasets where the branch and bound algorithm present in Gurobi begins to suffer from its
exponential complexity.
The BZ solver is able to achieve within 7% of optimal for all problems, and in many cases is
much closer. BZ consistently outperforms the Lagrangian relaxation guided solver in runtime for
these small problems, but achieves a lower objective value in two of the five datasets.
The combined approach is the best of those considered. This is to be expected, as any benefits
from the Lagrangian relaxation guided solver are adopted directly as possible solutions into the
set of orthogonal columns within the broader context of the BZ algorithm.
4.6.3 Solver Performance On Large Datasets
The fifteen remaining datasets were too large for Gurobi, but the available results are
summarized in Table 4.6. Where possible the linear relaxation value as computed by Gurobi, and
the BZ algorithm, is reported. In the nine cases that Gurobi was able to solve the linear
relaxation it matched with the value reported by the prototype BZ algorithm. This gives an
approximate cost of the minimum mining width constraints when compared to the ultimate pit
value, but in general the linear relaxation of the ultimate pit problem with minimum mining
widths takes advantage of partially mining blocks to contribute to the enforcement constraints
while minimizing excess waste. It is not clear what the actual gap between the linear relaxation
objective and unknown optimal integer objective is.
The best value achieved by the Lagrangian relaxation guided solver, the BZ algorithm using
the Aras procedure for integerization, and the combined approach is reported alongside how long
it took to achieve the result. A time limit of 20 minutes was enforced for all solvers.
The cupipe dataset exhibits strange behavior. Gurobi is unable to even solve any of the linear
relaxations after 20 minutes, but the Lagrangian search and BZ algorithm terminate after only a
handful of iterations. Owing to the steeply vertically dipping nature of the ore body it turns out
that the ultimate pit very nearly satisfies the minimum mining width constraints. For the
cupipe 2x2 dataset only five blocks are initially unsatisfied, and their duals are quickly
determined. In the cupipe 4x4c dataset it appears that the optimal solution is to mine nothing,
which due to the initial inner bounding process implies that the solved inner subset pit was
optimal, or at least very nearly so. The other datasets consist of many hundreds or thousands of
unsatisfying blocks in the ultimate pit which interact in complicated overlapping ways that the
algorithms must untangle.
The biggold 4x4c and biggold 5x5c are the only cases where limiting the 20 available
minutes to a single approach, instead of the combined solver, was able to achieve a higher
objective value. For these two datasets the Lagrangian solver obtained a slightly higher value.
4.7 Discussion
This chapter has focused on the development and analysis of a novel, efficient formulation of
the conventional ultimate pit problem with minimum mining width constraints. Several methods,
ranging from geometric heuristics to full blown iterative optimization approaches based on
Lagrangian relaxation, were developed and compared with general purpose commercial solvers.
The BZ algorithm with operational constraints was also evaluated, although the details are left to
Chapter 5. A strong emphasis was placed on ensuring that the developed approaches and
techniques were readily applicable to real world datasets, across a wide range of deposits and
mining operations, and in realistic applications.
Both the Lagrangian relaxation guided search and the BZ algorithm are able to compute high
quality results in a reasonable amount of time on large datasets which exceed the capabilities of
more general purpose solvers. These developments will allow open-pit mine planning engineers to
make better decisions throughout their mine planning efforts and avoid the costly and error-prone
manual process of incorporating operational constraints.
Additionally, a small, but pedagogically useful, algorithm for the two-dimensional ultimate pit
problem with minimum mining width constraints was developed. This algorithm is not applicable
to real world problems, but helps to elucidate several aspects of this problem and may have
further applications in future developments.
CHAPTER 5
THE BLOCK SCHEDULING PROBLEM WITH OPERATIONAL CONSTRAINTS
The block scheduling problem contends with many more constraints than the ultimate pit
problem, and is far more complicated. This chapter presents initial work and provides
formulations for incorporating minimum mining width constraints into the block scheduling
problem. Additionally, this chapter provides some initial insight into how these larger problems can
be solved with the Bienstock-Zuckerberg algorithm, the current best practice approach to solving
the direct block scheduling problem.
Section 5.1 introduces the block scheduling problem and describes two of the formulations
which were suggested by Johnson in 1968 [44] and are the most commonly used. The first
formulation uses so-called at variables, which are natural, but are not always the most efficient
choice. The second uses by variables which have nice mathematical properties, especially with
precedence and sequencing constraints. Finally, this section introduces some of the operational
constraints that are most relevant in block scheduling.
Section 5.2 describes how minimum mining width constraints and minimum pushback width
constraints can be incorporated into the block scheduling problem with auxiliary variables in a
similar fashion to the ultimate pit problem with minimum mining widths discussed in Chapter 4.
Formulations are provided for both at and by variables.
Section 5.3 discusses how the Bienstock-Zuckerberg algorithm may be applied to solve direct
block scheduling problems with operational constraints. Relevant concerns regarding
integerization, variations to the conventional Bienstock-Zuckerberg algorithm, and
implementation details are discussed.
Finally, Section 5.4 presents two brief case studies applying the outlined approach to a very
small dataset and the realistic, well known McLaughlin gold mine dataset. The impact of relevant
operational constraints on the NPV of the open-pit mining project is evaluated.
5.1 Block Scheduling Preliminaries
The block scheduling problem is more complex than the ultimate pit problem and contains
more variables, more constraints, and does not afford a nice network flow based solution. Where
the ultimate pit problem has a single variable which is set to 1 if a block is mined and 0
otherwise, in the block scheduling problem a set of variables is required for each block in order
to specify when that block will be mined and how it will be routed. This increases the
dimensionality of the solution space but allows for an objective function which aims to maximize
NPV instead of simply the contained undiscounted economic value.
Often, yet another dimension is added to each block variable to indicate which process that
block is routed to. In the ultimate pit problem the economic block value of any given block
assumes that the block goes to the highest value process available. In the block scheduling
problem this routing is often left as a choice to be determined by the optimization procedure
alongside realistic constraints on the destinations. For example, generally only so many blocks are
allowed to be routed to the mill within a particular time period.
Additionally, some block scheduling problems consider stockpiling, uncertainty, interactions
with other mining complexes, and more. One of the fundamental challenges of the direct block
scheduling problem is that each formulation is typically customized for a particular mining
application, and it is difficult to specify a general purpose formulation that is usable in all
scenarios. However, there are some common elements, and many constraints are of a similar
mathematical form. In the following two sections the two essential forms of most block scheduling
problems are presented.
5.1.1 At Variables
The first main type of block scheduling formulation uses variables which are called at
variables, and are specified to be 1 if a block is mined at a particular time and sent to a specific
destination. This is the most natural method for specifying many of the other types of constraints
in the block scheduling problem. A typical block scheduling problem using at variables may begin
with the following:
Sets:
• b ∈ B, the set of all blocks.
• b̂ ∈ B̂_b, the set of antecedent blocks that must be mined if block b is to be mined.
• t ∈ T, the ordered set of all time periods (often yearly).
Where Equation 5.1 is the objective function using the pre-computed discounted block values.
The constraints in Equation 5.2 are the reserve constraints which enforce that each block is mined
at most once.
Equation 5.3 defines the precedence constraints. Because this formulation uses at variables a
precedence constraint between block b and b̂ can be read as: before the base block b is mined to
any destination, all of its antecedent blocks must have been mined to some destination in some
earlier or equivalent time period. The t′ ≤ t limit on the second summation on the righthand side
of Equation 5.3 is what requires the set of time periods to be an ordered set.
Equation 5.4 contains the mining capacity constraints on a per period basis, simply ensuring
that the total mined tonnage in a period, regardless of which destination is chosen, is below some
threshold.
Equations 5.5 and 5.6 are the maximum and minimum capacity thresholds on a per
destination basis for each time period. The minimum capacity is generally used to ensure that the
mill receives a sufficient quantity of ore within a period to remain operational. Some destinations,
such as a waste dump, may not have a minimum capacity.
Equation 5.7 is an example of a blending constraint that enforces a lower bound on the
average grade requirements at each process destination in each period.
Finally, Equation 5.8 enforces integrality on the at variables, and precludes meaningless
variable values.
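As a reading aid, a hedged sketch of the general at-variable form described above follows; the symbols m_b (block tonnage) and u_t (period mining capacity) are assumptions, and the exact Equations 5.1 to 5.8 may differ in detail.

    maximize    Σ_{b∈B} Σ_{t∈T} Σ_{d∈D} c_{b,t,d} X_{b,t,d}                          (discounted value, cf. 5.1)
    subject to  Σ_{t∈T} Σ_{d∈D} X_{b,t,d} ≤ 1                    ∀ b ∈ B             (reserve, cf. 5.2)
                Σ_{d∈D} X_{b,t,d} − Σ_{t′≤t} Σ_{d∈D} X_{b̂,t′,d} ≤ 0
                                                                 ∀ b ∈ B, b̂ ∈ B̂_b, t ∈ T   (precedence, cf. 5.3)
                Σ_{b∈B} Σ_{d∈D} m_b X_{b,t,d} ≤ u_t              ∀ t ∈ T             (mining capacity, cf. 5.4)
                X_{b,t,d} ∈ {0,1}                                ∀ b, t, d           (integrality, cf. 5.8)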
5.1.2 By Variables
A common reformulation used to simplify the precedence constraints, and to facilitate
decomposition approaches such as the BZ algorithm, is to replace at variables with by variables.
A by variable is 1 if and only if the block is mined “by” time period t̆ (i.e. no later than period t̆).
Similarly it is possible to apply this concept to the destinations, although the “by” name is a bit
out of place in that context. For this the possible destinations must be ordered, and a by variable
may be defined such that it is 1 if and only if block b is mined by time period t̆ − 1, or it is mined
in time period t̆ with destination d̆ such that the ‘actual’ destination d is d ≤ d̆.
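Although the exact translation rules are given in Equations 5.9 to 5.11, the core relationship follows one common convention, sketched here for a single destination:

    Z_{b,t} = Σ_{t′≤t} X_{b,t′},        X_{b,t} = Z_{b,t} − Z_{b,t−1}    (with Z_{b,0} ≡ 0)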
The full details associated with reformulating the objective and all relevant constraints are
documented in several places [101–103], and are shown in more detail in Section 5.3.2. The
precedence constraints in the by formulation follow in Equations 5.17 to 5.19.

    Z_{b,t,|D|} − Z_{b̂,t,|D|} ≤ 0       ∀ b ∈ B, b̂ ∈ B̂_b, t ∈ T    (5.17)

    Z_{b,t,d} − Z_{b,t,d+1} ≤ 0          ∀ b ∈ B, t ∈ T, d < |D|     (5.18)

    Z_{b,t,|D|} − Z_{b,t+1,|D|} ≤ 0      ∀ b ∈ B, t < |T|            (5.19)
Equation 5.17 contains the original precedence constraints which connect base blocks (b) mined by
a given period (t) to any destination (d = |D|) to the antecedent block (b̂) also mined by
that period and also to any destination. Equation 5.18 links the ‘by’ destinations together, and
Equation 5.19 does the same with the time periods.
In the condensed block scheduling problem Equation 5.14 contains all of the capacity constraints,
blending, uncertainty constraints, and everything else that does not amount to a simple
precedence constraint. The H matrix is an (|B| × |D| × |T|) × |h⃗| matrix typically with coefficients
equal to grades or tonnages, corresponding to some right hand side h⃗ which is the column vector
of right hand sides. As all of these variables are based on the ‘by’ variables it may first be
necessary to apply the rules in Equations 5.9 to 5.11 to translate the constraints which are more
familiar to mine planning engineers.
Finally, Equation 5.15 enforces the resource constraints which limit blocks to be mined not at
all, or exactly once.
5.1.3 Example Direct Block Scheduling Results
The outcome from either the at variable or by variable formulation for the block scheduling
problem is ultimately a plan which indicates both when blocks are mined, and to which
destination they are routed. To illustrate some of the most common operational concerns, which
must be addressed by incorporating operational constraints, consider the small synthetic dataset
in Figure 5.1.
For this simple 2D dataset there are two possible destinations, the mill and the dump
although these are not differentiated in the figures. There are three time periods with a
straightforward mining capacity of 300 blocks which are indicated by the three pit contours. The
blocks within the smallest pit contour are mined in period 1, between the smallest and second in
period 2, and between the second and the largest in period 3. Although this dataset is only a
synthetic 2D dataset it illustrates three operational concerns.
Figure 5.1 Synthetic 2D scheduling dataset. Left: the economic value of each block, darker is
higher. Right: A block schedule consisting of three phases
5.1.3.1 Sink Rate
The sink rate of an open pit mine is a limit on the vertical rate of advance that can realistically
be achieved in any given year. Typically an open pit mine may be limited to a maximum of six to
twelve benches a year which is a consequence of, among other things, the number of shovels in use
and the relative cost of developing access to those benches. Incorporating a sink rate into a block
schedule is simple with either at or by variables. Either force the variables corresponding to
blocks of unattainable z values to be zero or don’t generate them in the first place.
Figure 5.2 Example block schedule for the synthetic 2D scheduling dataset that satisfies a
maximum sink rate operational constraint
In the synthetic example a block schedule that incorporates an eleven bench sink rate is shown
in Figure 5.2. A useful byproduct of enforcing a maximum sink rate is that it can prevent
minimum mining width violations in certain circumstances, although this is not guaranteed.
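A minimal sketch of the ‘don't generate them in the first place’ approach described above follows; the BlockIndex structure and bench conventions are assumptions for illustration.

struct BlockIndex { int x, y, z; };  // z = bench index, increasing downward

// Returns true if a variable for mining this block in the given (1-based)
// period should be generated at all under a maximum sink rate.
bool variableAllowed(const BlockIndex& b, int period,
                     int topBench, int maxBenchesPerPeriod) {
    // The block is only reachable if its depth below the starting bench can
    // be achieved within `period` periods of vertical advance.
    return (b.z - topBench) < period * maxBenchesPerPeriod;
}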
5.1.3.2 Minimum Mining Width Constraints
The final pit limits in an operationally feasible block schedule should also satisfy minimum
mining width constraints, for all of the same reasons discussed in Section 4.1. Incorporating
minimum mining width constraints into block scheduling problems is discussed in detail in
Section 5.2.
Figure 5.3 Example block schedule for the synthetic 2D scheduling dataset that satisfies a
minimum mining width constraint of six blocks
In Figure 5.3 the pit on the right satisfies minimum mining width constraints in the final pit
limit, but each phase individually does not satisfy a minimum mining width. It is possible to
require a suitable operating area at the bottom of each phase and this may be desirable in some
circumstances.
5.1.3.3 Minimum Pushback Width Constraints
A common characteristic of block schedules that do not consider operational constraints are
very small changes between some of the pit walls in between phases. This is evident even in the
synthetic 2D example, specifically along the west wall between phase 1 and 2, where the distance
between the pit walls is only a single block. This is a very poor plan from an operational
perspective because in order to start mining in an area large equipment must be relocated to that
area first, among other preparations, which can take a long time and incur substantial operating
costs. Because shovel movement is not typically considered directly in a long-range plan it should
at least be handled implicitly by precluding these sorts of configurations.
Figure 5.4 Example block schedule for the synthetic 2D scheduling dataset that satisfies a
minimum pushback width constraint of six blocks
In Figure 5.4 a minimum pushback width of six blocks is specified. A consequence of how this
is currently modeled is that it inherently satisfies minimum mining width constraints as well. In
this synthetic example the optimal answer was, somewhat unexpectedly, to push back the west
wall of the second phase instead of snapping the walls together. This just reiterates the
importance of using optimization methods to incorporate operational constraints when possible,
because the best set of changes may be nonobvious.
Unfortunately this constraint does not translate easily to current optimization approaches in
3D. The first concern is that there needs to be a very large number of constraints, on all blocks in
all phases which can overwhelm all commercial solvers and those developed within this chapter.
However, the second concern is that properly modeling large minimum pushback width
constraints can be very difficult because the block templates do not fit nicely together, and
interact destructively with the precedence constraints and the general shape of nested pits. An
initial formulation for minimum pushback width constraints is presented in the following sections,
but this does not work well in 3D.
5.1.3.4 Additional Operational Constraints
The most impactful operational constraint considered beyond the scope of this dissertation is
bench access. Main haulage ramps, drop cuts, and access roads are part of a very important set of
operational constraints with impacts on economic viability and safety. This dissertation does not
develop any ideas or formulations on this topic.
Additionally, besides enforcing minimum pushback width constraints, shovel movement and
scheduling is beyond the scope of this dissertation. This extends also to the type of plans that
satisfy minimum mining widths and pushback constraints but still have mining areas separated by
long distances. Integer programming formulations which consider this type of operational
constraint would potentially be very complicated, as they may have to incorporate a connected
components analysis in some form.
Finally, many operational constraints that are more within the realm of short range planning
are not considered. For example, the destinations chosen for each block should satisfy some form
of minimum mining width constraint as well. As mentioned in Section 2.1.4 this particular
operational constraint is generally handled during the short range grade control process, in part
because blast movement of the material should be considered. Incorporating more operational
constraints into a block scheduling problem necessarily reduces value, and will generally increase
computation time, and may lead to currently unmanageable levels of complexity.
5.2 Width Constraints in Block Scheduling Problems
The prior formulation for minimum mining width constraints, Section 4.2.1, naturally extends
to the block scheduling problem with either at or by variables. The fundamental concept of using
an auxiliary variable to represent a set of contiguous blocks which must be treated similarly can
be used directly for both minimum mining width constraints and minimum pushback width
constraints.
5.2.1 Minimum Mining Width Constraints with at Variables
Minimum mining width constraints in the block scheduling problem require that all areas of
the pit must be a part of an operationally feasible area. However, they do not need to all be
mined within the same phase.
For each width w ∈ W with member blocks b̄ ∈ B̄_w, as defined in Section 4.2.1, define an
auxiliary variable M_w. Then, accounting for destinations (D) and time periods (T), define the
assignment constraints as in Equation 5.20 and the enforcement constraints as in Equation 5.21.
    M_w − Σ_{d∈D} Σ_{t∈T} X_{b̄,t,d} ≤ 0       ∀ w ∈ W, b̄ ∈ B̄_w        (5.20)

    X_{b,t,d} − Σ_{w̄ ∈ W̄_b} M_w̄ ≤ 0          ∀ b ∈ B, t ∈ T, d ∈ D    (5.21)
Each assignment constraint, Equation 5.20, is no longer as simple as in the ultimate pit
problem because a width, w, may be assigned a value of 1 if its contained block b̄ is mined to any
destination in any time period. The enforcement constraints remain of a similar mathematical
form, however there are many more of them. It may be possible to reduce the number of
enforcement constraints by requiring them only on blocks of the last phase, t = |T|, but this is only
applicable in certain datasets, and may lead to minimum mining width violations in some cases.
5.2.2 Minimum Pushback Width Constraints with at Variables
With minimum pushback width constraints the number of auxiliary variables must increase,
because a block must be mined as a part of an operationally feasible area which is all mined within
the same period. Therefore the auxiliary variable will have two indices, M_{w,t}, w ∈ W, t ∈ T, and
the assignment and enforcement constraints follow in Equations 5.22 and 5.23.
    M_{w,t} − Σ_{d∈D} X_{b̄,t,d} ≤ 0           ∀ w ∈ W, b̄ ∈ B̄_w, t ∈ T    (5.22)

    X_{b,t,d} − Σ_{w̄ ∈ W̄_b} M_{w̄,t} ≤ 0      ∀ b ∈ B, t ∈ T, d ∈ D       (5.23)
5.2.3 Minimum Mining Width Constraints with by Variables
The by formulation also simplifies the assignment constraints when enforcing minimum mining
width constraints. It can be useful to think of a by variable as meaning: this block is mined in this period
or any of the preceding periods, or this block is mined to this destination or any of the previous
destinations. So the assignment constraints and enforcement constraints simply fall out from this
understanding in Equations 5.24 and 5.25.
    M_w − Z_{b̄,|T|,|D|} ≤ 0                   ∀ w ∈ W, b̄ ∈ B̄_w    (5.24)

    Z_{b,|T|,|D|} − Σ_{w̄ ∈ W̄_b} M_w̄ ≤ 0      ∀ b ∈ B             (5.25)
The by formulation therefore simplifies the assignment constraints back to being simple two
‘block’ precedence constraints and vastly reduces the number of enforcement constraints. Only
one enforcement constraint, on the last destination / time period, is necessary because this
variable will be 1 if and only if the block is mined to any destination in any time period.
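The sketch below shows one way to emit these constraints in a generic sparse-row form; the Row type and the column-indexing callbacks are hypothetical stand-ins for a real model builder.

#include <functional>
#include <utility>
#include <vector>

struct Row {  // encodes: sum(coefficient * variable) <= 0
    std::vector<std::pair<int, double>> terms;
};

std::vector<Row> widthConstraints(
    const std::vector<std::vector<int>>& widthMembers,      // blocks in each width w
    const std::vector<std::vector<int>>& containingWidths,  // widths containing each block b
    const std::function<int(int)>& zLast,   // column index of Z_{b,|T|,|D|}
    const std::function<int(int)>& mVar) {  // column index of M_w
    std::vector<Row> rows;
    // Assignment constraints (5.24): M_w - Z_{b,|T|,|D|} <= 0 per member block.
    for (int w = 0; w < static_cast<int>(widthMembers.size()); ++w)
        for (int b : widthMembers[w])
            rows.push_back({{{mVar(w), 1.0}, {zLast(b), -1.0}}});
    // Enforcement constraints (5.25): Z_{b,|T|,|D|} - sum of containing M_w <= 0.
    for (int b = 0; b < static_cast<int>(containingWidths.size()); ++b) {
        Row r{{{zLast(b), 1.0}}};
        for (int w : containingWidths[b]) r.terms.push_back({mVar(w), -1.0});
        rows.push_back(std::move(r));
    }
    return rows;
}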
5.2.4 Minimum Pushback Width Constraints with by Variables
Similar to at variables the number of auxiliary variables for pushback width constraints with
by variables is increased. However the same idea of using the built in ‘or’ interpretation of the by
variables is no longer possible, because this would not properly enforce minimum pushback width
constraints on earlier periods. A possible formulation for minimum pushback width constraints
follows in Equations 5.26 to 5.28.
    M_{w,1} − Z_{b̄,1,|D|} ≤ 0                            ∀ w ∈ W, b̄ ∈ B̄_w           (5.26)

    M_{w,t} − Z_{b̄,t,|D|} + Z_{b̄,t−1,|D|} ≤ 0            ∀ w ∈ W, b̄ ∈ B̄_w, t > 1    (5.27)

    Z_{b,t,|D|} − Σ_{w̄ ∈ W̄_b} M_{w̄,t} ≤ 0               ∀ b ∈ B, t ∈ T              (5.28)
5.3 Applying the Bienstock-Zuckerberg Algorithm
The Bienstock-Zuckerberg algorithm, Section 2.5.4, is the best current known approach to
solving the linear relaxation of the general block scheduling problem with by variables, and can
handle problems that are too large for conventional general purpose linear programming solvers
by taking advantage of the large network substructure. Additionally, there are at least two
approaches to constructing an integer feasible solution once the linear relaxation is identified: the
TOPOSORT heuristic from Chicoisne et al [104], and Aras’ procedure which modifies the BZ
algorithm and adds additional steps [103]. The TOPOSORT heuristic is limited to upper
bounded capacity constraints and cannot handle arbitrary constraints such as blending or lower
bounds on capacity. Aras’ procedure does not restrict the form of any of the side constraints.
The BZ algorithm is generally very performant because it dualizes all of the more general
constraints, Equation 5.14, and uses the pseudoflow algorithm to solve a sequence of constructed
subproblems. The relevant duals are determined by solving a linear master problem which uses
variables corresponding to aggregated collections of many blocks. These aggregated variables are
constrained to be orthogonal, such that each original variable is in exactly one aggregate.
The BZ master problem follows in Equations 5.29 to 5.32.

    maximize    c⃗Vλ                                 (5.29)
    subject to  λ_i − λ_j ≤ 0    ∀ (i,j) ∈ J        (5.30)
                HVλ ≤ h⃗                             (5.31)
                0 ≤ λ ≤ 1                            (5.32)
In this matrix notation based representation V is the |λ| × |Z| orthogonal 0,1 matrix which
specifies which Z variables are within each orthogonal pit variable λ. The helper list of
precedence constraints J in Equation 5.30 is the subset of the original precedence and by constraints
as in Equations 5.17 to 5.19 that are necessary for the orthogonal aggregates. Equation 5.31 contains
all of the original capacities and other side constraints. The duals on this set of constraints, π, are
then used in the subproblem.
The BZ subproblem follows in Equations 5.33 to 5.35.

    maximize    c⃗Z − π(HZ − h⃗)                      (5.33)
    subject to  Z_i − Z_j ≤ 0    ∀ (i,j) ∈ I        (5.34)
                Z ∈ {0,1}                            (5.35)
This subproblem can then be solved by taking the dual (yet again) and using pseudoflow. The
solution is then incorporated into V which creates many more orthogonal aggregates.
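A high-level sketch of this loop follows; the three callbacks are hypothetical hooks into the master LP solver, the pseudoflow subproblem, and the orthogonalization step, and the stopping rule shown is a simplification of the actual convergence test.

#include <functional>
#include <vector>

using Columns = std::vector<std::vector<int>>;  // orthogonal aggregates of variable ids

void bienstockZuckerberg(
    Columns& columns, int maxIterations,
    const std::function<std::vector<double>(const Columns&)>& solveMaster,  // returns duals pi
    const std::function<std::vector<int>(const std::vector<double>&)>& solveSubproblem,
    const std::function<void(Columns&, const std::vector<int>&)>& refine) {
    for (int it = 0; it < maxIterations; ++it) {
        std::vector<double> pi = solveMaster(columns);  // duals on the side constraints
        std::vector<int> pit = solveSubproblem(pi);     // pseudoflow on penalized values
        std::size_t before = columns.size();
        refine(columns, pit);                           // split aggregates against the new pit
        if (columns.size() == before) break;            // nothing new was generated: stop
    }
}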
5.3.1 Operational Constraints in the BZ Algorithm
The formulation for minimum mining width constraints (Section 5.2.3) is well suited for
incorporation into the BZ algorithm. The auxiliary variables M_w fit nicely with the original Z_{b,t,d}
variables, and the assignment constraints (Equation 5.24) are just another set of precedence
constraints that can be incorporated into both the master problem and subproblem alongside all
the others. A minor concern arises with the enforcement constraints.
In Munoz et al’s review of the Bienstock-Zuckerberg algorithm they state:
The BZ algorithm is very effective when the number of rows in H and the number of
[additional] variables is small relative to the number of [Z] variables. [102].
This also arises from one of the central tenets in the BZ algorithm. Owing to the totally
unimodular nature of the main precedence constraint submatrix there will be at most |h⃗| unique
fractional values in the final optimal result which, in the worst case, would all need individual
orthogonal aggregations [101]. This is typically not a major concern, because there are generally
relatively few rows in the H matrix corresponding to the dual multipliers on the capacity,
blending, and similar side constraints. However, the enforcement constraints upset this balance
substantially – because there is an enforcement constraint on every block in every time period for
the minimum pushback width constraint, and even the negative valued blocks require enforcement
constraints owing to potential interactions with the side constraints. This increases the likelihood
of many unique fractional values in the linear relaxation solution which may lead to slower
convergence.
Fortunately some of the examples considered earlier in Section 4.6 do not exhibit
this worst case. The ultimate pit problem is a special case of the block scheduling problem where
the number of time periods and destinations are one, and there are no side constraints. The
number of unique values in this special case is exactly two (zero and one), but with the addition
of even hundreds of thousands of enforcement constraints the number of fractional values does not
increase substantially. The biggold 3x3 dataset, for example, has 1,411 binding enforcement
constraints at optimality of the 270,000 original enforcement constraints and only 953 unique
fractional values for the 285,000 partially mined blocks.
5.3.2 Example BZ Subproblem with Operational Constraints
The subproblem in the BZ algorithm with by variables for multiple time periods, multiple
destinations, and operational constraints can become very large and must be constructed
carefully. There are multiple sets of precedence constraints consisting of those inherent in the by
variable reformulation, those required to enforce geotechnical stability, and those owing to the
assignment constraints. In order to further explain how these precedence constraints must be
constructed and the overall nature of the subproblem, a small example follows in this section.
The example block model in Figure 5.5 consists of eight blocks: three on the lower bench and
five on the upper bench. Each of the three lower blocks depends on three blocks in the upper
bench indicated by the directed arcs. The numbers within the blocks are their respective block
indices. In addition, two sets of two blocks for operational constraints (blocks one and two, and
blocks two and three) are indicated with the dashed ellipses.
Figure 5.5 A small example block model used to illustrate the BZ subproblem
If each block is allowed to be mined in one of two time periods and routed to one of three
destinations, there are ultimately 8 × 2 × 3 = 48 individual block nodes in the subproblem. If the
operational constraints are initially ignored the subproblem follows in Figure 5.6. Now for each
block there are six nodes which are notated with a three digit number such that the first digit is
the original block index, the second digit is the time period index and the third digit is the
destination index.
Figure 5.6 The base precedence constraints and block nodes in the BZ subproblem. Source and
sink arcs are omitted.
For example if block one is to be routed to the second destination in the second phase that
means that node 122 (the gray node in Figure 5.6) must be mined. This requires nodes 123, 423,
523, and 623 to also be mined, which is essentially saying: If block one is to be mined in period
two, then blocks four, five and six must also have been mined by period two, to any destination.
This does not preclude mining those other blocks in earlier time periods or to other destinations;
the precedence constraints just say they must be mined by at least the same time period to any
destination.
One important aspect of the subproblem highlighted by the example is that the by variable
reformulation for multiple destinations increases the size of the subproblem unnecessarily. That
is, the nodes corresponding to destinations of lower index can be combined together into a single
node that takes the value of the maximum valued destination. This is an important optimization
which reduces the size of the subproblem substantially, and is further discussed in several
references [101–103].
When operational constraints are included additional nodes and precedence constraints are
required. For this example if minimum mining width constraints are enforced on the final pit
limits there are two additional nodes that must connect to 123 and 223, and 223 and 323
respectively as shown in Figure 5.7. Additionally in Figure 5.7 the destinations are collapsed into
a single node each.
Figure 5.7 BZ Subproblem with collapsed destination nodes and two minimum mining width
constraints. Source and sink arcs are omitted.
5.3.3 Integerization
The Bienstock-Zuckerberg algorithm solves the linear relaxation of the direct block scheduling
problem which allows for partially mining blocks and creates inoperable schedules that are not
directly usable for downstream applications. Therefore it is necessary to create integer feasible
schedules through some additional means.
Chicoisne et al propose the TOPOSORT heuristic which uses the linear relaxation result as a
guide to round the partially mined blocks to integer values while respecting some of the original
constraints. Additionally they propose a local search heuristic that is used in combination with
the TOPOSORT heuristic to obtain integer feasible solutions.
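A rough sketch of a TOPOSORT-style rounding pass follows; it conveys the flavor of the heuristic but differs in detail from Chicoisne et al's exact procedure, and all names and the single aggregate capacity are assumptions.

#include <algorithm>
#include <vector>

struct LpBlock {
    int id;                         // assumed to lie in [0, n)
    double expectedTime;            // e.g. sum over t of t * (fraction mined in t)
    std::vector<int> predecessors;  // antecedent block ids
    double tonnage;
};

// Greedily rounds an LP schedule: blocks are visited in order of fractional
// completion time and placed in the earliest period with remaining capacity
// that does not violate precedence. Returns the period per block (-1 = unmined).
std::vector<int> toposortRound(std::vector<LpBlock> blocks, int periods,
                               double capacityPerPeriod) {
    std::vector<int> assigned(blocks.size(), -1);
    std::vector<double> remaining(periods, capacityPerPeriod);
    std::sort(blocks.begin(), blocks.end(),
              [](const LpBlock& a, const LpBlock& b) {
                  return a.expectedTime < b.expectedTime;
              });
    for (const LpBlock& b : blocks) {
        int earliest = 0;
        bool feasible = true;
        for (int p : b.predecessors) {  // every antecedent must already be scheduled
            if (assigned[p] < 0) { feasible = false; break; }
            earliest = std::max(earliest, assigned[p]);
        }
        if (!feasible) continue;
        for (int t = earliest; t < periods; ++t) {
            if (remaining[t] >= b.tonnage) {  // earliest period with room
                assigned[b.id] = t;
                remaining[t] -= b.tonnage;
                break;
            }
        }
    }
    return assigned;
}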
Aras describes a procedure for computing an integer feasible solution following the application
of their modified BZ algorithm [103]. In practice their approach works well, and the gap between
the LP solution as computed by BZ and the IP is generally very small.
One potential avenue for future work is to use the BZ algorithm as part of a branch and
bound integerization process. Intelligently selecting the variables to restrict to integer values in
the branching process may allow for higher quality integer solutions although the sheer number of
variables could lead to problems. The orthogonal columns could be retained across levels of the
tree to allow for the master problem to be solved more efficiently without having to start over
from the beginning. This remains to be explored.
5.3.4 Implementation Details
The BZ algorithm can be implemented relatively efficiently in a computer program on top of
two major components: a solver for linear programs that provides dual values on relevant
constraints, and a flow based solver for solving the constructed ultimate pit problem instances.
The difficulty of the implementation is in ensuring that all relevant bookkeeping information is
routed correctly and the master and sub problems are constructed correctly. A prototype
implementation of the BZ algorithm using the Gurobi C++ application programming interface to
solve the master problem and MineFlow to solve the constructed subproblems was developed.
None of the reviewed discussions about the BZ algorithm describe the data structure used to
store the orthogonalized columns. A na¨ıve approach is not recommended as incorporating new
columns from the subproblem and computing the new orthogonalized pits can be a laborious
process. The partition refinement data structure [141, 142] is one high quality data structure for
this component of a BZ implementation and can easily be extended to maintain the value of each
orthogonalized pit and all of the information required to create the master problem’s constraints.
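A minimal sketch of the core refinement operation follows; the value tracking and master-problem bookkeeping mentioned above are omitted.

#include <unordered_set>
#include <vector>

using Aggregate = std::vector<int>;  // variable ids in one orthogonal column

// Splits every aggregate against an incoming pit so that all aggregates stay
// orthogonal: each is divided into the ids inside the pit and those outside.
void refineWith(std::vector<Aggregate>& partition,
                const std::unordered_set<int>& pit) {
    std::vector<Aggregate> result;
    for (const Aggregate& agg : partition) {
        Aggregate inside, outside;
        for (int v : agg)
            (pit.count(v) ? inside : outside).push_back(v);
        if (!inside.empty()) result.push_back(std::move(inside));
        if (!outside.empty()) result.push_back(std::move(outside));
    }
    partition = std::move(result);
}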
The subproblem ultimate pit instances remain the same size throughout the decomposition
process, and relatively few block values are modified by the duals from the previous master
problem solution. Therefore it is important to use a solver that can re-use the previous solution’s
information to more rapidly compute the next.
However the size of the master problem does grow rapidly as additional columns are
incorporated and orthogonalized. In order to prevent the number of columns from reaching
unmanageable levels Bienstock and Zuckerberg propose a coarsification process which, when
necessary, replaces the collection of orthogonal aggregates with some smaller set that spans the
current solution by having only one orthogonal aggregate for each unique value of λ. Special care
must be taken to prevent cycling. Munoz et al suggest only applying this coarsification process on
iterations where the value of the objective (Equation 5.29) strictly increases [102].
Interestingly, this coarsification process did not yield improved convergence in the cases
considered here. Instead when coarsification was applied the process took many additional
iterations that obviated any runtime improvements realized by solving the master problem more
quickly with fewer variables. This process should be applied sparingly, perhaps only when the
master problem reaches truly unmanageable levels, or only on those columns that are not a part
of the current best LP solution. Another possible explanation is that the problems considered
herein were not sufficiently sophisticated to necessitate the coarsification process. Problems with
more variables, more constraints, or more difficult types of constraints may benefit from the
coarsification process.
When possible it is generally worth seeding the master problem with a more useful initial set
of orthogonal columns. This includes splitting up all of the blocks by time period, destination,
and potentially even by bench. In some cases this led to as much as a 50% reduction in run time
compared to beginning with all of the blocks in one single column. Determining the best
strategies for initializing, splitting up, and merging these columns is a ripe area for future
research especially in the context of complicated side constraints.
5.3.4.1 MineLib Results
The MineLib library of test problem instances contains eleven constrained pit limit problems,
or ‘cpit’ problems, which are a special case of the general direct block scheduling problem [138].
The only side constraints considered in the constrained pit limit problems are resource constraints
as in Equations 5.5 and 5.6. These problems were used to verify the developed prototype BZ
implementation. The problem instances and results are tabulated in Table 5.1 and Table 5.2.
Table 5.1 Summary information of the MineLib ‘cpit’ problem instances.
Name              Blocks     Precedence Constraints  Phases  Side Constraints
newman1 1,060 3,922 6 12
zuck small 9,400 145,640 20 40
kd 14,153 219,778 12 12
zuck medium 29,277 1,271,207 15 30
p4hd 40,947 738,609 10 20
marvin 53,271 650,631 20 40
w23 74,260 764,786 12 36
zuck large 96,821 1,053,105 30 60
sm2 99,014 96,642 30 60
mclaughlin limit 112,687 3,035,483 15 15
mclaughlin 2,140,342 73,143,770 20 20
The linear relaxation objective values as computed by the prototype BZ algorithm deviate
slightly, by less than 1%, from the reported results in Espinoza et al. [138]. Upon closer inspection,
this is caused by the provided solution files from MineLib not always adhering to the capacity
constraints precisely. This may be due to inexact tolerances or numerical instability, as many of
the values in the MineLib dataset are large when considered in the context of floating point
numbers, especially when accounting for how the aggregation process sums many value and
tonnage coefficients together. Overall, the results from MineLib and the prototype BZ
implementation for the linear relaxation are in agreement, and these small discrepancies do not
have a significant impact.
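The adherence check used for this diagnosis is simple to reproduce; the sketch below assumes a hypothetical data layout, with per-period block lists and per-resource tonnage vectors and capacities.

```python
def capacity_overshoots(schedule, tonnage, capacity, tol=1e-6):
    """Return the (resource, period) pairs whose summed tonnage exceeds
    capacity by more than tol.

    schedule: mapping period -> list of block indices mined that period
    tonnage:  mapping resource -> list of per-block tonnages
    capacity: mapping (resource, period) -> allowed tonnage
    """
    overshoots = {}
    for period, mined in schedule.items():
        for resource, tons in tonnage.items():
            total = sum(tons[b] for b in mined)
            limit = capacity[(resource, period)]
            if total > limit + tol:
                overshoots[(resource, period)] = total - limit
    return overshoots
```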
The prototype BZ implementation was extended to compute the IP feasible results as well,
and the results are summarized in Table 5.2. In all eleven cases the prototype BZ implementation
was able to find schedules with higher objective value than those reported by MineLib.
Table 5.2 IP results from applying the prototype BZ implementation to the MineLib ‘cpit’
problem instances.
Name              MineLib Objective  Prototype BZ Objective  Percent Improvement  Elapsed Time
newman1 23,483,671 24,176,579 3.0% 9s
zuck small 788,652,600 789,066,986 0.1% 40s
kd 396,858,193 402,485,039 1.4% 16s
zuck medium 615,411,415 618,075,337 0.4% 1m 22s
p4hd 246,138,696 247,089,680 0.4% 39s
marvin 820,726,048 822,163,289 0.2% 38s
w23 392,226,063 393,068,316 0.2% 2m 26s
zuck large 56,777,190 56,846,147 0.1% 14m 24s
sm2 1,645,242,774 1,647,879,436 0.2% 12m 32s
mclaughlin limit 1,073,327,197 1,075,862,841 0.2% 3m 4s
mclaughlin 1,073,327,197 1,075,930,704 0.2% 10m 37s
The prototype BZ implementation uses a combination of Aras’s approach and the
TOPOSORT heuristic to solve for the final integer feasible result. The orthogonal aggregates are
pre-seeded by bench, phase, and a few nested pits calculated without any side constraints before
calculating the linear relaxation result. At this stage the TOPOSORT heuristic is applied to
compute a high quality satisfying result, but is not taken as the final answer. This solution is
orthogonalized into the original aggregates and the whole set of columns is handed off to Gurobi
to compute the final integer feasible solution. In the largest MineLib example, mclaughlin, this
final IP had over 30,000 columns, which is far fewer than the original 2,140,342 × 20 = 42,806,840
nodes. In practice this combined approach performs well.
5.4 Case Studies
The proposed methodology is applied to a small 3D example with three phases and the well-known
McLaughlin gold deposit with three destinations and ten phases. Note that although the
name is shared, this McLaughlin dataset is subtly different from the one included in MineLib.
5.4.1 Small 3D Example
This small example was first used in Dağdelen 1985 [27]. The model is a synthetic, small, high
grade copper deposit with 5,400 blocks arranged in a 30 × 30 × 6 regular block model. Each block
measures 100 × 100 × 45 feet. The model contains 64 blocks of ore with an average grade of 3.7%
copper content. A three time period schedule is sought with capacities of 19 ore tons in the first
period, 21 ore tons in the second, and 24 ore tons in the third.
Dağdelen’s schedule, as computed with nine rounds of discounting block values, achieves an
NPV of $139,569 without operational constraints. The prototype BZ implementation, without
operational constraints, achieves an NPV of $140,660. With minimum mining width constraints
corresponding to 2 × 2 blocks, the NPV is reduced to $138,527, and with 3 × 3 minimum mining
width constraints the NPV is reduced to $134,536. Planar sections through the schedules are
given in Figure 5.8.
5.4.2 McLaughlin Dataset
The McLaughlin mining complex is simple, with three possible destinations for each block: a
mill, a leach pad, and a waste dump. The economic parameters used in this case study are carried
over from Aras [103] and tabulated in Table 5.3.
Table 5.3 Economic parameters used in the McLaughlin case study. Same as Aras 2018 [103].
Parameter Value
Gold price 1,250 $/oz
Mill cost 12 $/t
Leach cost 6 $/t
Mill recovery 90%
Leach recovery 70%
Discount rate 12.5%
The first step is to compute the original ultimate pit limits using MineFlow. For the ultimate
pit limit the discount rate is not used and each block is assumed to be routed to the highest value
destination. No capacity constraints or other side constraints are considered, and for this first
calculation no operational constraints are included. The naïve ultimate pit is shown in Figure 5.9;
it mines only 258,054 of the 2,847,636 input blocks and achieves an undiscounted value of $2.2
billion with these parameters and assumptions. Constant 45° precedence constraints using eight
benches of arcs were used.
The ultimate pit with minimum mining width constraints was also calculated using the
bounding procedure and the methodology developed in Chapter 4. For this case study the
ultimate pit satisfying a 5 × 5 minimum mining width reduces the contained undiscounted value by
$10 million and reduces the overall size of the pit by 3,225 blocks. This reduction in value
represents bringing the unrealistic original value closer to an actually attainable value.
The blocks within the mining width feasible ultimate pit were extracted and used for the
direct block scheduling procedure using the prototype Bienstock-Zuckerberg algorithm built on
Gurobi and MineFlow. The process capacities, which form the main side constraints, for each
period were taken from Aras 2018 and are tabulated in Table 5.4.
Table 5.4 Process capacity by time period. Same as Aras 2018 [103].
Time Period Mill Capacity (Tons) Leach Capacity (Tons)
1 1,500,000 1,500,000
2 1,750,000 1,750,000
3 2,000,000 2,000,000
4 2,750,000 2,750,000
5 3,000,000 3,000,000
6 3,000,000 3,000,000
7 2,750,000 2,750,000
8 2,000,000 2,000,000
9 1,750,000 1,750,000
10 1,500,000 1,500,000
With these 20 side constraints and no operational constraints, the overall NPV of the project
as computed with the prototype BZ implementation is $1,485,402,500. The linear relaxation has
a value of $1,489,951,000, for a gap of roughly 0.3%. Adding a 5 × 5 minimum mining
width constraint reduces the NPV to $1,484,989,000 but increases the compute time from five
minutes and eight seconds to 53 minutes and 24 seconds. Cross sections of both of these results
are shown in Figure 5.10.
In this dataset the negative values of the waste blocks are small relative to the ore block values,
and it is more economical to expand the pit in most areas to satisfy minimum mining width
constraints. Additionally, once the pit has been expanded to satisfy minimum mining width
constraints, additional areas become economic to recoup some of that cost. This behavior should
not be expected in all datasets.
CHAPTER 6
CONCLUSIONS
The main contribution of this dissertation is a methodology and program for solving the
ultimate pit problem with minimum mining width constraints; however, contributions were also
made to the original ultimate pit problem and the block scheduling problem. The specific
structure of the ultimate pit problem allows for modest optimizations to the conventional
pseudoflow algorithm. The ultimate pit problem with minimum mining width constraints is now
within reach even for very large models with tens of millions of blocks and hundreds of millions
of constraints. High quality solutions can be calculated very rapidly using the bounding
procedures developed herein along with the Lagrangian relaxation guided solver. Finally,
flexible formulations for the block scheduling problem with operational constraints, alongside an
initial prototype BZ solver capable of solving realistic models, were presented. Each of the main
contributions is summarized here, followed by ideas for future work, and final comments.
6.1 An Improved Ultimate Pit Solver – MineFlow
The ultimate pit problem remains a relevant problem in long range open-pit mine planning,
either as a standalone problem in the early stage of the project or as a subproblem in more
complicated optimization procedures or design applications. Solving for the ultimate pit as
quickly as possible is a worthwhile goal that benefits both academia and industry alike.
MineFlow, developed in Chapter 3, is a strong contender for the fastest ultimate pit solver
currently available, taking advantage of several important optimizations that are possible
specifically in the ultimate pit problem where only the minimum cut is desired and all arcs are of
a similar form. The notation developed in this chapter should also be of moderate pedagogical
value for those interested in solving for ultimate pits with pseudoflow. The software, which is
readily available online or from the author, should continue to be an important and useful tool
for open-pit mine planning.
Additionally, this chapter saw the development of useful ideas on how best to generate and
evaluate precedence constraints, with a heavy emphasis on efficiency and accuracy. The
importance of changing the conventional paradigm from generating all precedence constraints
before solving to starting the solve immediately and generating precedence constraints only as
necessary was also highlighted.
Future work on this topic could be to evaluate the recent work from Chen et al. in 2022 [88],
which describes an algorithm for determining the maximum flow in near-linear time. Although
there may be challenges with creating a workable implementation, some of the ideas may
translate into the currently more practical methods. Additionally, there may be room for further
developments by switching between a depth first and breadth first strategy for incorporating
precedence constraints that leads to normalized trees of higher quality with fewer splitting
operations. This could have a tangible impact on the overall solution time.
6.2 Ultimate Pit Problem with Minimum Mining Width Constraints
Incorporating minimum mining width constraints directly into the ultimate pit optimization
process is of the utmost importance in the early stages of long-range open-pit mine planning.
Ignoring these constraints leads to unrealistic pits which overestimate the value of any given
mineral deposit. These unrealistic pit values can lead to costly suboptimal decisions, and
unwelcome surprises during the manual design and refinement process later.
This chapter saw the development of a concise, simple, and powerful formulation for the
ultimate pit problem with minimum mining width constraints and several viable solution
approaches. An extensive computational comparison was completed in order to validate that the
work described in this chapter is actually applicable to a wide range of realistic datasets and
deposits.
Additionally, methods were developed to generate inner and outer bounding pits which vastly
reduce the size of the problem. This is necessary because it is shown that the ultimate pit
problem with minimum mining width constraints is NP-complete, which is a valuable result for
future researchers looking to incorporate operational constraints into their open-pit mine planning
problems. This result helps to protect future researchers from spending fruitless efforts trying to
develop a polynomial time approach specifically for this problem.
Future work on the ultimate pit problem with minimum mining width constraints could be
focused on improved heuristics, or even a completely different formulation that has different
useful characteristics. The Lagrangian relaxation guided approach contains a step which evaluates
‘nearby’ satisfying pits in an effort to generate higher valued solutions that do not immediately
fall out of the iterative process. This step deserves additional effort to improve its speed and its
ability to generate high quality nearby solutions.
The Bienstock-Zuckerberg algorithm, when combined with the Lagrangian relaxation guided
solver, proved to generate the best result in all cases evaluated.
Finally, although the commercial branch and bound based optimizers were unable to make
much headway on the larger models, their abilities on the smaller models are promising, and
combining the approaches developed in this chapter with the commercial solvers may be
useful. For example, the approaches developed herein could be used to provide so-called MIP
starts, or additional bounding information, which could lead to higher quality results faster.
6.3 The Block Scheduling Problem with Operational Constraints
The block scheduling problem is far more complicated than the single time period ultimate pit
problem, but also more realistic and potentially more useful. Formulations for incorporating
operational constraints, including minimum mining width constraints and minimum pushback
width constraints, were developed in Chapter 5 for the most common variable types.
Additionally, a prototype Bienstock-Zuckerberg based solver was developed using Gurobi for the
master problem and MineFlow for the subproblem. This solver takes advantage of the nature of
the operational constraints and provides operationally feasible block scheduling solutions rapidly.
Future work for the block scheduling problem with operational constraints may include efforts
to create even more operationally realistic schedules that account for such concerns as bench
access. Formulations for these additional operational constraints are expected to be quite
complicated. Additionally, methods for further enhancing the Bienstock-Zuckerberg algorithm
could be considered. Strategies for managing the orthogonal columns more effectively to balance
the time spent solving the master problems versus the overall convergence rate should be
investigated.
6.4 Final comments
In this dissertation a computationally efficient and flexible approach to incorporating
minimum mining width constraints into the ultimate pit problem was presented alongside modest
[14] William Lowrie and Andreas Fichtner. Fundamentals of geophysics. Cambridge University
Press, 2020.
press, 2020.
[15] Clayton V Deutsch, André G Journel, et al. Geostatistical software library and user’s guide.
Oxford University Press, 8(91):0–1, 1992.
[16] Edward H Isaaks and Mohan R Srivastava. Applied geostatistics. Oxford University Press,
1989.
[17] Andre G Journel and Charles J Huijbregts. Mining geostatistics. Blackburn Press, 1976.
[18] M Jamshidi and M Osanloo. Determination of block economic value in multi-element
deposits. In 6th International Conference in Computer Applications in the Minerals
Industries. Istanbul, Turkey, 2016.
[19] T Tholana, C Musingwini, and MM Ali. A stochastic block economic value model. In
Proceedings of the Mine Planners Colloquium 2019: Skills for the Future—Yours and Your
Mine’s, pages 35–50. The Southern African Institute of Mining and Metallurgy,
Johannesburg, 2019.
[20] Helmut Lerchs and Ingo Grossmann. Optimum design of open-pit mines. In Operations
Research, volume 12, page B59, 1965.
[21] James W Gilbert. A mathematical model for the optimal design of open pit mines. PhD
thesis, University of Toronto, 1966.
[22] Michael P Lipkewich and Leon Borgman. Two-and three-dimensional pit design
optimization techniques. A decade of digital computing in the mineral industry, pages
505–523, 1969.
[23] T Chen. 3d pit design with variable wall slope capabilities. In 14th symposium on the
application of computers and operations research in the mineral industries (APCOM), New
York, 1976.
[24] Louis Caccetta and Lou Giannini. Generation of minimum search patterns in the optimum
design of open pit mines. AIMM Bull. Proc., 293:57–61, 07 1988.
[25] Reza Khalokakaie, Peter A Dowd, and Robert J Fowell. Lerchs–Grossmann algorithm with
variable slope angles. Mining Technology, 109(2):77–85, 2000.
[26] Seyed-Omid Gilani and Javad Sattarvand. A new heuristic non-linear approach for
modeling the variable slope angles in open pit mine planning algorithms. Acta Montanistica
Slovaca, 20(4):251–259, 2015.
[27] Kadri Dağdelen. Optimum multi period open pit mine production scheduling. PhD thesis,
Colorado School of Mines, 1985.
[28] Boleslaw Tolwinski and Robert Underwood. A scheduling algorithm for open pit mines.
IMA Journal of Management Mathematics, 7(3):247–270, 1996.
[29] Georges Matheron. Le paramétrage des contours optimaux. Technique notes, 401:19–54,
1975.
[30] R Vallet. Optimisation mathématique de l’exploitation d’une mine à ciel ouvert ou le
problème de l’enveloppe. Annales des Mines de Belgique, pages 113–135, 1976.
[31] Kadri Dağdelen and Dominique François-Bongarçon. Towards the complete double
parameterization of recovered reserves in open pit mining. Proceedings of 17th international
APCOM symposium, pages 288–296, 1982.
[32] Kadri Dağdelen. Cutoff grade optimization. Preprints-Society of Mining Engineers of
AIME, 1993.
[33] Kenneth F Lane. The economic definition of ore: cut-off grades in theory and practice.
Mining Journal Books London, 1988.
[34] Jeff Whittle. A decade of open pit mine planning and optimization-the craft of turning
algorithms into packages. Proceedings of the APCOM 99 symposium, 1999.
[35] E. Isaaks, I. Treloar, and T. Elenbaas. Optimum dig lines for open pit grade control. In
Proceedings of Ninth International Mining Geology Conference 2014, pages 425–432. The
Australasian Institute of Mining and Metallurgy: Melbourne, 2014.
[36] C.T. Neufeld, K.P. Norrena, and C.V. Deutsch. Guide to geostatistical grade control and dig
limit determination. Guidebook Series, 1:63, 2005.
[37] M Tabesh and H Askari-Nasab. Automatic creation of mining polygons using hierarchical
clustering techniques. Journal of Mining Science, 49(3):426–440, 2013.
[38] Matthew Deutsch. A branch and bound algorithm for open pit grade control polygon
optimization. Proc. of the 19th APCOM, 2017.
[39] Louis Caccetta and Stephen P Hill. An application of branch and cut to open pit mine
scheduling. Journal of global optimization, 27(2-3):349–365, 2003.
[40] Matthew Deutsch, Eric Gonzalez, and Michael Williams. Using simulation to quantify
uncertainty in ultimate-pit limits and inform infrastructure placement. Mining Engineering,
67(12), 2015.
[41] AD Mwangi, Zh Jianhua, H Gang, RM Kasomo, and MM Innocent. Ultimate pit limit
optimization methods in open pit mines: A review. Journal of Mining Science, 56(4):
588–602, 2020.
[70] DCW Muir. Pseudoflow, new life for Lerchs-Grossmann pit optimisation. Orebody Modelling
and Strategic Mine Planning, AusIMM Spectrum Series, 14, 2007.
[71] Bala G Chandran and Dorit S Hochbaum. A computational study of the pseudoflow and
push-relabel algorithms for the maximum flow problem. Operations research, 57(2):358–376,
2009.
[72] Matthew Deutsch and Clayton V Deutsch. An open source 3D Lerchs-Grossmann pit
optimization algorithm to facilitate uncertainty management. CCG Annual Report, 15, 2013.
[73] Thys B Johnson and William R Sharp. A Three-dimensional dynamic programming method
for optimal ultimate open pit design, volume 7553. Bureau of Mines, US Department of the
Interior, 1971.
[74] Y Zhao and YC Kim. New graph theory algorithm for optimal ultimate pit design.
Transactions-society of mining engineers of aime, pages 1832–1832, 1991.
[75] Yixian Zhao. Algorithms for optimum design and planning of open-pit mines. PhD thesis,
The University of Arizona, 1992.
[76] Ernest Koenigsberg. The optimum contours of an open pit mine: An application of
dynamic programming. 17th Application of Computers and Operations Research in the
Mineral Industry, pages 274–287, 1982.
[77] Shenggui Zhang and AM Starfield. Dynamic programming with colour graphics smoothing
for open-pit design on a personal computer. International Journal of Mining Engineering, 3
(1):27–34, 1985.
[78] F.L. Wilke and E.A. Wright. Ermittlung der günstigsten Endauslegung von
Hartgesteinstagebauen mittels dynamischer Programmierung (determining the optimal
ultimate pit for hard rock open pit mines using dynamic programming). Erzmetall, 37:
138–144, 1984.
[79] E Alaphia Wright. The use of dynamic programming for open pit mine design: some
practical implications. Mining Science and Technology, 4(2):97–104, 1987.
[80] Jack Edmonds and Richard M Karp. Theoretical improvements in algorithmic efficiency for
network flow problems. Journal of the ACM (JACM), 19(2):248–264, 1972.
[81] EA Dinic. Algorithm for solution of a problem of maximum flow in a network with power
estimation. Soviet Math, 11:1277–1280, 1970.
[82] Valerie King, Satish Rao, and Robert Tarjan. A faster deterministic maximum flow
algorithm. Journal of Algorithms, 17(3):447–474, 1994.
[83] Andrew V Goldberg and Satish Rao. Beyond the flow decomposition barrier. Journal of the
ACM (JACM), 45(5):783–797, 1998.
[84] James B Orlin. Max flows in O(nm) time, or better. In Proceedings of the forty-fifth annual
ACM symposium on Theory of computing, pages 765–774, 2013.
[85] Jan van den Brand, Yin-Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol
Saranurak, Aaron Sidford, Zhao Song, and Di Wang. Bipartite matching in nearly-linear
time on moderately dense graphs. In 2020 IEEE 61st Annual Symposium on Foundations of
Computer Science (FOCS), pages 919–930. IEEE, 2020.
[86] Tuncel M Yegulalp and JA Arias. A fast algorithm to solve the ultimate pit limit problem.
In 23rd International Symposium on the Application of Computers and Operations Research
in The Mineral Industries, pages 391–398. AIME Littleton, Co, 1992.
[87] Ravindra K Ahuja and James B Orlin. A fast and simple algorithm for the maximum flow
problem. Operations Research, 37(5):748–759, 1989.
[88] Li Chen, Rasmus Kyng, Yang P Liu, Richard Peng, Maximilian Probst Gutenberg, and
Sushant Sachdeva. Maximum flow and minimum-cost flow in almost-linear time. arXiv
preprint arXiv:2203.00671, 2022.
[89] C Meagher, R Dimitrakopoulos, and D Avis. Optimized open pit mine design, pushbacks
and the gap problem - a review. Journal of Mining Science, 50(3):508–526, 2014.
[90] Roussos Dimitrakopoulos, CT Farrelly, and M Godoy. Moving forward from traditional
optimization: grade uncertainty and risk effects in open-pit design. Mining Technology, 111
(1):82–88, 2002.
[91] R Dimitrakopoulos and S Ramazan. Uncertainty based production scheduling in open pit
mining. SME transactions, 316, 2004.
[92] R Dimitrakopoulos, L Martinez, and S Ramazan. Optimising open pit design with
simulated orebodies and whittle four-x: A maximum upside/minimum downside approach.
Australasian Institute of Mining and Metallurgy Publication Series. Perth, Australia,
Australasian Institute of Mining and Metallurgy, pages 201–206, 2007.
[93] M Godoy and R Dimitrakopoulos. A multi-stage approach to profitable risk management
for strategic planning in open pit mines. In Orebody Modelling and Strategic Mine Planning
– Uncertainty and Risk Management International Symposium 2004. The Australasian
Institute of Mining and Metallurgy, 2004.
[94] Ady AD Van-Du´nem. Open-pit mine production scheduling under grade uncertainty.
Colorado School of Mines, 2016.
[95] Barry King, Marcos Goycoolea, and Alexandra Newman. Optimizing the open
pit-to-underground mining transition. European Journal of Operational Research, 257(1):
297–309, 2017.
[96] Kazuhiro Kawahata. A new algorithm to solve large scale mine production scheduling
problems by using the Lagrangian relaxation method. 2000–2009 Mines Theses &
Dissertations, 2006.
[97] Natashia Boland, Irina Dumitrescu, and Gary Froyland. A multistage stochastic
programming approach to open pit mine production scheduling with uncertain geology.
Optimization online, pages 1–33, 2008.
[98] Marcos Goycoolea, Daniel Espinoza, Eduardo Moreno, and Orlando Rivera. Comparing new
and traditional methodologies for production scheduling in open pit mining. In Proceedings
of APCOM, pages 352–359, 2015.
[99] Beyime Tachefine and Fran¸cois Soumis. Maximal closure on a graph with resource
constraints. Computers & operations research, 24(10):981–990, 1997.
[100] Atsushi Akaike. Strategic planning of long term production schedule using 4D network
relaxation method. PhD thesis, Colorado School of Mines, 1999.
[101] Daniel Bienstock and Mark Zuckerberg. A new LP algorithm for precedence constrained
production scheduling. Optimization Online, pages 1–33, 2009.
[102] Gonzalo Muñoz, Daniel Espinoza, Marcos Goycoolea, Eduardo Moreno, Maurice
Queyranne, and Orlando Rivera Letelier. A study of the bienstock–zuckerberg algorithm:
applications in mining and resource constrained project scheduling. Computational
Optimization and Applications, 69(2):501–534, 2018.
[103] Canberk Aras. A new integer solution algorithm to solve open-pit mine production
scheduling problems. Colorado School of Mines, 2018.
[104] Renaud Chicoisne, Daniel Espinoza, Marcos Goycoolea, Eduardo Moreno, and Enrique
Rubio. A new algorithm for the open-pit mine production scheduling problem. Operations
Research, 60(3):517–528, 2012.
[105] Amina Lamghari, Roussos Dimitrakopoulos, and Jacques A Ferland. A variable
neighbourhood descent algorithm for the open-pit mine production scheduling problem with
metal uncertainty. Journal of the Operational Research Society, 65(9):1305–1314, 2014.
[106] W Brian Lambert, Andrea Brickey, Alexandra M Newman, and Kelly Eurek. Open-pit
block-sequencing formulations: a tutorial. Interfaces, 44(2):127–142, 2014.
[107] Jorge Amaya, Daniel Espinoza, Marcos Goycoolea, Eduardo Moreno, Thomas Prevost, and
Enrique Rubio. A scalable approach to optimal block scheduling. In Proceedings of
APCOM, pages 567–575, 2009.
[108] Jeff Whittle. 5.3 open pit optimization. In Bruce Kennedy, editor, Surface Mining, pages
470–475. Society for Mining, Metallurgy and Exploration, Incorporated, Littleton, 1990.
ISBN 0873351029.
[109] Christopher Wharton and Jeff Whittle. The effect of minimum mining width on NPV. In
Optimizing with Whittle, pages 173–178. Whittle Programming Pty. Ltd Perth, Western
Australia, 1997.
[110] Peter Stone, Gary Froyland, Merab Menabde, Brian Law, Reza Pasyar, and PHL
Monkhouse. Blasor - blended iron ore mine planning optimisation at yandi, western
australia. In Roussos Dimitrakopoulos, editor, Orebody Modelling and Strategic Mine
Planning, pages 285–288, 2007.
[111] M Zhang. An automated heuristic algorithm for practical mining phase design. In 17th
International Symposium on Mine Planning and Equipment Selection, 2008.
[112] M Zhang. Applying simulated annealing to practical mining phase design. 34th Application
of Computers and Operations Research in the Mineral Industry. CIM, Vancouver, pages
266–273, 2009.
[113] Y Pourrahimian, H Askari-Nasab, and DD Tannant. Production scheduling with minimum
mining width constraints using mathematical programming. 18th International Symposium
on Mine Planning and Equipment Selection, 2009.
[114] Christopher Cullenbine, R Kevin Wood, and Alexandra Newman. A sliding time window
heuristic for open pit mine block sequencing. Optimization letters, 5(3):365–377, 2011.
[115] Mohammad Tabesh, Clemens Mieth, and Hooman Askari-Nasab. A multi–step approach to
long–term open–pit production planning. International Journal of Mining and Mineral
Engineering, 5(4):273–298, 2014.
[116] KRJA Systems DBA Maptek. Vulcan 10 help documentation, 2016.
[117] Jean Serra. Image analysis and mathematical morphology. Academic press, April 1982.
[118] Guillermo Juarez, Ricardo Dodds, Adriana Echeverría, Javier Ibanez Guzman, Matías
Recabarren, Javier Ronda, and E Vila-Echague. Open pit strategic mine planning with
automatic phase generation. In Orebody Modelling and Strategic Mine Planning
Symposium, Proceedings, AusIMM, Perth (WA), pages 24–26, 2014.
[119] Xiaoyu Bai, Denis Marcotte, Michel Gamache, D Gregory, and A Lapworth. Automatic
generation of feasible mining pushbacks for open pit strategic planning. Journal of the
Southern African Institute of Mining and Metallurgy, 118(5):514–530, 2018.
[120] Iain Farmer and Roussos Dimitrakopoulos. Schedule-based pushback design within the
stochastic optimisation framework. International Journal of Mining, Reclamation and
Environment, 32(5):327–340, 2018.
[121] Matthew Deutsch. Open-pit mine optimization with maximum satisfiability. Mining,
Metallurgy & Exploration, 36(4):757–764, 2019.
APPENDIX A
THE ULTIMATE PIT PROBLEM WITH MINIMUM MINING WIDTH CONSTRAINTS IS
NP-COMPLETE
Efficient algorithms exist to solve many different computational problems such as finding the
shortest path through a graph, sorting large arrays, and solving the ultimate pit problem. In
Section 2.2.3 a straightforward means by which the ultimate pit problem can be transformed into
a max-flow / min-cut problem was described. This transformation can then be used with a wide
range of algorithms, such as the pseudoflow algorithm, to obtain solutions quickly. These
algorithms can obtain the solution in a number of steps which can be expressed as a polynomial
in terms of the size of the input, and are therefore reasonably fast even as the problem size
increases. Problems of this sort are said to be in P, and there exist algorithms to solve them with
a deterministic Turing machine in polynomial time.
It would be convenient if the ultimate pit problem with minimum mining width constraints
could also be solved in polynomial time; however, the reduction described in this appendix shows
that this is unlikely. Following this result it can only be declared that there is currently no known
polynomial time algorithm for this problem, which is a useful theoretical result with some
practical ramifications. Specifically, this result places the ultimate pit problem with minimum
mining width constraints among the NP-complete problems, which can only be solved by a
non-deterministic Turing machine in polynomial time. It is not yet known if there is an algorithm
that could solve this class of problems in deterministic polynomial time. This is a long-standing
open problem commonly referred to as the P vs. NP problem, which is not considered here.
It is now known that the heuristic methods described in Chapter 4 are not going to be
obviated by a clever reformulation of the problem into a pre-existing graph problem or something
similar. If a clever trick existed to solve our problem in polynomial time it would also be able to
solve all these other, much more heavily researched, problems as well.
The argument presented herein centers around showing that it is possible to transform an
arbitrary 3-SAT problem into the ultimate pit problem with minimum mining widths. With this
polynomial time reduction one can confidently say that if there were a very fast algorithm for the
ultimate pit problem with minimum mining widths there would also be a very fast algorithm
for 3-SAT. One could take a 3-SAT problem, transform it using this process, solve it efficiently,
and report back the answer.
The reduction described in this appendix modifies the reduction given by Fowler, Paterson,
and Tanimoto in 1981 for the planar geometric covering problem where the goal is to determine
whether some set of geometric objects can completely cover another set of points in the plane
[143]. The primary difference between Fowler et al.’s reduction and this one is that in Fowler
et al. the goal was to limit the number of geometric shapes used, which roughly correspond to
mining width sets.
no restriction on the number of mining width sets. This reduction retains the character of their
idea by embedding the entire problem in a plane of negative valued blocks but must contend with
additional complications.
Section A.1 describes the 3-SAT problem which is a special case of the well known
satisfiability problem. Section A.2 then strips away all of the nonvital elements of the ultimate pit
problem with minimum mining width constraints. Specifically the precedence constraints are
completely removed and the problem is transformed into a decision problem instead of an
optimization problem. Finally, Section A.3 provides the polynomial time reduction from 3-SAT
which yields the desired result.
A.1 3-SAT
3-SAT is a special case of the Boolean satisfiability problem, or SAT, which was the original
problem shown to be NP-complete [144]. Satisfiability is the problem of determining whether
there is an assignment of values (true or false) to a set of Boolean variables which satisfies all the
clauses of a particular formula in conjunctive normal form. Formulas in conjunctive normal form
are expressed as a conjunction of disjunctions, or an ‘and’ of ‘or’s. In the 3-SAT special case these
disjunctions consist of exactly three distinct literals. SAT formulas, with disjunctions of any
length, can be transformed easily into 3-SAT instances although the details are not relevant here.
An example 3-SAT formula, φ, follows in Equation A.1. Each Boolean variable is indexed
from the set of n Boolean variables X, as x_1, x_2, ..., x_n. In this small example n is equal to five.
Each clause, indexed from the set of m clauses C, as C_1, C_2, ..., C_m, is a disjunction of three
literals formed from those variables. This example has eight clauses, the first of which is given as
C_1 = (x_1 ∨ x_2 ∨ ¬x_3). This implies that in order for clause one to be satisfied at least one of the
following is true: x_1 is assigned true, x_2 is assigned true, or x_3 is assigned false. The symbol ∨
stands for ‘or’ and the symbol ¬ is the negation operator. The remaining seven clauses are joined
with C_1 with the ∧ operator, which means ‘and.’

φ = (x_1 ∨ x_2 ∨ ¬x_3) ∧ (¬x_1 ∨ ¬x_2 ∨ ¬x_4) ∧
    (x_1 ∨ ¬x_2 ∨ ¬x_5) ∧ (¬x_1 ∨ x_3 ∨ ¬x_4) ∧
    (x_1 ∨ ¬x_3 ∨ x_5) ∧ (x_1 ∨ ¬x_4 ∨ x_5) ∧
    (x_2 ∨ x_4 ∨ x_5) ∧ (¬x_3 ∨ x_4 ∨ ¬x_5)                (A.1)

The 3-SAT instance in Equation A.1 is satisfiable. For example, assigning the following values
to each variable, where 1 is true and 0 is false, satisfies all eight clauses: x_1 ← 1, x_2 ← 0,
x_3 ← 1, x_4 ← 1, x_5 ← 0.
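Verifying such an assignment is mechanical; the snippet below encodes the clauses of Equation A.1 as signed integers (a positive literal i denotes x_i and a negative literal denotes ¬x_i) and confirms the assignment given above.

```python
# Clauses of Equation A.1; literal i means x_i, literal -i means NOT x_i.
clauses = [
    (1, 2, -3), (-1, -2, -4),
    (1, -2, -5), (-1, 3, -4),
    (1, -3, 5), (1, -4, 5),
    (2, 4, 5), (-3, 4, -5),
]

def satisfies(assignment, clauses):
    """assignment maps each variable index to True or False; every clause
    must contain at least one literal that evaluates to true."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

# The assignment from the text: x1=1, x2=0, x3=1, x4=1, x5=0.
print(satisfies({1: True, 2: False, 3: True, 4: True, 5: False}, clauses))  # True
```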
A great many problems can be specified with this seemingly restrictive set of rules including
the well known vertex cover problem and graph coloring problem. It is straightforward to
transform many problems into 3-SAT problems so a fast 3-SAT algorithm is highly sought after.
However, being able to transform arbitrary problems into 3-SAT, or more general satisfiability
problems, clearly does not mean that the input problem is difficult. Instead, if a problem is meant
to be shown to be NP-complete it must be shown that any arbitrary 3-SAT problem can be
turned into an instance of that problem in polynomial time.
3-SAT is not an optimization problem and does not aim to maximize or minimize some
objective function, although such variants do exist. Instead 3-SAT is a decision problem which
only results in a yes or a no; the formula is satisfiable or not satisfiable. It is straightforward to
transform an optimization problem into a series of decision problems. The general idea is to first
solve the problem without the objective, constructing a feasible solution; then, if a solution exists,
proceed by introducing clauses which force the new objective value to exceed the previous
objective value by some amount (if maximizing). This constructs a new decision problem which
asks if a better result exists. If there is no satisfying solution then an upper bound on the
objective has been determined which can be used to refine the working decision problem until the
optimal solution is found.
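The refinement loop described above tightens the bound incrementally; a common variant, sketched below, instead binary searches on the bound so that only logarithmically many calls to the decision oracle are needed when the objective is integral.

```python
def optimize_via_decisions(has_solution_of_value, lo, hi):
    """Find the best achievable integral objective value using only a
    decision oracle. has_solution_of_value(V) answers 'does a feasible
    solution with value >= V exist?'; lo is known achievable, hi is an
    upper bound."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if has_solution_of_value(mid):
            lo = mid          # mid is achievable; look higher
        else:
            hi = mid - 1      # mid is unachievable; it bounds the optimum
    return lo

# Toy oracle where the true optimum is 28 (cf. the bound in Figure A.1):
print(optimize_via_decisions(lambda v: v <= 28, 0, 100))  # 28
```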
A.2 The Simplified Operational Ultimate Pit Decision Problem in the Plane
It is sufficient to show that a simplified version of the ultimate pit problem with minimum
mining width constraints is NP-complete because any algorithm capable of solving the
unsimplified version would also have to solve the simplified version. Therefore it is possible to
completely ignore precedence constraints and turn the problem into a two dimensional planar
problem of identifying mineable groups of blocks. The valid mining width sets are restricted to
3 × 3 sets of blocks² and the problem is formulated as a decision problem.
Figure A.1 Example simplified operational ultimate pit decision problem in the plane. Numbers
in blocks are the EBV. If the requested lower bound is less than 28 the set of shaded blocks on
the right is a valid selection corresponding to a ‘yes’ answer to the decision problem.
Given a 2D planar cross section through a regular block model of size n_x by n_y with integral
economic block values for each block as v_{x,y}, and a single scalar lower bound on total value V,
we seek an assignment X_{x,y} of either 1 or 0 to each block such that the total value exceeds the
lower bound (Equation A.2) and each mined block is a part of at least one 3 × 3 square of mined
blocks. A small 9 × 8 example is shown in Figure A.1.

∑_{x=0}^{n_x−1} ∑_{y=0}^{n_y−1} v_{x,y} X_{x,y} ≥ V                (A.2)
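Note that a proposed ‘yes’ certificate for this decision problem can be verified in polynomial time, which places the problem in NP; a sketch of such a verifier follows, with v and X as 2D arrays.

```python
import numpy as np

def certificate_is_valid(v, X, V):
    """Check a proposed solution to the simplified decision problem:
    every mined block must lie inside at least one fully mined 3x3
    square, and total value must meet the bound (Equation A.2).

    v: integer array of economic block values, shape (nx, ny)
    X: boolean array of mined blocks, same shape
    """
    nx, ny = v.shape
    covered = np.zeros_like(X, dtype=bool)
    for x in range(nx - 2):
        for y in range(ny - 2):
            if X[x:x + 3, y:y + 3].all():
                covered[x:x + 3, y:y + 3] = True
    if (X & ~covered).any():
        return False          # a mined block outside every mined 3x3 square
    return int((v * X).sum()) >= V
```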
²It is possible that the 2 × 2 and 2 × 3 cases are also NP-complete; however, that is not shown in this appendix.
A.3 The Ultimate Pit Problem with Minimum Mining Width Constraints is NP-Complete

Theorem 1. The Ultimate Pit Problem with Minimum Mining Width Constraints is
NP-complete.
Proof. A polynomial-time reduction of 3-SAT to the simplified ultimate pit problem with
minimum mining width constraints is given. A 3-SAT formula with N variables and M clauses is
encoded into a 2D grid of size O(M) × O(N) with mining width sets of size 3 × 3. Additionally, a
value V is provided such that this value can be achieved if and only if it is possible to satisfy the
input formula.
At a high level the reduction involves representing each of the input variables as a ‘wire’
formed from large positive valued blocks embedded in a predominantly negative valued block
model slice. Each wire is constructed as a loop consisting of an even number of high value blocks
which have exactly two possible maximum valued solutions. The two parities correspond to
assigning a value of true or false to the input variable in the satisfiability formula.
Each wire is then carefully attached to specific clause points following the input 3-SAT
formula. The clause points are constructed such that there are exactly seven maximum valued
solutions corresponding to one, or more, of the three Boolean variables being true. That is, only
the solution where all three variables are false has a lower value than the seven others.
For clauses where a particular variable appears in its negated form it is necessary to flip the
parity of the wire before it enters the clause point. It is also necessary to cross wires over one
another in the plane strictly maintaining the parity of each wire. The following sections describe
each of these individual components before showing how to connect them all together and
complete the construction.
In all the following examples and figures the numbers within blocks are the economic block
value. The hatched blocks have a very large negative value, for example -9,999, which completely
removes the possibility of mining those blocks. They could also be removed from the problem.
The dark shaded blocks are the mined blocks within each solution.
A.3.1 Wires
In Figure A.2 we see an example wire. It is straightforward to see that each of the two
maximum valued solutions has the same value (in this case 344), and vitally these are the only
two solutions which have a value of 344. Any other configuration of mined blocks would
necessarily mine additional negative valued blocks and reduce the value, so if a solution is sought
with a value of at least 344 the answer will be ‘yes’ with one of these two outputs. It is possible to
extend the wire either horizontally or vertically by inserting additional rows and columns
provided they follow the pattern.
Figure A.2 The wire concept in the NP-completeness proof. Each of the two solutions (left and
right) has the same value and corresponds to the two possible assignments (true and false).
A useful schematic representation for this wire is given in Figure A.3. In this representation
the high value blocks are represented as nodes. Arcs are present between nodes if it is possible to
place a 3 × 3 mining width and mine both nodes, and in these examples the bolded arcs correspond
to mining width sets that are mined in a particular solution.
Figure A.3 The wire in Figure A.2 as a schematic instead of explicit block values. Bolded arcs
correspond to mined mining width sets in the equivalent solutions.