Colorado School of Mines
[55][56]. These programs use empirical relationships to determine a pillar stability factor
which indicates the likelihood of unstable failure. The sophistication of these programs lies
in the extensive collection of case histories of pillar failures in U.S. underground coal mines.
The finite difference programs FLAC2D and FLAC3D are highly versatile in the definition
of geometry, boundary conditions, and material properties [41, 42]. Each program includes
numerous material models suitable for simulating a variety of geomaterials. Notable for rock
is the strain softening plasticity model, which has been used to simulate yielding coal pillars
in longwall mining [4]. The program has also been used to model stable and unstable failure
modes in laboratory and in situ conditions [26].
2.3.2 Simulating Unstable Failure in Underground Mining
Specific to the problem of failure mode in underground mining, several sources can be
cited that deal with unstable failure. In 1983, Zubelewicz and Mroz used a finite element
model to study the violent failure of rock in various underground situations [95]. First, the
static equilibrium is achieved, then the full equations of motion are solved explicitly after a
disturbance is applied to the system. Kinetic energy is monitored, and if the energy increases
drastically, the failure is considered to be unstable. Bardet created a finite element model to
investigate surface buckling as a trigger for unstable failure [6]. Citing bifurcation theory, Bardet claimed that instability can be detected when the stiffness matrix of
the finite element grid becomes singular. The moment of instability was determined by
finding the time step when eigenvalues of the stiffness matrix become negative. Muller
followed up this study in 1991 with a comparison of explicit and implicit numerical methods
in modeling unstable failure [59]. The author performed simulations in ANSYS, an implicit
finite element program, and FLAC, an explicit finite difference program. Muller concluded
that ANSYS was unable to represent the instability, but FLAC was successful by responding
to instabilities with increases in local unbalanced force [2]. In their 1996 publication, Oelfke
et al. presented a combined DEM-FEM code applicable to underground mine deformability
[60]. The authors introduced the concept of mine instability as a function of local mine
stiffness and noted that the program could detect unstable failure as a divergence of the
solution. Another group of researchers investigated the effect of a fractured rock mass as a
loading system [14]. In this study, Chen et al. used a finite element model, called RFPA2D,
to study the behavior of microseismicity during unstable failure [19]. They loaded a double
rock sample in displacement control and monitored acoustic emission events. The authors
claim that unstable failure can be detected as sudden changes in the microseismic rate, and
demonstrated this with a realistic loading system with finite strength. In a 2009 study, Tan
et al. suggested that unstable failure of pillars could be modeled using a discrete element
model composed of two dimensional circular elements [81] These researchers used particle
velocity to describe the intensity of failure. In a study using the program FLAC3D, Jiang
et al. defined a term called the local energy release rate (LERR) that they claim can be
used to describe the intensity of failure [46]. The LERR is the difference in stored strain
energy in an element before and after failure. The authors compared LERR computed from
simulations to known cases of unstable failure and showed that comparisons of magnitude
of LERR were the same as the comparisons of intensity for the observed cases. However, they stated that it is not possible to determine at what value of LERR an unstable failure occurs.
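As a minimal illustration of the LERR bookkeeping described above, the following sketch assumes a uniaxial linear-elastic strain energy density U = σ²/2E; this is a hypothetical illustration, not the authors' FLAC3D implementation:

```python
# Hypothetical sketch of the local energy release rate (LERR) described
# by Jiang et al.: the drop in stored elastic strain energy in an element
# from just before to just after failure.

def elastic_strain_energy(stress, modulus):
    """Strain energy density U = sigma^2 / (2E) for uniaxial stress (J/m^3)."""
    return stress ** 2 / (2.0 * modulus)

def local_energy_release_rate(stress_before, stress_after, modulus):
    """LERR = U_before - U_after for a single element."""
    return elastic_strain_energy(stress_before, modulus) - \
           elastic_strain_energy(stress_after, modulus)

# Example: an element at 30 MPa drops to 5 MPa on failure, E = 3 GPa.
lerr = local_energy_release_rate(30e6, 5e6, 3e9)
```

The per-element values would then be compared across simulations, since, as the authors note, no absolute LERR threshold for instability is known.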
A publication by Larson and Whyatt reviewed available stress analysis tools for under-
ground coal mining [53]. They compared the use of three numerical models in simulating
a deep western coal mine with strong, stiff overlying strata: ALPS, an empirical model; LaModel, a boundary element method program; and FLAC2D, a finite difference method (FDM) program. In their study, they showed that FLAC2D was able to model the sudden collapse of the mine entry due to failure in the roof and floor, while LaModel and ALPS could not due to their assumption of elastic overlying strata. Following this,
Esterhuizen et al. presented a method to determine the ground response curve in FLAC
models, and showed that the span to depth ratio has a notable effect on the ground response
curve [23].
In some cases, strong consideration is given to how the structure of the rock mass is
an integral part of its material behavior. Typically, the addition of joints, joint sets, and
bedding planes brings an element of realism to the model. The computer programs UDEC
and 3DEC allow the user to insert joints and joint sets with a variety of joint constitutive
behaviors in 2D or 3D geometries, respectively [45]. Barton (1995) presented an evaluation of the
influence of joint properties on rock mass models that contain systems of joints [8].
2.3.3 Discrete Element Modeling Techniques and Applications
One other popular method of modeling rock is the discrete element method or DEM [18].
The two-dimensional method uses a collection of discs to simulate a granular material by
detecting contact between the discs and calculating subsequent motion due to contact forces.
Spheres can be substituted for discs to create a three-dimensional model, and if the elements
are bonded together a solid material can be simulated [68]. DEM is powerful as a numerical
modeling method because the user does not input a constitutive material law. Rather, the
user specifies a set of micro-parameters that define stiffness and strength of the discs and
bonds.
Calibration of material behavior by selecting the appropriate micromechanical param-
eters is an area that has widely been explored, but still needs much improvement. When
calibrating a DEM model it is advantageous to understand the effects of changing micro-
parameters on macro behavior. Potyondy and Cundall provided the initial guidance for
choosing micro-parameters in their seminal paper [68]. Since then, several researchers have
published papers on material calibration and the sensitivity of macroscopic behavior to various
micro-parameters. Kulatilake et al. demonstrated an iterative calibration scheme for rock
behavior up to the strength at various levels of confinement [52]. Initial stiffness values were
calculated from equations provided by the PFC3D manual and adjusted after testing the
sample based on results for overall sample strength, elastic modulus, and Poisson’s ratio
[44].
A pervasive drawback of modeling rock failure with DEM is the underestimated angle
of internal friction and the low compressive to tensile strength ratio. Fakhimi attempted
to calibrate these behaviors by using a technique where the particle assembly is slightly
overlapped at all contacts and then normal forces are zeroed [24]. It is thought that the
increased contact frictional force in the absence of the normal contact force would increase
overall internal friction angle. While the internal friction angle and compressive to tensile
strength ratio both improved, the modified DEM still yielded unrealistic values. In 2007,
Fakhimi and Villegas published a dimensional analysis of DEM micro-parameters that rein-
forced the importance of the sample genesis pressure to the material failure envelope [25].
Koyama and Jing showed the effect of model scale and particle size on the macro behavior of
the sample and outlined a method to determine the representative elementary volume for a
given set of micro-parameters [51]. Cho et al. introduced the idea that by clumping particles
together into irregular shapes, one can improve the simulation of failure behavior in terms of the failure envelope and compressive to tensile strength ratio [15]. Yoon suggested that by selective
design of experiment, Plackett-Burman in this case, one can optimize the micro-parameters
using sensitivity analysis [91]. This method results in reliable parameter selection for rock
materials within ranges not applicable to the study of coal and only up to the point of failure.
Hsieh et al. demonstrated that complex arrangements of various types of particles with particularly defined contact parameters can affect deformability and strength behavior [34]. Wang and Tonon produced a sensitivity analysis that developed equations
relating micro-parameters to sample deformability and strength [87]. Schöpfer et al. showed
the effect of sample porosity and initial crack density on material behavior up to peak
strength [78]. No researchers had dealt with the calibration of post-peak behavior until 2011, when Garvey and Ozbay offered a method to calibrate deformability,
strength, and post-peak modulus [27].
Despite difficulties in calibration, it has been demonstrated in numerous papers that
with proper micro-parameter calibration, realistic rock properties emerge [31, 58]. Some
of these properties are elasticity, fracturing, anisotropy with accumulated damage, dilation,
strength increase with increased confinement, post-peak softening and hysteresis. Modeling
rock behavior with DEM has three limitations worth noting. The measured Mohr-Coulomb
friction angle is roughly half of its expected value, the Mohr-Coulomb failure envelope is
linear, and the compressive to tensile strength ratio of the material is lower than the real
rock [21, 67, 68]. It has been proposed that by introducing non-circular elements, the effects
of these limitations are greatly reduced [15].
Many strategies are available to tailor a DEM model to most efficiently achieve its goal.
Within bonded particle models, user defined contact laws have a notable effect on the overall
behavior of the model. Resulting macro behaviors include time-dependent stress corrosion
and sliding along pre-existing joints [58, 66]. Also, heterogeneity in rock can be modeled by
enclosing groups of similarly strong or stiff particles with smooth contacts to create a larger
grain [69]. The grain-based model is well suited for modeling cases that involve spalling or
for instances where inter-granular and intra-granular cracking are pertinent features of the
failure process. The method of modeling a rock mass by embedding a system of joints can be
achieved in DEM by replacing bonds with smooth contacts along predetermined joint planes.
The synthetic rock mass (SRM) approach was developed by Mas Ivars et al. [58]. Pierce et al. showed that it satisfactorily predicted rock mass brittleness by validating fracturing in a block cave mine case study [65]. The SRM approach has also been used to address
scaling issues associated with using DEM to model rock masses. When DEM material is
calibrated to intact rock properties, the well-known effect of strength degradation due to
increased scale does not occur. Deisman et al. and Esmaieli et al. showed that an SRM
model can simulate the scale dependency of macroscopic behaviors in a coal bed methane
reservoir and an underground metals mine in Canada [20, 22].
DEM models are well suited to model micro-mechanical behavior of rock such as notching
[68]. However, DEM is notorious for long run times, so it is impractical to construct large models out of relatively small particles. One reason is that equilibrating such a large system
CHAPTER 3
EVALUATION OF TWO DEM MODELS FOR SIMULATING UNSTABLE FAILURE IN
COMPRESSION
In order to simulate unstable failure using a numerical model, the post-peak behavior of
the simulated material must have a softening characteristic. In this chapter, two different
discrete element models are compared to determine which is better suited for simulating
unstable failure in compression. The two models are described in detail then subjected to
a series of compression tests. In each test, the same geometry of specimen is brought to
failure under compressive loading. The suite of tests was chosen to investigate the effect of
key loading conditions imposed on the DEM by in situ models of later chapters.
Four different test procedures are used to investigate the behavior of each DEM. The
first test is used to establish the so-called characteristic material behavior, which refers to
the rock specimen deforming incrementally under gradually increased loading. The uniaxial
compressive strength test, UCS, is used to establish characteristic material behavior for the
purpose of this research. The other three tests that are used to investigate the effect of three
separate loading conditions include: triaxial compressive strength test (TCS), elastic platen
compression test (EPC), and loading rate compression test (LRC).
The UCS and TCS model tests use a constant velocity boundary condition to load the
specimen where no strain energy is available from the loading system to affect the specimen’s
failure mode. To investigate the performance of DEM and also the failure modes of rocks,
elastic end platens are placed on top and bottom of the specimen. In this series, the platens
can store strain energy, which can be released during specimen failure in a gradual or sudden
manner depending on the rock’s failure mode. In a specific test series called elastic platen
compression, EPC, the platen stiffness is gradually reduced to observe the failure mode
changing from being stable to unstable. Lastly, a test is implemented to investigate the
effect of loading rate on the characteristic material behavior of the DEM code. A series of
UCS tests is conducted in which the loading rate is changed to a different value for each
test. By comparing the resulting stress-strain curves, the effect of loading rate on elastic
behavior, strength and post-peak behavior is analyzed.
3.1 Particle Flow Code in Two Dimensions (PFC2D)
In this study, the DEM simulations are performed using the commercially available discrete element code PFC2D [43]. The code allows for model customization via
an embedded programming language called FISH. PFC2D comes with a collection of FISH
functions that allow the user to accomplish complicated tasks with a moderate amount of
background knowledge in discrete element modeling. The authors of PFC2D call the preinstalled collection of FISH functions the Fishtank. The Fishtank contains a series of functions
that generates a discrete element model of a bonded rock-like material and performs tests
to determine material properties. Templates are provided in the Fishtank that link together
steps in a study such as material generation, testing, and data display. Templates can be run as examples, or custom inputs can be supplied to perform user-specific tasks. Fishtank version 1-115 was used in this research. It is often necessary
to modify the provided test procedures and functions for user specific purposes. Template
files used in material generation, testing or function definition will be referenced along with
customized inputs. Files containing modified test procedures or custom functions will be
provided in the appendices.
3.2 Material Generation and Calibration
This study utilizes the material generation procedure described in the PFC2D manual
within the section entitled PFC Fishtank. Circular elements, or particles, are created within
a vessel and element radii are varied until a target isotropic stress is achieved. Then “floating” particles (particles with a number of contacts below a predefined threshold) are deleted and the elements are bonded using either parallel or contact bonds. Here, periodic boundaries are
used to create a square vessel, resulting in square blocks of material that can be connected
seamlessly to create a larger assembly. This so-called pbrick method is described in detail in the
PFC2D manual under the topic of adaptive continuum/discontinuum (AC/DC) logic. For
material generation and specimen assembly, the Fishtank template acdc-2d is used. The file
acdc-pbr.dvr is used to generate the pbrick, and the file acdc-bv.dvr is used to assemble
pbricks into the specimen.
The behavior of the generated material is largely determined by the behavior of particle contacts, called the contact model, and the type of bond used to attach the particles to one
another. The combination of contact model and bonding scheme make up the constitutive
model of the discrete element model. Both the contact model and the bond are defined
by a set of microparameters governing stiffness and strength properties. For this study an
appropriate constitutive model must be chosen and calibrated for the purpose of simulating
unstable failure. It is necessary that the constitutive model is capable of simulating a soft-
ening post-peak characteristic. Two constitutive models available to PFC2D users that are
capable of simulating a softening post-peak characteristic are the parallel bonded particle
model and the displacement softening model.
Figure 3.1 shows the components of a discrete element constitutive model. The contact model can contain a bond in the form of a contact bond, and a parallel bond can be
added. Elastic contact behavior in both constitutive models described below is the same.
Figure 3.2 shows a schematic of the components of stiffness between two particles that are
in contact. Each particle is assigned a value for normal stiffness, k , and shear stiffness, k .
n s
Force between the particles is calculated using Hooke’s law, Equation 3.1, where i represents
values associated with either the normal or shear direction. The stiffness coefficient,K , is
i
calculated by assuming the element stiffnesses to act in series. Therefore, contact stiffness is
calculated using Equation 3.2, where A and B represent the particles involved in the contact.
F = K dx (3.1)
i i i
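The contact force calculation can be sketched as follows. The series combination k_A·k_B/(k_A + k_B) is the standard form for stiffnesses in series, shown here as an illustration of the calculation the text describes (not Itasca's implementation):

```python
# Sketch of the linear contact force calculation (Equation 3.1), assuming
# the two particles' stiffnesses act in series as stated in the text.

def contact_stiffness(k_a, k_b):
    """Effective contact stiffness for two particle stiffnesses in series."""
    return k_a * k_b / (k_a + k_b)

def contact_force(k_a, k_b, dx):
    """Hooke's law F_i = K_i * dx_i in the normal or shear direction."""
    return contact_stiffness(k_a, k_b) * dx

# Equal particle stiffnesses give half the single-particle stiffness:
f = contact_force(1e9, 1e9, 1e-5)  # K = 0.5e9 N/m, dx = 10 micrometers
```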
also fail in direct tension or due to bending. After a bond breaks, newly detected contacts
obey the laws of the contact model.
Failure in the parallel bond is dependent upon the geometry of the contacting particles
and the cross sectional area of the bond. According to beam theory, the maximum shear and
tensile stresses that can exist in the bond are Equation 3.3 and Equation 3.4 respectively.
τ_max = F_s / A    (3.3)

σ_max = F_n / A − |M| R / I    (3.4)

F_s and F_n are the shear and normal forces on the bond, M is the moment, A = 2R is the cross-sectional area of the bond with unit thickness, and I is the bond moment of inertia.
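The bond stress check can be sketched as below. The text gives A = 2R for unit thickness; the moment of inertia I = (2R)³/12 for a rectangular section is an assumption consistent with beam theory, and the sign convention follows the equations as reproduced here:

```python
# Sketch of the parallel-bond maximum stresses (Equations 3.3 and 3.4)
# for a 2D bond of radius R with unit thickness.

def bond_stresses(f_n, f_s, moment, radius):
    area = 2.0 * radius                       # A = 2R, unit thickness
    inertia = (2.0 * radius) ** 3 / 12.0      # assumed rectangular section
    tau_max = f_s / area                                      # Eq. 3.3
    sigma_max = f_n / area - abs(moment) * radius / inertia   # Eq. 3.4
    return tau_max, sigma_max

tau, sigma = bond_stresses(f_n=1000.0, f_s=400.0, moment=2.0, radius=0.01)
```

Failure would be declared when these stresses exceed the assigned bond strengths.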
In this study, success of the calibration of the BPM relied on achieving a softening post-
peak characteristic. Until recently, no information existed in the literature on calibration of
post-peak behavior in bonded DEM models. Garvey & Ozbay [27] introduced an iterative
calibration method that uses an elitist-selection, genetic algorithm and an unconfined com-
pression test to discover a set of microparameters that achieves a target stress-strain behavior.
For this research, the method was modified to utilize a two dimensional specimen assembled
using the pbrick method. Table 3.1 shows the resulting microparameters necessary to re-
produce the BPM material used in this study. Parameters not listed are set to the default
values, which can be found in the PFC manual.
3.2.2 Displacement Softening Model
The displacement softening model (DSM) in PFC2D is a constitutive model composed
of a bonded contact model without a parallel bond. Figure 3.3 shows the force versus
displacement curve for the DSM. The DSM behaves as described above in the elastic region.
When the initial contact bond strength, F_c^n, is reached, the contact behavior begins to follow a linear strength-softening curve. If unloaded during softening, the bond can rebound along the elastic path. When the user-defined plastic displacement limit, U_pmax, is reached, the
bond is inactive.

Figure 3.3: Displacement-softening constitutive model behavior
The contact yields in tension when the resultant contact force, Equation 3.5, is greater than the contact strength, Equation 3.6. Here, the plastic displacement is in the direction of the resultant force.

F = √(F_n^2 + F_s^2)    (3.5)

F_cmax = (1 − 2α/π) F_c^n + (2α/π) F_c^s    (3.6)

If the contact is in compression, failure can occur due to shear. The strength of the contact, Equation 3.7, is dependent upon the coefficient of friction and the normal force, and the plastic displacement is in the shear direction.

F_cmax = µ|F_n| + F_c^s    (3.7)

During yield, the elastic displacement increments in the normal and shear directions are a function of only the portion of the resultant force up to the strength and the contact stiffness, Equation 3.8, where k = n, s, and the plastic displacement increments are given by Equation 3.9.

F^k = K^k U_e^k    (3.8)

U_p^k = U^k − U_e^k    (3.9)
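The two yield checks can be sketched as follows. Variable names mirror the symbols in Equations 3.5 through 3.7; the precise definition of α is not reproduced in this excerpt, so it is passed in as a parameter. This is an illustration, not the PFC2D implementation:

```python
import math

# Illustrative yield checks for the displacement softening model.

def tensile_yield(f_n, f_s, fc_n, fc_s, alpha):
    """Tension: yield when the resultant force (Eq. 3.5) exceeds the
    strength interpolated between the bond strengths (Eq. 3.6)."""
    resultant = math.hypot(f_n, f_s)                        # Eq. 3.5
    strength = (1.0 - 2.0 * alpha / math.pi) * fc_n \
               + (2.0 * alpha / math.pi) * fc_s             # Eq. 3.6
    return resultant > strength

def shear_yield(f_n, f_s, fc_s, mu):
    """Compression: yield when the shear force exceeds the frictional
    strength mu*|F_n| + F_c^s (Eq. 3.7)."""
    return f_s > mu * abs(f_n) + fc_s
```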
The calibration of the DSM can be performed iteratively by the user in a short period of time because the macro behavior responds intuitively to changes in microparameters. Material generation is performed once. The resulting particle assembly is used to test
various combinations of microparameters. The UCS test is used to determine the stress-
strain behavior. First, desired elastic behavior is achieved by varying contact stiffness. Then
the plastic displacement limit is varied to change the post-peak softening behavior of the
specimen. By increasing U_pmax, the post-peak modulus decreases and the strength of the
specimen increases. To achieve an appropriate UCS and post-peak modulus, the tensile and
shear strengths of the contact are decreased. Some iteration is necessary to achieve the de-
sired behavior. The following section presents results for UCS tests on the calibrated BPM
and DSM.
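One step of this iterative tuning can be sketched as a one-dimensional secant iteration on a single microparameter; `run_test` is a hypothetical stand-in for a PFC2D UCS simulation, and the toy response function below merely mimics the inverse trend between U_pmax and the post-peak modulus described above:

```python
# Schematic of one calibration step: tune a micro-parameter until a
# simulated macro property matches a target value.

def calibrate_parameter(run_test, x0, x1, target, tol=1e-3, max_iter=50):
    """Secant iteration on run_test(x) - target; assumes a monotone response."""
    f0, f1 = run_test(x0) - target, run_test(x1) - target
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, run_test(x1) - target
    return x1

# Toy response: post-peak modulus magnitude falls as U_pmax grows
# (hypothetical, consistent with the trend described in the text).
toy = lambda u_pmax: 10.0 / u_pmax     # GPa
u = calibrate_parameter(toy, 1.0, 2.0, target=4.0)
```

In the real procedure each knob (contact stiffness, U_pmax, bond strengths) is tuned against its corresponding macro property, with some iteration between them.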
3.3 Unconfined Compressive Strength Test (UCS)
The DEM material properties necessary for study of unstable failure in compression can
be found using a simulation of compressive strength tests. The UCS is used in this research
to calibrate the DEM by approximating a target set of characteristic material properties.
The target characteristic behavior is representative of an in situ western United States coal.
The UCS test specimen is a one meter wide by two meter high assembly of two blocks of
either DSM or BPM material. The specimen is loaded past the point of failure until an adequate assessment of post-peak behavior can be determined. The Fishtank template, direct
tension test with reversed platen displacement is used to perform this test. It is necessary to
use “grip particles” in order to compress the specimen due to the roughness of the specimen
ends. Table 3.3 shows UCS test template filenames and necessary inputs. For the DSM, all
contacts are assigned the displacement softening contact model with microproperties listed
in Table 3.2 after the specimen is restored. Stress in the sample is calculated by summing
particle forces in each grip respectively, dividing by the width of the specimen, and then
averaging the two grip stresses. Strain is calculated by determining the change in specimen
height using grip displacement. The test is terminated in the post-peak region when the
measured axial stress in the sample is lower than half of the UCS. After half of the peak
stress has been dissipated due to failure, a sufficient range of post-peak softening can be
observed in order to quantify the material post-peak modulus.
Table 3.3: UCS test parameters
Figure 3.4 shows the stress-strain curves from UCS testing on the BPM and DSM. Ta-
ble 3.4 lists the elastic properties and post-peak modulus of each model and for a target
material. The target material is set to approximate an in situ, western United States coal.
Both curves reflect a post-peak softening characteristic that is approximately equal in mag-
nitude to the pre-peak modulus. Each material is within an acceptable range from the
target material properties. While it is beyond the scope of this thesis, improvements in
the calibration of both the BPM and DSM warrant further investigation. This would in-
volve an in-depth look at the genetic algorithm used to calibrate the BPM or a comprehensive sensitivity analysis of the DSM microparameters, respectively.
Table 3.4: DEM and target characteristic material properties
Figure 3.4: UCS stress-strain curves for the BPM and DSM
3.4 Biaxial Compressive Strength Test (BCS)
Consider an axially loaded rock specimen, shown in Figure 3.5, under a constant confinement stress σ_3 and deviatoric stress σ_1. According to the Coulomb failure criterion, failure will occur along a plane oriented at an angle β. In this failure criterion, the strength is linearly proportional to the normal stress on the failure plane. Equation 3.11 shows the relationship between the shear strength, τ, the cohesion, c, the normal stress, σ_n, and the internal friction angle, φ.

τ = c + σ_n tan φ    (3.11)
Similar to what is observed in the laboratory, when confinement stresses are applied to
bonded discrete element models, strength of the material increases. Strength increase in
the presence of a confining stress is an emergent property of the DEM that is not directly
calibrated. By performing BCS tests on the specimens described above, the internal friction
angle of the DEM material can be calculated and compared to that of real rock.
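One common way to extract φ from a set of (σ_3, σ_1) strength pairs is to fit the line σ_1 = b + q·σ_3, for which q = (1 + sin φ)/(1 − sin φ). The sketch below uses hypothetical data, not the thesis results:

```python
import math

# Least-squares fit of peak axial stress against confining stress,
# followed by inversion of q = (1 + sin(phi)) / (1 - sin(phi)).

def friction_angle(sigma_3, sigma_1):
    """Internal friction angle (degrees) from confined strength pairs."""
    n = len(sigma_3)
    mean3, mean1 = sum(sigma_3) / n, sum(sigma_1) / n
    q = sum((s3 - mean3) * (s1 - mean1)
            for s3, s1 in zip(sigma_3, sigma_1)) \
        / sum((s3 - mean3) ** 2 for s3 in sigma_3)
    return math.degrees(math.asin((q - 1.0) / (q + 1.0)))

# Hypothetical data lying on sigma_1 = 10 + 3*sigma_3 (MPa); q = 3
# corresponds to phi = 30 degrees.
phi = friction_angle([1.0, 2.0, 4.0, 6.0], [13.0, 16.0, 22.0, 28.0])
```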
Figure 3.5: Coulomb shear failure plane and stresses
A constant confining pressure is applied to the specimen using the so-called spanning
chain algorithm included in the Fishtank. The spanning chain algorithm detects particles
that lie along a boundary and applies forces to the particles to simulate a constant boundary
pressure. The advantage of using this technique is that it allows for displacements on the
boundary, preventing stress concentrations that would be caused by a rigid boundary such
as a wall. Custom functions were introduced in order to adapt the algorithm to the UCS
specimen. These custom functions are shown in Listing A.1. The test template used for
the UCS tests above was modified to include the spanning chain and pressure functionality.
This file is shown in Listing A.2. The test parameters listed in Table 3.3 are also used in this
set of tests. Figure 3.6 shows a screen shot of the specimen. The yellow marks are the disk
shaped elements, and the red circles attached by black lines make up the spanning chain
used to apply the confining stress.
BCS tests were performed using 1, 2, 4 and 6 MPa confining stress on both the BPM
and DSM DEMs. The tests were terminated post-failure when the axial stress decreased
to ninety percent of the strength. Figure 3.7 and Figure 3.8 show stress versus axial strain
curves for the BPM and DSM. The dotted lines are the horizontal stresses and the solid lines
are the axial stresses. The dotted lines show that the spanning chain confinement method
provided more consistent confinement stress during the BPM tests than in the DSM tests.
Horizontal stress increased approximately 1 MPa throughout each of the DSM tests and only approximately 0.1 MPa for the BPM tests. Peak axial stress, σ_1, and the prescribed confinement stress, σ_3, were used to calculate the shear and normal stresses on the failure plane in order to determine the friction angle.

Figure 3.7: BPM confined compression test stress versus strain curves
Figure 3.9 shows a shear stress versus normal stress plot using each test result for both
discrete element models. The plot also shows the internal friction angle for each model. The
friction angle for the DSM is significantly higher than that of the BPM. The friction angle of real
coal is approximately 30 degrees. So, the BPM will simulate the coal as being weaker under
confined conditions than reality while the DSM will simulate the coal as being stronger under
confined conditions. Considering the increase in horizontal stress during the DSM tests, a
slightly lower friction angle can be attributed to the DSM material. By considering the
horizontal stress at failure rather than the prescribed confining stress as the true confining
stress, the friction angle of the DSM material decreases slightly to 42 degrees. The effect
of confining stress on strength will determine the strength of the rock during compressive
Figure 3.8: DSM confined compression test stress versus strain curves
failure but it will not affect the failure mode of the rock. So, while a study on the mechanisms underlying unstable failure will not be directly affected by the friction angle, the BCS test
provides supplementary information on the accuracy of DEM simulation of rock behavior.
3.5 Elastic Platen Compression Test (EPC)
By varying the stiffness of the loading system, the DEM specimen can be failed in a stable
or unstable mode of failure. A mechanical coupling method is used to fail the DEM models
under elastic platens simulated using the finite-difference, continuum code FLAC2D. Here,
first the coupling method will be explained and then results from the compression tests will
be presented and discussed.
The mechanical coupling of FLAC2D and PFC2D relies on the exchange of gridpoint
velocities and particle forces at the coupling boundary via a socket connection like that used
in TCP/IP transmission over the internet. The coupling boundary consists of a layer of
discs in the PFC model which overlaps the FLAC grid and the gridpoints associated with
the FLAC grid on that boundary. The particles on the boundary are called control particles
and the gridpoints that are on the FLAC coupling boundary are called control gridpoints.
Figure 3.9: Shear stress versus normal stress results from the confined compression tests
Figure 3.10 shows a diagram of the coupled model calculation cycle. The red arrows indicate
communication between FLAC2D and PFC2D. PFC2D uses Fish functions provided in the
Fishtank to update boundary conditions before every cycle. With these functions, PFC2D
uses control gridpoint velocities to calculate and then apply velocities to control particles.
Following a calculation cycle in PFC2D, updated forces on the control particles are used to
calculate and then send control gridpoint forces to FLAC2D. Following a calculation cycle in FLAC2D, updated control gridpoint velocities are sent back to PFC2D and the coupled
model cycle repeats.
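The exchange just described can be sketched schematically; the function names below are illustrative stand-ins for the Fishtank's socket-based exchange, with the gridpoint-to-particle velocity transfer done by linear interpolation along each segment:

```python
# Schematic of one pass of the FLAC2D-PFC2D coupling cycle: gridpoint
# velocities flow to control particles, particle forces flow back.

def interpolate_velocity(v0, v1, xi):
    """Control particle velocity along a segment, 0 <= xi <= 1."""
    return v0 + xi * (v1 - v0)

def coupled_cycle(gridpoint_velocities, segments, pfc_cycle, flac_cycle):
    """One exchange: velocities in, forces out, then both codes step once.
    Each segment is (gridpoint index 0, gridpoint index 1, xi)."""
    # 1. FLAC -> PFC: set control particle velocities by interpolation
    particle_v = [interpolate_velocity(gridpoint_velocities[i],
                                       gridpoint_velocities[j], xi)
                  for (i, j, xi) in segments]
    # 2. PFC cycles once and returns updated control particle forces
    particle_forces = pfc_cycle(particle_v)
    # 3. PFC -> FLAC: forces are distributed to gridpoints; FLAC cycles
    return flac_cycle(particle_forces)
```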
Each control particle is associated with a FLAC2D segment, which is defined by two control gridpoints. Figure 3.11 shows a diagram of two control gridpoints, 0 and 1, and one control particle, P. In order to apply the velocity boundary condition to PFC2D, control particle velocity is determined by linear interpolation of control gridpoint velocities. The relationship between the control particle velocity, v, and the control gridpoint velocities, v_0 and v_1, is shown in Equation 3.12,

v(ξ) = v_0 + ξ(v_1 − v_0)    (3.12)
where,
file, [Listing A.11]. PFC2D acts as the master program and controls coupling by issuing
commands to cycle the two codes one calculation step at a time. This is done in the main
driver file, along with loading default functions for control of the coupling boundary and
custom functions used to initialize the system, [Listing A.12 and Listing A.13]. Functions
used for measuring model state variables and recording data are also called from the driver
file, [Listing A.14].
Figure 3.14 shows the test geometry and boundary conditions for the coupled simulation.
The upper and lower platens are moved inward at the velocity used for calibration. It was
expected that the specimen would fail in a stable manner when the loading system modulus
was higher than the specimen's post-peak modulus and in an unstable manner when the
loading system modulus was lower than the specimen's post-peak modulus. A series of tests
was run for each DEM in which only the platen elastic modulus was varied. A range of
moduli was chosen for each model so as to clearly depict the transition from stable to
unstable failure as platen modulus decreases.
Figure 3.14: Mechanically coupled compression geometry and boundary conditions
Figure 3.15 and Figure 3.16 show stress-strain curves for the EPC simulations using the
BPM and DSM respectively. BPM simulations were conducted using ten different platen
moduli: 0.5, 1, 2, 3, 5, 10, 15, 20, 35, and 50 GPa. DSM simulations were conducted using
seven different moduli: 1, 1.5, 2.5, 5, 10, 20, and 35 GPa. In both plots, the unconfined
compressive strength stress-strain curve is included to provide a reference to the calibrated
characteristic material behavior. Both plots show that for tests with platens that are stiff
compared to the material post-peak modulus, the post-peak behavior follows the slope of the
characteristic material curve from the UCS test. Tests with soft platens show a deviation in
specimen post-peak behavior from the characteristic material post-peak behavior.
Figure 3.15: BPM coupled simulation stress-strain curves
A determination of failure stability is possible by comparing the assigned loading system
modulus and the specimen modulus determined from observed post-peak behavior. The
specimen post-peak modulus is determined by measuring the post-peak slope of the specimen
stress-strain curve. During stable failure, the specimen fails according to its characteristic
material properties. So, the post-peak modulus equals the calibrated characteristic post-peak
modulus, E_pp. During unstable failure, all load bearing capacity of the specimen is
lost. Without the resistance of the specimen, the platens rebound according to the elastic
Figure 3.16: DSM coupled simulation stress-strain curves
properties. So, it appears that the specimen post-peak modulus changes to equal the modulus
of the loading system.
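The comparison described above can be sketched as a simple classifier. The function name and the 15% tolerance are illustrative assumptions, not values from the thesis:

```python
def classify_failure(E_pp_measured, E_pp_char, E_platen, tol=0.15):
    """Classify an EPC test by asking which modulus the measured
    post-peak slope follows: the calibrated characteristic post-peak
    modulus (stable failure) or the loading-system modulus, against
    which the platens rebound elastically (unstable failure).

    tol is a relative tolerance; 0.15 is an arbitrary illustration."""
    if abs(E_pp_measured - E_pp_char) <= tol * abs(E_pp_char):
        return "stable"
    if abs(E_pp_measured - E_platen) <= tol * abs(E_platen):
        return "unstable"
    # Matches neither modulus: the quasi-stable behavior seen in
    # some BPM tests.
    return "quasi-stable"
```

With a characteristic post-peak modulus of 10 GPa and 50 GPa platens, a measured post-peak slope near 10 GPa would be read as stable and one near 50 GPa as unstable.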
Table 3.5 and Table 3.6 show the specimen post-peak modulus measured from each test
and the strength of each specimen for the BPM and DSM respectively. The characteristic
material behavior is also included, labeled as "UCS Grip". The shaded area indicates tests
that resulted in unstable failures. Figure 3.17 shows a scatter plot of E_pp and E_plat from
each EPC test, the measured specimen post-peak modulus and the assigned loading system
modulus respectively. The vertical lines are E_pp from the "UCS Grip" tests, which are the
calibrated specimen post-peak moduli for the BPM and DSM.
Table 3.5 and Table 3.6 show numerically that when E_plat becomes larger than the
characteristic E_pp, the EPC E_pp begins to deviate from E_plat. This is shown graphically
in Figure 3.17 as an asymptotic trend to the right of the vertical lines marking the
characteristic E_pp. Ideally, each model should transition immediately to stable failure as
loading system modulus increases beyond this value. The trend in the DSM post-peak modulus
values indicates that a fairly sharp transition from unstable to stable failure occurs. This
is reflected in the consistency of values in Table 3.6 that are not shaded. On the other
hand, the post-peak behavior
Figure 3.17: Loading system and specimen post-peak moduli in EPC tests
of stable BPM failure has a range of values. This is shown in Figure 3.17 as a slow transition
toward stability in terms of EPC E_pp and inconsistent values for high moduli tests. The
stability transition behavior for the BPM indicates that other factors are in effect. This
so-called quasi-stable behavior in the BPM tests could be caused by micro-mechanical behavior
that is not visible in this type of analysis. For example, variation in failure progression
could lead to a change in the post-peak characteristic during the process of the failure,
leading to an E_pp different from the characteristic E_pp or E_plat. This effect is not
noticeable in the DSM results. Additionally, the DSM has consistent E_pp for each stable test
while the BPM E_pp for the high modulus tests (more likely to be stable than the quasi-stable
moduli tests) varies slightly and remains below the characteristic E_pp.
In the EPC tests, the effect of loading system stiffness on the mode of failure of two
DEMs was investigated by changing the modulus of elasticity of the loading platens between
different tests. The stability transition behavior of the two DEM models is important to
the current research insofar as it determines how reliably unstable failure can be detected. According
to the stability transition behavior presented above, in a situation where the loading system
stiffness is similar to the characteristic material post-peak stiffness, the DSM more clearly
distinguishes between stable and unstable failure. The BPM behaves in a quasi-stable
manner when the loading system stiffness is similar to the characteristic material post-peak
stiffness. The quasi-stable behavior is more difficult to discern as stable or unstable because
the post-peak behavior is equal to neither the characteristic post-peak behavior nor the
loading system stiffness.
A difference arises between the DSM and BPM model behavior during the EPC test
when examining the strength. Figure 3.16 shows a change in specimen strength for the
three softest DSM EPC tests while the strength in the BPM EPC tests remains fairly
consistent. Table 3.5 and Table 3.6 show the strength of each specimen for each test. These
results indicate that the strength of the DSM material decreases when subjected to unstable
loading conditions. The BPM model does not experience this change. A possible reason
for this reduction in strength could be the sudden onset of localized, unstable crack growth.
Contact bond failure associated with material yielding could be prematurely accelerated
if a large amount of stored strain energy is available. In the case of soft, elastic loading
systems, this is possible. Additional work would be needed to confirm this hypothesis. Local
measurements of material stiffness could provide evidence for this micromechanical process.
Regardless, when using this DEM, a reduction in strength of the DSM material should be
expected when loading system stiffness is less than material post-peak stiffness.
3.6 Loading Rate Compression Test (LRC)
In the final test of DSM and BPM behavior in compression, the UCS test is revisited
with different loading rates, referred to as LRC. Four loading rates are chosen, including
the loading rate used for the previous tests. As in the UCS test, grip particles are moved
inward to load the specimen. Vertical stress and strain measurements are taken using the
grip particles and the stress-strain curves for each test are compared.
Figure 3.18 shows the stress-strain curves for four LRC tests of the BPM. Figure 3.19
shows the stress-strain curves for the same tests using the DSM. Table 3.7 and Table 3.8 show
the loading velocity (v), the post-peak modulus (E_pp) and the strength (σ_c) for the BPM
and DSM tests. Due to the shape of the DSM curves in the post-peak region, some liberty
had to be taken to choose a representative section of each curve in Figure 3.19 to describe
the post-peak behavior as linear. The straight portions of the curves ranging from 50% to
approximately 80% of peak strength were used to calculate E_pp.
Table 3.7: LRC BPM loading velocity, post-peak modulus and strength values
Table 3.8: LRC DSM loading velocity, post-peak modulus and strength values
Figure 3.18 and Figure 3.19 show that there is no noticeable effect of loading rate on
the elastic region of the stress-strain curve. As loading rate increases, the strength of both
DEMs increases. The effect is more prominent in the DSM model than in the BPM model. In
the post-peak region, a change in material behavior emerges as the loading rate is changed.
Figure 3.18 and Table 3.7 show that the E_pp of the BPM increases as the loading rate is
decreased. The change in post-peak stiffness is so drastic that the material entirely loses the
calibrated post-peak softening characteristic. Figure 3.19 shows that the change in loading
rate has an effect on DSM post-peak stiffness. As loading rate decreases the stress-strain
curve reveals abrupt changes in vertical stress as the material fails. Despite changes to the
BCS tests showed a stark difference in the effect of confinement on strength between the
two models. The DSM has an internal friction angle that is higher than the approximate
internal friction angle of coal, while the BPM's internal friction angle is too low. The low
internal friction angle of the BPM is a known shortcoming of the model [68]. The
higher-than-desired internal friction angle of the DSM is an unexpected result.
EPC test results showed that failure stability in the DEMs could be determined by
comparing loading system and material post-peak stiffnesses. Using this comparison as an
indicator of unstable failure, when platen modulus increased beyond the material post-peak
stiffness, the DSM transitioned more quickly than the BPM from unstable to stable failure.
LRC tests revealed changes in post-peak behavior in both models. In the BPM, as loading
rate decreased, the post-peak stiffness increased drastically. In the DSM, the slowest
loading rate resulted in abrupt changes in stress during failure, but overall the material
retained its post-peak softening characteristic.
The results from the EPC and the LRC tests provide important model behavior charac-
teristics that suggest that the DSM is more appropriate for the studies of unstable failure
in underground coal mining. The DSM's sudden transition from unstable to stable failure
seen in Figure 3.17, in contrast with the BPM's gradual one, indicates that the expression
of unstable failure is more ambiguous in models using the BPM. It was important in this
chapter to show that the chosen DEMs
could satisfactorily simulate unstable failure, but in work in later chapters it will be shown
that detecting instances of unstable failure in larger models is crucial to studying the effects
of local mine conditions on instability.
In a later chapter, an underground mine model will be introduced that uses in situ stresses
to load the PFC2D coal. Rather than applying a consistent loading velocity, gradual mining
steps will redistribute stresses in the model resulting in increased load on material near the
excavation. The velocity by which the model applies the load is not controlled by the user
and will vary from the velocity used to calibrate the PFC2D material. Figure 3.18 showed
drastic changes in the post-peak softening characteristic of the BPM material as loading
CHAPTER 4
INDICATORS OF UNSTABLE COMPRESSIVE FAILURE IN DEM COAL STRENGTH
TESTS
This chapter is concerned with characterizing the expression of unstable compressive fail-
ure in the displacement softening model (DSM). Various measurements of DEM behaviors
can be used to indicate whether failure is unstable or stable and give a measure of the degree
of failure instability. These calculated values are called stability indicators. Nine stability
indicators are explained and employed here. They are damping work, maximum instanta-
neous kinetic energy, cumulative kinetic energy, maximum instantaneous mean unbalanced
force, cumulative mean unbalanced force, maximum instantaneous maximum unbalanced
force, cumulative maximum unbalanced force, contact softening, and the number of broken
contacts.
The nine indicators are first applied to a simulation of a laboratory test, the elastic
platen strength (EPC) test from the previous chapter. Since both stable and unstable failure
occurred within the series of EPC tests, the behavior of each indicator during unstable and
stable failure is observed. The indicators are then compared to one another to determine
suitability for tracking unstable failure.
It is useful to apply the stability indicators to DSM models of various sizes so that
in-situ geometries can be investigated. Therefore, the nine stability indicators are applied
to a series of slender pillar compressive strength (SPCS) tests. During these tests, pillars
of various sizes are failed by loading systems with different stiffness to encourage stable
and unstable failures. Failure stability is determined using a comparison between loading
system stiffness and post-peak behavior similar to that used on the EPC tests. The effect of
model size on stability indicator performance is observed and the indicators are once again
compared for suitability in tracking unstable failure.
Due to the change of model size it is also beneficial to observe the spatial distribution of
damage and damage intensity in the pillar. Additional analysis is performed on the SPCS
tests to observe the spatial distribution of contact softening and the damping work due to
failure. A grid based measurement technique used to track the two indicators is explained
and then the correlations between model damage and failure stability are discussed.
4.1 Description and Calculation of Stability Indicators in DEM Compressive
Failure
Each of the stability indicators is calculated using PFC2D particle and contact state
information. This section provides details on how each indicator is calculated and references
custom FISH functions that facilitate the calculations.
4.1.1 Damping Work
The PFC2D model uses a damping mechanism to dissipate kinetic energy, so that a
steady state solution may be arrived at within a reasonable number of calculation steps.
The damping mechanism applies force to particles undergoing acceleration in the direction
opposite that of the particle’s motion. Equation 4.1 shows the damping force applied to each
particle:
F_d = α · F_unbal · (-sign(v)) · v̂    (4.1)

where F_d is the damping force, α is a dimensionless coefficient, F_unbal is the unbalanced
force on the particle, and v is the particle velocity. The coefficient α is used to define the
level of damping. A value of 0.7 is used in all of the simulations in this thesis. This is the
value recommended by the authors of PFC2D for quasi-static conditions.
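The local damping rule of Equation 4.1 can be sketched per degree of freedom in Python. This is an illustrative simplification; PFC2D applies the damping internally and the function name is hypothetical:

```python
def damping_force(f_unbal, v, alpha=0.7):
    """Local (non-viscous) damping: for each degree of freedom,
    apply a force proportional to the unbalanced force that always
    opposes the current velocity. alpha = 0.7 is the quasi-static
    value used throughout the thesis."""
    def sign(x):
        return (x > 0) - (x < 0)
    return [-alpha * abs(f) * s for f, s in
            zip(f_unbal, (sign(vi) for vi in v))]
```

Note that the damping force flips direction with the velocity, so it removes energy regardless of which way a particle is moving.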
During failure, the damping mechanism applies larger forces to the model in order to
stabilize the failure process. Over a calculation timestep, dt, the damping forces perform a
quantifiable amount of work that can be summed over the entire model. Damping work is
summed over each degree of freedom, i, over all particles, N, from timestep t_i to t_f. The
work is summed over the interval of failure. Equation 4.2 is the work done for translational
motion, where dx_i is the incremental translational displacement, and Equation 4.3 is the
work done for rotational motion, where M_d is the damping moment and dr is the incremental
rotation. Translational damping work and rotational damping work are summed to obtain
the total damping work. The functions responsible for calculating the total damping work
are given in Listing C.15. The function called param_loop_bp loops through all of the
particles in the assembly and pfc_wd calculates the damping work on each particle.

W_dtrans = Σ_{t=t_i}^{t_f} Σ_{n_p=1}^{N} Σ_{i=1}^{2} |F_d,i · dx_i|_{n_p,t}    (4.2)

W_drot = Σ_{t=t_i}^{t_f} Σ_{n_p=1}^{N} |M_d · dr|_{n_p,t}    (4.3)
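One timestep's contribution to the sums in Equations 4.2 and 4.3 can be sketched as follows. The data layout (per-particle force, displacement, moment, and rotation lists) is a hypothetical stand-in for the PFC2D state accessed by the FISH code in Listing C.15:

```python
def damping_work_step(damp_forces, displacements, damp_moments, rotations):
    """One timestep's damping work: |F_d,i * dx_i| summed over both
    translational DOFs of every particle (Equation 4.2) plus
    |M_d * dr| summed over every particle's rotation (Equation 4.3)."""
    w_trans = sum(abs(f * dx)
                  for fs, dxs in zip(damp_forces, displacements)
                  for f, dx in zip(fs, dxs))
    w_rot = sum(abs(m * dr) for m, dr in zip(damp_moments, rotations))
    return w_trans + w_rot
```

Accumulating this value from t_i to t_f gives the total damping work over the failure interval.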
4.1.2 Maximum Instantaneous Kinetic Energy
During the simulation, the kinetic energy of the model is calculated by summing the
rotational and translational kinetic energies of all the particles for a single timestep. Equa-
tion 4.4 and Equation 4.5 are the equations for rotational and translational kinetic energy
respectively, and the total kinetic energy is given by Equation 4.6:
KE_rot = (1/2) I ω²    (4.4)

KE_trans = Σ_{i=1}^{2} (1/2) m v_i²    (4.5)

KE = KE_rot + KE_trans    (4.6)

where I = (1/2) m r², and ω is the rotational velocity. KE is calculated every step as an
instantaneous value. The failure stability and intensity of failure should be reflected in the
velocity of particles. Therefore, the maximum value of instantaneous kinetic energy during
failure is used as a stability indicator because it provides information on the velocity of
particles.
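Equations 4.4 through 4.6 can be sketched for an assembly of disk-shaped particles. The tuple layout is a hypothetical illustration of the particle state PFC2D holds internally:

```python
def total_kinetic_energy(particles):
    """Total model kinetic energy for one timestep: rotational plus
    translational KE summed over all particles (Equations 4.4-4.6).
    Each particle is (m, r, vx, vy, omega); a disk has I = (1/2) m r^2."""
    ke = 0.0
    for m, r, vx, vy, omega in particles:
        inertia = 0.5 * m * r * r
        ke += 0.5 * inertia * omega ** 2      # KE_rot, Equation 4.4
        ke += 0.5 * m * (vx ** 2 + vy ** 2)   # KE_trans, Equation 4.5
    return ke
```

The maximum instantaneous kinetic energy indicator is then simply the running maximum of this value over the failure interval.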
4.1.3 Cumulative Kinetic Energy
The cumulative kinetic energy, KE_c, can be determined from the record of instantaneous
kinetic energy by summing the instantaneous kinetic energy over the time interval of failure,
from timestep t_i to t_f, as in the case of damping work, Equation 4.7. The kinetic energy
during the entire duration of failure reflects the stability and intensity of the failure in its
entirety in terms of particle velocity.

KE_c = Σ_{t=t_i}^{t_f} KE_t    (4.7)
4.1.4 Maximum Instantaneous Mean Unbalanced Force
The instantaneous mean unbalanced force, F_µ, is the average of the absolute values of
the out-of-balance force components for each particle and is calculated using Equation 4.8.
The maximum instantaneous mean unbalanced force is the largest value of mean unbalanced
force of all the timesteps during the failure interval. The mean unbalanced force provides a
measure of the level of instability because unbalanced forces are lowest when the model is
near static equilibrium. By taking the mean of all unbalanced forces, the effect of outlying
values is minimized.

F_µ = F / N    (4.8)

F = Σ_{n_p=1}^{N} ( Σ_{i=1}^{3} |F_unbal,i| )_{n_p}
4.1.5 Cumulative Mean Unbalanced Force
The cumulative mean unbalanced force is calculated in the same way as the cumulative
kinetic energy. Equation 4.9 is used to calculate the cumulative mean unbalanced force.

F_µc = Σ_{t=t_i}^{t_f} F_µ,t    (4.9)
4.1.6 Maximum Instantaneous Maximum Unbalanced Force
The maximum unbalanced force is the unbalanced force of greatest magnitude during a
timestep. This value is determined instantaneously during each step by a PFC2D intrinsic
FISH function. The maximum instantaneous maximum unbalanced force gives a measure
of the intensity of failure using the element furthest from static equilibrium.

F_max = max( |F_unbal,i|_{n_p} ) : i = 1, 2, 3 and n_p = 1, ..., N    (4.10)
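The instantaneous mean (Equation 4.8) and maximum (Equation 4.10) unbalanced forces can be sketched together. The per-particle component layout is a hypothetical stand-in; in PFC2D the maximum is available through an intrinsic FISH function:

```python
def mean_and_max_unbalanced(unbal):
    """Instantaneous mean and maximum unbalanced force for one
    timestep. unbal holds the three unbalanced components
    (Fx, Fy, M) for each particle; the mean divides the summed
    absolute components by the particle count, per Equation 4.8."""
    comps = [abs(c) for row in unbal for c in row]
    return sum(comps) / len(unbal), max(comps)
```

Taking the running maximum of each returned value over the failure interval yields the two corresponding stability indicators.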
4.1.7 Cumulative Maximum Unbalanced Force
The cumulative maximum unbalanced force is determined similarly to the cumulative
mean unbalanced force. This indicator provides a measure of failure intensity by finding
the degree of freedom with the largest amount of applied force each step. The maximum
unbalanced force should be larger the more unstable the failure is.
F_maxc = Σ_{t=t_i}^{t_f} F_max,t    (4.11)
4.1.8 Contact Softening
The DSM is a softening contact model, as explained in detail in the previous chapter.
The contact begins to soften once the initial strength of the contact is reached (Figure 3.3).
The contact bond is inactive once the softening limit is reached. Before the plastic displace-
ment limit is reached, the amount of softening, U_p, can be observed using an intrinsic FISH
command to access a contact state variable called the contact softening ratio, U_rat. The
contact softening ratio is the amount of contact softening divided by the plastic displacement
limit, Equation 4.12.
U_rat = U_p / U_pmax    (4.12)
The value of U_rat becomes unity at maximum softening. By summing U_rat over all of the
contacts in the model, a measure is made of the contact softening due to compressive failure.
The sum of softening ratios is calculated incrementally. The functions responsible for
calculating the sum of softening ratios are located in Listing C.15, where param_loop_cp
loops through all of the contacts in the PFC2D assembly and pfc_sof retrieves contact state
information. The amount of contact softening, the indicator used in the following analysis, is
determined by multiplying the softening ratio sum by the plastic displacement limit. This
yields a value for contact softening in units of meters. Equation 4.13 is the amount of contact
softening, where C is the total number of contacts.

U = Σ_{n_c=1}^{C} U_rat · U_pmax    (4.13)
4.1.9 Number of Broken Contacts
In a DSM contact, the contact is deemed broken once the plastic displacement limit,
U_pmax, is achieved. One way of assessing damage in the DEM assembly is by tracking the
number of contacts that have broken. The number of broken contacts is determined using
the variable sof_numbroke shown in Listing C.15.
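The two contact-based indicators can be sketched together in Python. The function is a hypothetical helper, not the FISH implementation of Listing C.15:

```python
def softening_summary(softening_ratios, u_pmax):
    """Total contact softening in meters (Equation 4.13) and the
    number of broken contacts, i.e. contacts whose softening ratio
    has reached unity (plastic displacement limit achieved)."""
    total = sum(softening_ratios) * u_pmax
    broken = sum(1 for u_rat in softening_ratios if u_rat >= 1.0)
    return total, broken
```

For example, with a 2 mm plastic displacement limit, three contacts with ratios 0.5, 1.0, and 0.25 contribute 3.5 mm of total softening and one broken contact.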
4.2 Stability Indicator Results in EPC Tests
Each of the indicators explained above is used here to describe the failure in elastic
platen strength (EPC) tests presented in the previous chapter. The trends of indicators are
shown by means of scatter plots of the indicator values versus time step. Values of indicators
are determined from line plots of the indicator, so each point on the scatter plots represents
the indicator magnitude for an individual test.
Figure 4.1 shows the instantaneous and cumulative kinetic energy during the EPC test
with 5 GPa platens. During the loading phase of the test, the cumulative kinetic energy
Figure 4.2: Accumulated damping work during the failure interval in EPC tests
small increase with 2.5 GPa platens, the damping work begins a sharp increase for the two
softest platens.
Figure 4.3 shows the maximum instantaneous kinetic energy during failure on a semi-log
plot. While the trend seen in damping work of increasing indicator with decreasing stability
of failure is present, the kinetic energy shows an even more drastic increase for the most
unstable cases. The three highest platen moduli exhibit consistent maximum instantaneous
kinetic energy, suggesting that this indicator may be particularly useful in identifying stable
failure.
The maximum instantaneous mean unbalanced force is shown in Figure 4.4 on a semi-log
plot. The results for this indicator are similar to the maximum instantaneous kinetic energy
in that the most unstable failure has a significantly higher value than the next softest test,
and the values for the most stable tests are very consistent. The maximum instantaneous
maximum unbalanced force is shown in Figure 4.5. The trend in maximum instantaneous
maximum unbalanced force is similar to kinetic energy and mean unbalanced force although
there exists some irregularity in the value for the stable failures.
Figure 4.5: Maximum instantaneous maximum unbalanced force in EPC tests
Like damping work, cumulative values for kinetic energy, mean unbalanced force, and
maximum unbalanced force were normalized with respect to the stress drop during failure.
Figure 4.6 shows the cumulative kinetic energy. The cumulative kinetic energy is consistent
for stable failures and increases as failure stability decreases.
In Figure 4.7, the cumulative mean unbalanced force generally shows the expected trends
for stable versus unstable failure. However, outlying results for the unstable failures using
2.5 and 5 GPa platens indicate that variability in mean unbalanced force in unstable failures
can occur, and caution should be exercised when using this indicator.
Figure 4.8 shows the cumulative maximum unbalanced force. The cumulative maximum
unbalanced force is fairly consistent for all tests with the exception of the most unstable
failure. Therefore, it does not reliably distinguish between stable and unstable failures.
Contact softening during the failure interval for each EPC test is shown in Figure 4.9. The
amount of softening is normalized with respect to the stress drop during the failure interval
for each test respectively. The amount of contact softening remains consistent for the most
stable failures. For unstable failures the amount of contact softening exhibits no particular
trend, as the two most unstable failures result in the most extreme cases of softening.
Figure 4.8: Cumulative maximum unbalanced force in EPC tests
The number of broken contacts for each test is shown in Figure 4.10. The number
of broken contacts is also normalized against the stress drop in each test. The number of
broken contacts also suggests that stable failures have lower, consistent values while, as
the failure becomes more unstable, the value increases. However, the number of broken
contacts increases slightly as the platen elastic modulus increases.
4.2.1 EPC Indicator Results Discussion
With the exception of contact softening, each of the indicators utilized for the analysis
of failure stability in the EPC tests exhibits similar trends. Each of the indicators shows
high values for unstable failures and decreases as the stability of failure increases.
However, the number of broken contacts exhibits an increase with increasing platen elastic
modulus. The damping work, maximum instantaneous mean unbalanced force, maximum
instantaneous kinetic energy and cumulative kinetic energy appear to be suitable indicators
for tracking unstable failures. For each of these indicators, consistent values are measured
for stable failures, and as failure stability decreased, the indicator likewise increased to
provide a qualitative measure of failure stability.
Trends in cumulative values for kinetic energy, mean unbalanced force and maximum
unbalanced force are helpful in describing failures in that the cumulative value contains
information for the duration of the failure rather than a single calculation step. Each of
these cumulative values performed with various levels of success in distinguishing between
stable and unstable failures. The cumulative kinetic energy performs well in distinguishing
between stable and unstable failure, while the cumulative maximum unbalanced force loses
the expression of instability for all unstable failures except for the 1 GPa test. The mean
unbalanced force should be used with caution as there is some variability in the values of
unstable failures, although additional work on methods of normalization may reveal the
expected trend.
Contact softening does not increase with decreasing failure stability but the variability
in the value increases. Additional analysis could possibly reveal a trend similar to the other
indicators, but this is not pursued further in this study. Rather, since contact softening is a
result of failure in contact bonds, it can be used to identify the locations and extent of
damage in the model. This application is demonstrated in the following pillar tests.
4.3 Slender Pillar Compressive Strength (SPCS) Test Description
In underground mining, both the material properties and the dimensions of the mine
affect the loading system stiffness and consequently the failure mode. In this section, a
series of slender coal pillars are failed under an elastic loading system. The stiffness of the
loading system is varied by changing the modulus of elasticity of the loading system and
also the size of the pillar. A total of nine tests are conducted, failing three pillar sizes under
three separate loading systems of various elastic moduli. The pillar height is kept constant
and the width is changed to produce three different sized pillars. The pillars are described
by the ratio of pillar width to pillar height. Pillars are constructed with width-to-height
ratios of one, two, and three. Each pillar size is failed with a 5 GPa, 20 GPa, and 35 GPa loading
system. Failure stability is determined by comparing the loading system stiffness to the
pillar post-peak stiffness. Then, the performance of the nine stability indicators is assessed
for stable and unstable pillar failures.
4.3.1 SPCS Geometry and Boundary Conditions
Figure 4.11 shows a schematic depicting the geometry and boundary conditions of the
slender pillar tests. The FLAC2D grid is comprised of a fine inner grid and a coarse outer
grid in order to capture the forces and stresses at the resolution of the PFC2D model and to
save memory. The grid input file for the width to height ratio one pillar is shown in
Listing C.16. The grid is expanded for the larger pillar tests in the horizontal direction by
adding a proportional amount of elements. The same FISH functions are used to facilitate
coupling as with the EPC tests; only the coupling boundary segment list, as seen in
cpf_EPC.fis, must be changed.
The model has a symmetric boundary condition along the vertical edges, simulating an
infinite series of identical pillars. The width of the excavation is kept constant for each pillar
size, and the width of the model is changed in accordance only with the pillar width. The model
is first equilibrated with the entire FLAC2D nulled region filled with PFC2D elements. Then
the entries are ‘excavated’ by deleting the elements within three meters of the left and right
boundaries of the model. After a subsequent equilibration stage, the coal pillar, modeled
in PFC2D, is loaded under an increasing compressive load by a constant velocity boundary
condition applied to the uppermost and lowermost boundaries of the model.
4.3.2 Local Mine Stiffness Calculation
Each pillar failure is determined to be stable or unstable based on a comparison of
the local mine stiffness during failure and the post-peak pillar stiffness. Following from
the laboratory tests above, if the pillar post-peak stiffness is equal to the unloading local
mine stiffness then the failure is considered unstable. The stiffness of the loading system
is measured using only the tributary area above and below the pillar width. From these
calculations, a local mine stiffness measurement is made for each test by assessing average
pillar vertical reaction force on the surrounding mine and average pillar-mine boundary
Figure 4.11: Slender pillar test geometry and boundary conditions
vertical displacement. The stiffness of the roof and floor can be determined individually
using the equation for force on a spring, Equation 4.14:

k = ∆F_P / ∆D    (4.14)

∆F_P is the change in force exerted on the roof or floor by the pillar and ∆D is the change
in displacement in the roof and floor respectively. ∆D is defined as the compression of the
tributary area averaged along the width of the pillar. The pillar reaction force is calculated
using average pillar stress and the cross-sectional area of the pillar (P_W × 1 m).
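Equation 4.14 amounts to a secant stiffness taken over the failure interval. A minimal sketch, with hypothetical force and displacement histories sampled at the onset of softening and at residual stress:

```python
def local_mine_stiffness(forces, disps, i_start, i_end):
    """Local mine stiffness per Equation 4.14: change in pillar
    reaction force on the roof or floor divided by the change in
    average tributary-area displacement, between the onset of
    softening (i_start) and the occurrence of residual stress
    (i_end)."""
    d_force = forces[i_end] - forces[i_start]
    d_disp = disps[i_end] - disps[i_start]
    return d_force / d_disp
```

For example, a reaction force dropping from 1.0 MN to 0.4 MN while the average boundary displacement rebounds by 6 mm gives a local stiffness of 100 MN/m.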
Figure 4.12 is a conceptual illustration of typical pillar behavior trends versus calculation
step. The average pillar stress and the average loading system displacement exhibit similar
trends, therefore they are both illustrated as the narrow width line. The bold line represents
average pillar strain. The step interval, dT, denotes the time of failure and is defined as
beginning at the onset of pillar softening through the occurrence of residual stress. The local
stiffness is calculated using ∆F_P and ∆D during the interval dT. Then, by considering the
for observing damage is necessary. Also, observing damping work on a localized basis could
indicate whether the damage is due to a stable or unstable failure and give a measure of
the intensity of failure. A grid based measurement technique is implemented to observe the
behavior of the contact softening and damping work on a local level.
To track contact softening and damping work spatially in the model, a fictitious grid
that is comprised of square pixels is superimposed onto the PFC2D assembly. The square
pixels are 0.1 m on a side and grid resolution is kept constant in each model as model size
changes. Each particle and contact is permanently assigned to a pixel at the beginning of
the simulation, thereby ignoring effects of pixel-to-pixel movement. Irregular values at the
model’s boundaries, due to empty space in the pixels, have been found to be irrelevant to
model behavior and can also be ignored.
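The one-time pixel assignment can be sketched as follows. The helper is hypothetical; the thesis implementation is the FISH code in Listing C.15:

```python
def pixel_index(x, y, x_min, y_min, pixel_size=0.1):
    """Map a particle or contact position to its (row, col) pixel in
    the superimposed measurement grid of 0.1 m square pixels. The
    assignment is made once at the start of the simulation and is
    never updated, ignoring pixel-to-pixel movement."""
    col = int((x - x_min) / pixel_size)
    row = int((y - y_min) / pixel_size)
    return row, col
```

Each pixel then accumulates the damping work and contact softening of the particles and contacts assigned to it, giving a spatial map of damage intensity.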
Listing C.15 shows the FISH code used to execute the grid based measurement technique.
The functions included in this algorithm compute both grid based values and totals for
damping work and contact softening. Figure 4.13 is a flow chart depicting the process of
calculation. The algorithm can be described as having two parts, the initialization and the
calculation cycle. The initialization defines the necessary functions, grid, and memory arrays
for data processing and histories. The grid based calculation is executed at the beginning
of every PFC2D calculation step. During the grid based calculation, the function loops
through each particle and then each contact in the DEM in order to calculate values of
desired parameters. Then the data array is updated and the data histories are recorded.
Although values of the indicators are accumulating every step, histories are only recorded
once every 5000 calculation steps in order to reduce memory usage.
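The accumulate-every-step, record-every-5000-steps cadence can be sketched as follows (a Python stand-in for the FISH functions; the per-pixel increments are hypothetical):

```python
HISTORY_INTERVAL = 5000  # calculation steps between recorded history points

grid_totals = {}  # pixel -> accumulated indicator value (e.g. damping work)
histories = []    # (step, total) snapshots

def on_step(step, increments):
    """Accumulate per-pixel increments every step; record a history point
    only every HISTORY_INTERVAL steps to limit memory usage."""
    for pix, dw in increments.items():
        grid_totals[pix] = grid_totals.get(pix, 0.0) + dw
    if step % HISTORY_INTERVAL == 0:
        histories.append((step, sum(grid_totals.values())))
```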
4.4 SPCS Test Results
The SPCS test results are presented in the following section using line plots
to show the stress versus strain behavior of each pillar and the loading system displacement
during each test. Stability indicator results are presented and then analyzed in the context of
whether failure of the pillar is stable or unstable. Then the grid based indicator measurements
are presented and discussed.
4.4.1 Pillar Stress-Strain Behavior and Loading System Displacement
The stress-strain results from the nine pillar tests are organized into three plots. Each
plot contains three tests, showing stress-strain behavior of one width to height ratio tested
with 5, 20 and 35 GPa elastic modulus loading systems. Figure 4.14 shows stress-strain
behavior for the width to height ratio one pillar, Figure 4.15 shows stress-strain behavior
for the width to height ratio two pillar, and Figure 4.16 shows stress-strain behavior for the
width to height ratio three pillar.
Figure 4.14: Stress-strain curves for width to height ratio one pillar tests
Each curve shows how pillar stress increases during the loading phase of the tests and then
as the pillar fails, stress is dissipated. Pillar strength is dependent upon the pillar size and
the loading system stiffness. As pillar size increases the strength of the pillar increases, and
as loading system stiffness decreases the pillar strength decreases. In the post-peak region,
the post-peak modulus is dependent upon both the pillar size and loading system stiffness.
As pillar size increases, the post-peak modulus decreases and as loading system stiffness
increases the post-peak modulus increases. A significant change in post-peak behavior is
apparent in each of the 5 GPa tests, whereas the change in post-peak modulus from the 35 GPa to the 20 GPa loading system elastic modulus is more gradual.
The elastic displacement of the loading system in each of the nine tests is shown in
plots organized in the same way as the stress-strain plots. Figure 4.17 shows average loading
system displacement for the width to height ratio one pillar tests, Figure 4.18 shows average
loading system displacement for the width to height ratio two pillar tests, and Figure 4.19
shows average loading system displacement for the width to height ratio three pillar tests.
Loading system displacement increases during the loading phase of the tests and then de-
creases as the pillar fails. Displacement at the point of failure is higher when elastic modulus
of the loading system is low and increases as the pillar size increases. In the post peak region,
the 5 GPa tests show a fast decrease in loading system displacement, while tests with 20 and
35 GPa loading system exhibit a more gradual decrease in loading system displacement.
Figure 4.17: Loading system displacements for width to height ratio one pillar tests
Using data from the stress-strain and displacement plots, stability of the pillar failure
can be assessed. Similar to the EPC tests, sudden rebound of the loading system indicates
unstable failure. A sudden rebound of the loading system can be detected by comparing the
measured post-peak stiffness of the pillar to the loading system stiffness during failure. If
these two values are similar, unstable failure is assumed to have occurred.
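This comparison can be expressed as a simple predicate; the 20% similarity tolerance below is an assumption made for illustration, since the thesis judges similarity from the plotted stiffness values:

```python
def is_unstable(k_loading, k_postpeak, tol=0.2):
    """Assume unstable failure when the measured loading system stiffness
    is within `tol` (relative) of the pillar post-peak stiffness."""
    return abs(k_loading - k_postpeak) <= tol * abs(k_postpeak)
```

Coincident stiffness values flag instability; well separated values do not.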
Figure 4.20 shows measurements of loading system stiffness and pillar post-peak stiffness
during the failure interval for each test. The data are color coded according to loading
system elastic modulus. Calculated loading system stiffness is represented by × markers and pillar post-peak stiffness is represented by triangles. Generally, as loading system elastic modulus increases, the pillar post-peak stiffness and loading system stiffness increase. Stability of
the failure is determined by comparing the loading stiffness and post-peak stiffness for each
test. The 20 GPa and 35 GPa tests show a consistent difference between pillar behavior and
loading system stiffness measurements indicating stable pillar failure for all six tests. The 5
GPa tests show coincident values, indicating unstable failure for all three pillar sizes. Based
on these results, further analysis of indicators will assume stable failure for the 20 and 35
GPa tests and unstable failure for the 5 GPa tests.
Figure 4.20: Pillar post-peak stiffness and loading system stiffness measurements
4.4.2 SPCS Test Indicator Results
Results are presented here for each of the nine indicators during the nine SPCS tests.
Figure 4.21 shows cumulative and instantaneous mean unbalanced force versus calculation
Figure 4.22 shows damping work for the pillar strength tests. When unstable failure
occurs the damping work is markedly increased, while for both the stable failures of each
width to height ratio, the damping work is similar. Despite the normalization with respect
to both stress drop and pillar size, the damping work is higher for larger pillars. As with the
EPC tests, a larger amount of damping work suggests that the failures are more unstable or
in other words, more violent.
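The normalization itself is not spelled out here, so the sketch below simply assumes damping work is divided by the product of stress drop and pillar cross sectional area; treat the form as illustrative:

```python
def normalized_damping_work(w_damp, stress_drop, pillar_area):
    """Hypothetical normalization: damping work per unit stress drop
    and unit pillar cross sectional area."""
    return w_damp / (stress_drop * pillar_area)
```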
Figure 4.22: Damping work in pillar strength tests
Figure 4.23 shows the maximum instantaneous kinetic energy of the model during failure. Figure 4.24 and Figure 4.25 show the maximum instantaneous mean and maximum unbalanced
forces in the pillar strength tests, respectively. For each of these indicators the unstable
failures generally have higher values, although the difference between stable and unstable cases is not as pronounced as with the damping work. The maximum instantaneous maximum unbalanced force for the 20 GPa width to height ratio two tests is an outlier in this
trend. The maximum mean unbalanced force decreases for larger pillars. This likely is the
result of averaging over a larger number of particles. Each of these indicators only contains
model information for one calculation step, and therefore should be used with caution and
in conjunction with other indicators.
Figure 4.25: Maximum instantaneous maximum unbalanced force in SPCS tests
Cumulative values for kinetic energy, mean unbalanced force and maximum unbalanced
force are presented in Figure 4.26 through Figure 4.28. Cumulative values may express the
failure stability of the model better because information is contained from the entire duration
of failure. The cumulative kinetic energy describes the total amount of energy translated
into motion that was initially stored in the specimen and loading system as strain energy.
Figure 4.26 shows that the kinetic energy increases drastically for unstable failures while val-
ues for stable failures are grouped at a noticeably lower magnitude. The cumulative mean
unbalanced force in Figure 4.27 shows a similar behavior, except that the stable values are grouped
more closely. However, as model size increases, the number of elements over which the unbalanced force is averaged increases. Since unbalanced force is higher in the areas of damage, many elements have low unbalanced force; therefore, the mean unbalanced force decreases as
model size increases. The values for stable failures also decrease slightly as the model size
increases. The cumulative maximum unbalanced force in Figure 4.28 exhibits similar behav-
ior, but values for stable cases increase for larger pillars. This trend suggests that for larger
assemblies the maximum unbalanced force may not be able to clearly distinguish stable and
unstable failure, but additional testing of larger pillars would need to be performed to verify
this.
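The dilution of the mean unbalanced force with model size, noted above for Figure 4.27, can be illustrated numerically (the values are hypothetical):

```python
# A fixed amount of localized damage contributes less to the mean as the
# number of particles grows.
damaged = [5.0] * 50  # high unbalanced forces near the failure zone

def mean_unbalanced(n_total):
    quiet = [0.1] * (n_total - len(damaged))  # background unbalanced forces
    values = damaged + quiet
    return sum(values) / len(values)

small_pillar = mean_unbalanced(1_000)
large_pillar = mean_unbalanced(10_000)  # same damage, lower mean
```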
Figure 4.26: Cumulative kinetic energy in SPCS tests
Contact softening is shown in Figure 4.29. The cumulative amount of contact softening
describes the plastic displacement of the model on the contact level. Contact softening
distinguishes between stable and unstable failures in that stable failures for similar geometries
exhibit similar amounts of contact softening while the unstable failures display a larger
amount of contact softening. However, as pillar size increases for unstable failures, the amount of contact softening ceases to increase. Once again, additional tests on larger pillars
may reveal additional features to the trend.
The number of broken contacts normalized with respect to stress drop and pillar size
is shown in Figure 4.30. The number of broken contacts is the number of contacts that
have reached the softening limit. Broken contacts are typically thought of as cracks in DEM
models, but due to the softening component this definition is debatable. Regardless of the
definition of crack location, the location of a broken contact identifies a location of significant
damage in the model. The normalized number of broken contacts is consistently higher for
unstable failures and there exists a large gap between closely grouped stable failures and the
unstable failures. As with mean unbalanced force and softening indicators, the number of
Figure 4.29: Contact softening in pillar strength tests
broken contacts levels off as pillar size increases for unstable failures.
4.4.3 Grid Based Instability Indicator Results
The damping work and contact softening were measured using the grid based measurement technique shown in Figure 4.13. Figure 4.31 through Figure 4.34 show contact softening
and damping work for each of the pillar strength tests. Each image is produced from the
data at the last step of the failure interval used for the previous indicator analysis. A shaded
bar is provided with each image to indicate the range of values present. The upper end of the image value range is set to the maximum value detected in the grid. This way a comparison between tests can be made using both the local maximum
magnitude of the indicator and the pattern of the indicator.
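The per-image scaling can be sketched as follows (a Python stand-in; the thesis generates the images from the FISH grid data):

```python
def to_image(grid):
    """Scale a 2-D grid of indicator values to [0, 1] using the grid's own
    maximum, so each image shows both pattern and local peak magnitude."""
    vmax = max(max(row) for row in grid) or 1.0  # guard an all-zero grid
    return [[v / vmax for v in row] for row in grid]
```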
Figure 4.31, Figure 4.32, and Figure 4.33 each show that the maximum value of contact
softening increases as the elastic modulus of the loading system decreases. Also, maximum
contact softening increases as pillar size increases. A maximum of 1.4 meters is measured in the unstable width to height ratio three pillar. While cumulative values of contact softening
show a distinguishing difference in magnitude between stable and unstable failures, the local
Figure 4.30: Broken contacts in pillar strength tests
measurements do not express the same trend. Rather, local maximum contact softening is
higher in unstable failures than stable failures but not by a large enough degree to use with
confidence to distinguish stable and unstable failures.
The contact softening in each test shows that damage in the model occurs in planes that resemble shear planes. Both stable and unstable failures damage similarly insofar as planes of damage form in similar locations for similarly sized pillars. However, concentrations of contact softening are noticeable in the damaged areas of the unstable failures. This can best be seen by comparing the 35 GPa tests, which are the most stable, to the 5 GPa tests, which are unstable failures. Localization of failure along a plane would
contribute to higher values for individual grid pixels and could explain the trend of higher
local contact softening for unstable failures.
The maximum local damping work follows the trend previously demonstrated by cumu-
lative damping work in Figure 4.22. The maximum local damping work for the unstable
failures is noticeably higher compared to the stable failures. As the loading system elastic
modulus increases and the failures become more stable, the damping work decreases further.
Also, for stable failures the damping work is more distributed throughout the model. The
Figure 4.35: Damping work during unstable failure of width to height ratio three pillar with
decreased value range (kJ)
values for stable failures and then increases with the degree of instability. The ideal indicator
can perform both functions, to identify instability and to quantify the degree of instability.
According to the results in this chapter, cumulative values more completely describe the
failure and are more reliable than the maximum instantaneous values because of less vari-
ability in trends. The cumulative damping work and cumulative kinetic energy are superior
indicators because they identify unstable failure when compared to stable failures and give
a qualitative measure of the intensity of failure.
Other indicators are affected by the size of the model, such as cumulative mean unbalanced
force and contact softening. These indicators might be useful in conjunction with damping
work and kinetic energy to confirm instabilities. The performance of contact softening as a
viable stability indicator in the pillar tests was a surprising result given the highly variable
nature of contact softening in the EPC tests. Continued caution should be exercised when
using this indicator for anything but a damage indicator.
Grid based measurements showed that damping work and contact softening not only provide a picture of damage in the model but also support identification of instability by
cumulative indicators. The magnitude of local damping work, as tracked using the grid
based technique, can indicate instability when compared to stable failure. The localization
of damage, as shown with contact softening and damping work, further supports the deter-
mination of failure stability. During unstable failure, damage appears to localize along a
CHAPTER 5
UNSTABLE FAILURE IN AN IN SITU PILLAR MODEL
In underground coal mining, as areas are mined out, failure occurs on the edges of the
pillars, or ribs. As the excavated area increases and the pillar size decreases, the failed area
proceeds into the pillar. Stable or unstable failure of the rib material can occur while the
pillar as a whole retains load bearing capacity. In order for unstable failures to occur on the
rib of the pillar two conditions must be met. First, the material must fail, and second, the
loading system stiffness must be less than the material’s post-peak stiffness.
In this chapter, an in situ pillar (ISP) model is used to investigate failure of pillar ribs.
The model is first described in detail. Then the model is verified by comparing analytical
solutions to an elastic FLAC2D model and then to the ISP model. The coal material,
modeled using PFC2D, is mined using a realistic mining sequence and failure of the pillar
near the rib is observed. Stability indicators are used to distinguish between stable and
unstable failure, namely, damping work, kinetic energy and mean unbalanced force. Spatial
measurements of damping work and contact softening are then used to support the stability
indicator results.
5.1 Model Description
A two-dimensional mechanically coupled DEM/FDM model similar to that of the previous chapters is utilized. The geometry and load application scheme are modified in order
to simulate an in situ pillar panel under development and then further mining. The failing
material is modeled using the displacement softening model (DSM) and the surrounding
mine and pillar core is modeled in FLAC2D. In situ stresses are installed to simulate a deep
coal mining scenario and the DSM material is slowly removed in order to simulate the min-
ing process. As the DSM material is mined, installed stresses redistribute and cause failure
to occur. The failure is characterized by tracking stress measurements and damping work,
kinetic energy, and mean unbalanced force.
5.1.1 ISP Geometry, Boundary Conditions, and Material Properties
Figure 5.1 shows the geometry and boundary conditions for the in situ pillar model. The
grey and blue areas indicate FLAC2D zones of different grid types and the yellow region is
the PFC2D assembly. The blue area, labeled FLAC2D Inner Grid, is a fine grid comprised
of square zones one-fifteenth of a meter on each side. This fine grid is intended to achieve
a high resolution of stress measurement near the PFC2D assembly and to comply with the
recommended coupling boundary ratio of four to five PFC2D elements to one FLAC2D zone.
The grey area labeled FLAC2D Outer Grid is graded outward to increase computational
efficiency and adhere to memory constraints. Directly above and below the inner grid, the
grid is graded only vertically, retaining constant zone width. To the right of the inner grid,
the grid is graded only horizontally, retaining constant zone height. In the remaining areas
the grid is graded in both directions. Listing F.17 shows the FLAC2D grid generation file.
The dimensions of the model are symmetric about the vertical center of the PFC2D
part. The top, left and right boundaries are fixed with a roller boundary while the bottom
boundary is pinned. The red arrow indicates the direction of excavation and the black dashed
lines show the locations of two FLAC2D interfaces. The placement of the dashed lines is
exaggerated to indicate that the interfaces are located one zone width within the FLAC2D
grid. The PFC2D part is composed of twenty square pbricks stacked two high and ten wide.
The material properties and element size of the PFC2D material are the same as the DSM
used in the previous chapters and shown in Table 3.2. The only difference is the number of
pbricks in the horizontal direction. The FLAC2D material is divided in two sections. They
are the surrounding rock and coal regions, both are elastic. The FLAC2D coal spans the
same height of the PFC2D coal and extends from the right edge of the PFC2D part to the
right boundary of the model. The zones in the FLAC2D coal region are assigned a shear
modulus, bulk modulus and Poisson’s ratio corresponding to the DSM elastic modulus and
Poisson’s ratio shown in Table 3.4. The surrounding rock material is assigned bulk and shear
moduli corresponding to an elastic modulus of 35 GPa and a Poisson’s ratio of 0.25. The
constitutive model used for the interfaces is a Mohr-Coulomb elastic perfectly plastic model.
The assigned properties are given in Table 5.1. A zero degree friction angle and low cohesion
are assigned to simulate a slick interface with low strength.
Table 5.1: FLAC2D interface properties
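The interface yield condition is the standard Mohr-Coulomb form; with the zero degree friction angle assigned here, shear strength reduces to the (low) cohesion alone. A sketch with illustrative values:

```python
import math

def interface_shear_strength(sigma_n, cohesion, friction_deg):
    """Mohr-Coulomb shear strength: tau_max = c + sigma_n * tan(phi)."""
    return cohesion + sigma_n * math.tan(math.radians(friction_deg))

# With phi = 0, normal stress does not add strength (values illustrative).
tau = interface_shear_strength(sigma_n=1.0e6, cohesion=1.0e4, friction_deg=0.0)
```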
5.2 ISP Model Execution
The use of the discrete element method for modeling rock requires initial steps to generate
the material and install in situ stresses. For coupled applications, where a continuum model
performs the function of a surrounding rock, additional steps are needed to insert the PFC2D
material into its assigned region and bring the coupled system into equilibrium. During any
part of the initialization procedure, if unbalanced forces in the system are high, contact
bonds can be broken. Any damage inflicted upon the system at this stage is unrealistic and
should be minimized. Careful initialization of the model ensures that the expected DEM
material properties are retained.
Unlike the slender pillar model of the previous chapter, the in situ pillar model utilizes
in situ stresses for load application. During initialization, the free mining face creates an
opportunity for high unbalanced forces to destabilize the DEM system. In order to prevent
unnecessary damage to the system, the stresses are installed in the PFC2D part separately,
then the mechanical coupling is initialized and the coupled model is equilibrated with in-
stalled stresses. Then the left boundary of the PFC2D part is slowly released to create a free
mining face with minimal initial damage. Following model initialization, the PFC2D mate-
rial is deleted in thin slices to simulate the mining process. The following sections provide a
Figure 5.2: PFC2D stress installation screen shot, 16 MPa vertical stress target
execute the equilibration process in FLAC2D and Listing F.20 and Listing F.21 are used
for PFC2D. Additional custom files necessary for the in situ pillar (ISP) model runs in this
chapter are shown in Listing F.22 through Listing F.26.
The final step in initializing the coupled model is to remove the pressure boundary on
the PFC2D part and bring the model to equilibrium. Listing F.27 shows the sequence of
commands to initiate this process in FLAC2D and Listing F.28 shows the commands for
PFC2D. The functions needed to excavate material and bring the model to equilibrium automatically are shown in Listing F.25 and Listing F.26 for FLAC2D and PFC2D respectively.
The pressure must be removed gradually so that minimal damage is imposed due to sud-
den deconfinement. A pressure reducing function performs this task. Calling the function
bdry loop in both FLAC2D and PFC2D starts a pressure reduction process in which the
pressure is reduced incrementally and brought to equilibrium after each reduction step until
the pressure is reduced to zero. Then, finite strength is restored to the PFC2D element
contacts.
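The incremental pressure release can be summarized schematically in Python (the actual driver is the FISH function bdry loop; the increment count and pressure are illustrative):

```python
def release_pressure(p0, n_increments, equilibrate):
    """Reduce the boundary pressure in equal increments, cycling the
    coupled model to equilibrium after each reduction, until zero."""
    p, dp = p0, p0 / n_increments
    while p > 0.0:
        p = max(0.0, p - dp)  # reduce boundary pressure one increment
        equilibrate()         # bring the coupled model to equilibrium
    return p
```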
5.2.3 Excavation
Excavation then begins by calling the function slice loop in both FLAC2D and PFC2D.
Excavation proceeds by deleting elements to the left of an advancing mining face position.
Once a selection of elements is deleted, the model is cycled until equilibrium is achieved.
Then the face position moves forward one mining increment. The mining increment used
in this study is equal to the average element diameter. The single layer of FLAC2D zones
adjacent to the PFC2D model and under the interface are deleted as the mining face passes
by. The model is saved at 0.5 m mining face advance increments until the mining distance
limit is reached. In this study, eight of the ten meters of PFC2D material are mined.
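The excavation loop can be summarized schematically (the model's driver is the FISH function slice loop; the 0.04 m mining increment below is an assumed stand-in for the average element diameter):

```python
MINING_INCREMENT = 0.04  # m, assumed average element diameter
SAVE_INTERVAL = 0.5      # m of face advance between saved states
LIMIT = 8.0              # m, total mining distance

def excavate(delete_slice, equilibrate, save):
    """Delete elements ahead of the face, equilibrate, and save periodically."""
    n_steps = round(LIMIT / MINING_INCREMENT)
    next_save = SAVE_INTERVAL
    for i in range(1, n_steps + 1):
        face = i * MINING_INCREMENT
        delete_slice(face)   # remove elements left of the face position
        equilibrate()        # cycle the coupled model to equilibrium
        if face >= next_save - 1e-9:  # tolerance for float comparison
            save(face)
            next_save += SAVE_INTERVAL
```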
5.3 Model Verification
To verify the performance of the coupled in situ pillar model, first, analytical solutions
for closure of a tabular excavation and associated abutment vertical stress are compared to
a FLAC2D model. Then closure and abutment stress is compared between elastic versions
of the coupled in situ pillar model and FLAC2D. Closure of the excavation and abutment
vertical stresses are compared for a tabular excavation span of 6 m at 630 m depth with a
lateral earth pressure coefficient of K_E = 0.3 and a surrounding rock elastic modulus of 35 GPa.
5.3.1 FLAC2D Measurements
The stresses and displacements are captured in each model using FLAC2D zones and
grid points respectively. Figure 5.3 shows the FLAC2D grid for the ISP model at the final
excavation stage. This figure includes an inset of the FLAC2D grid adjacent to the PFC2D
part of the model. The inset shows the locations of zones used for stress measurement and
grid points used for displacement measurement in the roof. In order to plot vertical stress
as a function of position, stress is averaged between the two zones adjacent to a grid point at a
given position x. While only mine roof measurement locations are shown, a mirrored scheme
is used for the mine floor. The vertical stress presented for comparison is the average of the
roof and the floor stress for each x position. Closure is the sum of roof and floor grid point
displacement for a given position, x, where displacements toward the excavation are positive.
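These reductions amount to simple sums and averages; a Python sketch with hypothetical roof and floor histories:

```python
# Hypothetical grid-point displacements toward the opening (m), keyed by x.
roof_disp = {1.0: 0.004, 2.0: 0.005}
floor_disp = {1.0: 0.003, 2.0: 0.004}

def closure_at(x):
    """Closure = roof + floor displacement (positive toward the excavation)."""
    return roof_disp[x] + floor_disp[x]

def stress_at(zone_left, zone_right):
    """Vertical stress at a grid point: average of the two adjacent zones."""
    return 0.5 * (zone_left + zone_right)
```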
Figure 5.3: ISP model FLAC2D grid measurement locations
5.3.2 Closure and Vertical Stress in FLAC2D
Equation 5.1 is the closure of a tabular excavation at a distance x_e from the center of the width of the excavation, where the span is 2l_e, the in situ vertical stress is σ_v, and the surrounding rock has shear modulus G and Poisson’s ratio ν. Figure 5.4 shows a rectangular excavation in which the dimensions are labeled and the closure is demonstrated. This solution assumes plane strain conditions, that the closure at the edge of the excavation is zero, and that the extent of the rock in the vertical and horizontal directions is infinite. Abutment vertical stress (σ_z) a distance x_a into the abutment is given by Equation 5.2 (Ozbay [62]).

s = (2σ_v(1 − ν)/G) √(l_e² − x_e²)    (5.1)
σ_z = σ_v x_a / √(x_a² − l_e²)    (5.2)
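Equations 5.1 and 5.2 can be evaluated directly; the sketch below uses the verification parameters (6 m span, 16 MPa vertical stress, G derived from E = 35 GPa and ν = 0.25):

```python
import math

def closure(x_e, l_e, sigma_v, G, nu):
    """Closure s of a tabular excavation (Equation 5.1)."""
    return 2.0 * sigma_v * (1.0 - nu) / G * math.sqrt(l_e**2 - x_e**2)

def abutment_stress(x_a, l_e, sigma_v):
    """Abutment vertical stress sigma_z (Equation 5.2)."""
    return sigma_v * x_a / math.sqrt(x_a**2 - l_e**2)

G = 35.0e9 / (2.0 * (1.0 + 0.25))  # shear modulus from E and nu
s_center = closure(0.0, 3.0, 16.0e6, G, 0.25)  # closure at mid-span (m)
```

Note that the abutment stress tends to the virgin value σ_v as x_a grows large, matching the far-field behavior in Figure 5.7.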
The tabular excavation is modeled in FLAC2D using a vertical line of symmetry about
the center of the excavation. Roller boundary conditions are applied to the grid edges and
the rib is fixed. A vertical stress of 16 MPa is applied, and a lateral earth pressure coefficient of K_E = 0.3 is used to generate the corresponding horizontal stress.
(a) Excavation dimensions
(b) Excavation closure
Figure 5.4: Static deformation of a tabular excavation in elastic medium
Figure 5.5 shows the comparison of closure for the analytical and FLAC2D solutions.
The difference between the FLAC2D and analytical closure at the center of a 6 meter wide
excavation is approximately 0.01% of the excavation height (2m). Figure 5.6 and Figure 5.7
show the vertical stress comparison for the FLAC2D and analytical solutions near the rib
and for the full width of the coal respectively. Figure 5.6 shows that the FLAC2D model
experiences a sharp increase in stress at the rib similar to the analytical solution, but not
as high in magnitude. Figure 5.7 includes two horizontal lines indicating the virgin stress
magnitude at 16 MPa. As the distance from the rib increases, the vertical stress decreases
to meet the virgin stress value.
Figure 5.7: Vertical stress comparison between FLAC2D and analytical solution, full model
span
Figure 5.8 shows closure for the FLAC2D and coupled ISP models for a 6 m by 2 m
tabular excavation. In these models, the rib displacement is not fixed, so the value of closure
at the rib is non zero. The closure values for each model are in close agreement indicating
that the DSM material satisfactorily simulates elastic displacement due to a given stress
field.
Figure 5.9 and Figure 5.10 show vertical stress in the FLAC2D and coupled ISP models
near the rib and for the full span of the coal respectively. Near the rib, vertical stress values
are similar for each model for respective depths. Each result exhibits a stress increase at the coal rib. The coupled ISP model shows an unexpected decrease in stress at the rib, where the FLAC2D model does not show this decrease. This is due to larger horizontal displacement of the rib in the coupled ISP model, which results in slight rib destressing and, therefore, inward movement of the stress concentration.
Figure 5.10 shows the full span of the coal material. The plot terminates just after 16
meters because stress is the average of two adjacent zones. Because of zone gradation, the
zone adjacent to the grid point near x equal to 16 meters is nearly 4 meters in width. The
dotted lines in Figure 5.10 show the installed virgin stress state. As distance from the rib
increases, vertical stress approaches these values for each depth. The irregular stress pattern,
as measured in the FLAC2D zones, is due to roughness of the PFC2D assembly. This stress
pattern is periodic as a result of using identical pbricks to create the PFC2D assembly.
Figure 5.8: Closure comparison between FLAC2D and coupled ISP model solutions
5.3.3 Effect of the Coupling Boundary on Vertical Stress
Figure 5.10 shows a stress increase in the coupled ISP model near the right coupling
boundary. The stress increase is due to the mismatch of material behavior at the coupling
boundary and can influence results and their interpretation if the area of interest is near
this boundary. In order to examine this effect in greater detail, elastic simulations for various
entry widths are performed. The model is initialized, then the PFC2D material to the left of
the appropriate excavation boundary is deleted, and the model is cycled until equilibrium is achieved. Roof stress is then analyzed, as opposed to average roof and floor stress, so that detail in stress changes can be seen.
Figure 5.11 shows vertical stress in the roof for various entry widths. The right coupling
boundary is located at 10 meters. Comparing the rib stress for different entry widths shows
an incremental increase at all locations. At an entry width of 16 meters, the mismatch in material behavior results in a lower than expected rib stress and higher than expected abutment stress. Therefore, if the area of interest is within two to three meters of the coupling
boundary, analysis of model output should consider the effect of the coupling boundary.
Figure 5.11: Vertical roof stress for various entry widths
5.4 Results
Results are presented here for the ISP model with inelastic DSM material. As the exca-
vation proceeds material fails at the rib due to the redistribution of stress. Stress profiles
reveal the extent of failure by denoting the location of maximum stress for a given entry
width. Damping work, kinetic energy, and mean unbalanced force are utilized to detect
occurrences of unstable failure. Then, grid based, spatial measurements of contact softening
and damping work are used to support identifier and stress results.
5.4.1 Zone Stress Measurements
Roof stress profiles identical to Figure 5.11 are shown in Figure 5.12. As the excavation
widens, high stresses on the edge of the material cause failure and redistribution of vertical
stress inward, towards the unfailed portions of the pillar. As the entry widens, more stress
must be carried by the pillar, so the magnitude at the point of maximum stress increases for
successive excavation steps.
Figure 5.12: Deep depth roof stress profiles for various entry widths
The degree of failure at specific instances during the excavation process can be determined
from the stress profile in Figure 5.12. The degree of failure is indicated by comparing the
difference in vertical stress to the location of maximum stress. A low value of residual stress
on the pillar rib compared to the maximum stress indicates extensive failure, whereas a greater amount of residual stress on the pillar rib would indicate that the material is still capable of bearing load and therefore has been damaged to a lesser degree.
The degree of failure can be quantified by calculating the gradient of vertical stress from
the pillar rib to the point of maximum stress. The vertical stress gradient is calculated by
dividing the change in stress from the rib to the point of maximum stress by the distance
between the rib and the x position of maximum stress. Figure 5.13 shows the rib stress
these plots, the vertical lines are the excavation steps, when a layer of elements is deleted.
Since these plots are showing results from the same model, these mining steps are identical
in each plot. The spacing between the excavation steps corresponds to the number of steps required to bring the simulation to equilibrium. There are 158 excavation steps in the simulation as the mining face advances from x equals 4 to 6 meters. The lines showing the time step of mining each have the same width. The appearance of thicker lines indicates several mining steps occurring close to one another.
It can be seen in Figure 5.14 that two particular mining steps result in significant increases
in the cumulative damping work performed. The first is at approximately 6.8 × 10⁶ steps
and the other is near 8.4 × 10⁶ steps. In Figure 5.15, significant increases in the instantaneous
kinetic energy are present during these steps, and in Figure 5.16, large amounts of
unbalanced force are present. As compared to other mining steps, the number of steps required
for equilibration and the increase in identifier value signify that unstable failures of the rib
occurred at these locations.
Despite the fact that the mining increments are very small in distance, it is rational to
assume that some instability results from the removal of elements. By comparing
a typical stable mining step and the first unstable mining step, a significant difference in
identifier behavior emerges. The stable step chosen is when the mining face is at x equals
4.709 meters, at timestep 6.678 × 10⁶. The unstable mining step is when the mining face is
at x equals 4.772 meters, at timestep 6.784 × 10⁶. Figure 5.17 shows the damping work in
the stable and unstable mining steps. The damping work accumulates during both mining
steps, although the increase in damping work in the unstable case is an order of magnitude
higher than in the stable case. As seen in the SPCS tests, a large relative increase in damping
work can indicate unstable failure.
By plotting the incremental increase of damping work versus the mining extent in meters,
the magnitude of damping work performed during each mining step can be clearly compared.
Figure 5.18 shows the amount of damping work performed between excavation steps.
(a) Stable (b) Unstable
Figure 5.17: Damping work during stable and unstable mining steps
Typically, the amount of damping work performed during a mining step is below 5 kJ, but the
damping work at the two unstable mining steps at 6.8 and 8.4 million calculation steps
corresponds to the high values near 4.8 meters and 5.65 meters. Figure 5.18 also reveals a
second tier of instability, with intensities between 5 and 20 kJ. It should be noted that this
energy value is numerical in nature and should not be considered the amount of energy
associated with a real unstable failure. Additional work must be undertaken to assess the
accuracy of energy calculation using the ISP model.
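The step-by-step comparison can be automated by differencing the cumulative damping work recorded at the end of each equilibration stage. A minimal sketch follows; the 5 kJ and 20 kJ thresholds echo the tiers noted above, while the function name and sample values are hypothetical:

```python
def classify_mining_steps(cum_damping_work, stable_max=5e3, quasi_max=20e3):
    """Classify each mining step from the cumulative damping work (J)
    recorded at the end of each equilibration stage. Thresholds are
    illustrative, not values prescribed by this study."""
    labels = []
    for prev, curr in zip(cum_damping_work, cum_damping_work[1:]):
        increment = curr - prev               # work performed during this step
        if increment < stable_max:
            labels.append("stable")
        elif increment < quasi_max:
            labels.append("quasi-stable")
        else:
            labels.append("unstable")
    return labels

# Hypothetical cumulative damping work after successive mining steps (J)
w = [0.0, 2.0e3, 4.5e3, 14.0e3, 16.0e3, 120.0e3]
labels = classify_mining_steps(w)
# increments: 2 kJ, 2.5 kJ, 9.5 kJ, 2 kJ, 104 kJ
```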
The kinetic energy and mean unbalanced force also exhibit a difference in magnitude
between stable and unstable cases. Beyond the change in magnitude, however, another
revealing difference between the stable and unstable cases appears in the kinetic energy and
mean unbalanced force. Figure 5.19 shows the instantaneous kinetic energy in the unstable
and stable mining steps and Figure 5.20 shows the instantaneous mean unbalanced force.
For both cases, kinetic energy and mean unbalanced force increase immediately after the
elements are deleted. In the stable step, the model equilibrates steadily, as shown by a
steadily decreasing identifier value. In the unstable step, there is a secondary increase in
indicator value unrelated to the initial deconfinement due to mining. This failure results
from the mechanism for instability in which excess energy stored in the loading system is unable to
indicator trend. Region B indicates stable mining, Region A indicates quasi-stable mining,
and Region C indicates an unstable failure. Figure 5.18 shows that the damping work
magnitudes performed during the stable, quasi-stable, and unstable steps are in agreement
with the failure stability interpretation from Figure 5.14, Figure 5.15, and Figure 5.16.
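The secondary-increase signature described above could be detected automatically from a sampled kinetic-energy history. The routine below is an illustrative post-processing sketch, not part of the model code; the histories and the settle fraction are hypothetical:

```python
def has_secondary_increase(ke_history, settle_fraction=0.5):
    """Flag a renewed rise in instantaneous kinetic energy after the
    initial excavation transient has begun to settle."""
    # Initial transient peak: first local maximum in the history.
    i_peak = len(ke_history) - 1
    for i in range(1, len(ke_history)):
        if ke_history[i] < ke_history[i - 1]:
            i_peak = i - 1
            break
    peak = ke_history[i_peak]
    settled = False
    for ke in ke_history[i_peak + 1:]:
        if ke < settle_fraction * peak:
            settled = True              # equilibration under way
        elif settled and ke > peak:
            return True                 # secondary rise exceeds the transient
    return False

# Hypothetical kinetic-energy samples following an excavation step (kJ)
stable_hist = [1.0, 5.0, 3.0, 1.5, 0.6, 0.3, 0.1]
unstable_hist = [1.0, 5.0, 2.0, 0.8, 4.9, 6.0, 2.0, 0.5]
```

A stable step decays monotonically toward equilibrium, while the unstable step shows a second rise after the model had begun to settle.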
5.4.4 Grid Based Measurements
The previous section showed that stability indicators could detect the mechanism for
unstable failure and revealed a distinct difference between an example of a stable mining
step and an unstable failure resulting from a single mining step. By tracking the damping
work and contact softening using the grid based measurement technique, the damage and
instability in the model during each of these steps can be observed spatially.
Figure 5.21 shows the damping work before and after the stable and unstable mining
steps. Each of the images shows the rib region of the model, from x equals 4.8 to 7 meters.
For comparability, each of the images is shaded according to the same scale. The scale used,
from 0 to 160 J, is shown next to the image of the state before the stable mining step. Each
of the images shows a parabolic shaped area of failure. There are subtle increases in the
magnitude of damping work as mining progresses from the beginning of the stable mining
step to the beginning of the unstable mining step. The image showing the state after the
unstable mining step indicates that a large amount of instability occurs along the outer edge
of the parabolic damage region. Figure 5.22 shows the damping work after the unstable
mining step with the scale reset to resolve the larger pixel values. This image reveals a
possible new failure surface further into the pillar than the innermost failure surface seen
before the unstable mining step. Also, the magnitude of the damping work performed along
the new failure surface far exceeds the values seen in the unstable mining step in Figure 5.21.
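The grid based measurement can be sketched as binning per-particle damping-work increments into fixed cells. The function and values below are illustrative and do not reproduce the FISH implementation used in the model:

```python
def accumulate_grid(work_items, x0, y0, cell, nx, ny):
    """Bin per-particle damping-work increments (x, y, dW) into a fixed
    measurement grid so damage can be mapped spatially."""
    grid = [[0.0] * nx for _ in range(ny)]
    for x, y, dw in work_items:
        ix = int((x - x0) / cell)
        iy = int((y - y0) / cell)
        if 0 <= ix < nx and 0 <= iy < ny:
            grid[iy][ix] += dw          # cumulative work in this cell
    return grid

# Hypothetical increments near the rib (x, y in m, dW in J)
items = [(4.9, 0.1, 40.0), (4.9, 0.1, 25.0), (5.65, 0.3, 120.0)]
g = accumulate_grid(items, x0=4.8, y0=0.0, cell=0.2, nx=11, ny=5)
```

Shading each cell by its accumulated value produces images like Figure 5.21 and Figure 5.22.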
Figure 5.23 shows the contact softening for the stable and unstable mining steps. The
shading scale is common to each image and is given next to the image depicting the model
state before the stable mining step. The maximum value is set to 0.5 m of contact softening
to highlight the pattern of damage. As mining progresses the softening of the rib exceeds
the maximum value on the scale and therefore the outermost pixels are white. The dashed
line adjacent to each image denotes the edge of the measurement grid at x equals 4.8 meters.
Figure 5.23: Contact softening in the rib during the ISP test
Figure 5.23 shows that damage has accumulated near the rib in the form of planes of
failure which extend from the rib corners inwards, toward the vertical centerline of the
pillar. Before the stable mining step, one such failure plane is depicted along with a region
of damaged material near the central part of the rib. After the excavation of the stable mining
step, damage accumulates in these two areas. In subsequent mining steps, the material at
the rib softens but there is no significant accumulation of damage along the failure plane, as
seen in the state of the model before the unstable mining step. After the unstable mining
step, the contact softening shows the formation of a new failure plane.
CHAPTER 6
CONCLUSIONS AND FUTURE WORK
To summarize the work in this thesis briefly: first, an appropriate DEM model for studying
compressive failure stability was chosen. Then a series of model behaviors were defined
to use as indicators of failure stability. These were evaluated during a series of pillar strength
tests and the most appropriate indicators were identified. Then the failure of an excavation
loaded under in situ mining conditions was investigated using the indicators on a global and
also a localized basis. In general, the numerical models behaved acceptably for the purpose
of studying unstable compressive failure in western U.S. coal, and the methods used to
distinguish between stable and unstable failure were successful. The following is a concise list
of conclusions on a chapter by chapter basis, followed by a list of suggested future work and
some additional research questions inspired by this work.
Chapter Conclusions
Ch 2. Background Information on Unstable Failure in Underground Coal Mining
• A need exists to ensure failure stability in deep underground western U.S. coal mines
due to the high probability and potentially severe consequences of unstable failure in
these mines.
• A theoretical background for the mechanism of unstable compressive failure in brittle
rocks exists, but additional work is needed to include stable and unstable failure modes
in mechanistic numerical studies.
• The DEM code PFC offers features such as emergent rock-like behaviors and an implicit
time stepping solution scheme that allows for multi-stage simulation of unstable failure.
• To the best of the author's knowledge, no previous work successfully simulates unstable
compressive failure using a discrete element model.
Ch 3. Evaluation of Two DEM Models for Simulating Unstable Failure in Compression
• The BPM and DSM described in Chapter 3 are two discrete element models that are
capable of simulating a western United States coal with post-peak softening behavior
for the purpose of studying compressive failure stability.
• Currently, the BPM requires a 'black-box' type of computer algorithm to determine
microparameters, because the combination of parameters necessary to define characteristic
post-peak behavior is not known.
• The DSM requires an iterative calibration that can be conducted manually. The key
microparameter influencing the post-peak softening behavior is the contact plastic
softening limit, U_pmax.
• Triaxial test results revealed a lower than desired friction angle for the BPM, an
expected result. The DSM triaxial tests showed a higher than desired friction angle.
• The DSM showed more consistent behavior than the BPM during the failure stability
(EPC) tests, in that its post-peak behavior remained consistent for stable failures.
Furthermore, the transition from stable to unstable failure mode with various loading
system stiffnesses was more defined with the DSM, while the BPM exhibited a fairly
large quasi-stable region.
• The BPM exhibits a clear dependency of post-peak softening on loading rate. For
lower loading rates, the BPM post-peak stiffness increases in magnitude.
• The DSM exhibits an effect of loading rate on post-peak behavior; however, the general
softening characteristic of the material is retained.
• The DSM is a more appropriate DEM than the BPM for studying failure stability,
based on its consistency of behavior in stable and unstable failure modes and the
independence of its post-peak softening from loading rate.
Chapter 4. Indicators of Unstable Compressive Failure in DEM Coal Strength Tests
• Cumulative indicators better represent the failure of the model because they embody
information from the entire failure rather than from one calculation step, as was the
case with maximum instantaneous values. Trends of instantaneous values also indicate
the behavior in the model.
• In both the EPC tests and the SPCS tests, the damping work and kinetic energy
differentiated between stable and unstable failure and provided a qualitative indication
of the magnitude of failure.
• Some indicators are affected by the size of the model, as shown in the SPCS tests. The
mean unbalanced force, for example, appears to decrease as model size increases, so
this indicator should be used in conjunction with damping work and kinetic energy.
• The contact softening indicator does not clearly distinguish between stable and unstable
failure when analyzed globally. However, this indicator could be used to provide
information on the location and extent of damage in the model.
• Grid based measurements of damping work and contact softening showed higher local
values for unstable failures and similar values for stable failures, and depicted the
failure patterns in the models.
Chapter 5. Indicators of Unstable Compressive Failure in an In Situ Pillar Model
• The stress gradient after arbitrarily chosen mining steps suggested an increased
possibility of unstable failure as the excavation is expanded to exceed four times the
excavation height.
• When instabilities occur, an increase in indicator value occurs that is independent of
the initial removal of elements. The mean unbalanced force, damping work, and
kinetic energy indicate two significant unstable failures, with increased magnitudes of
values after the excavation width is increased beyond four times the seam height.
• Plots of damping work, kinetic energy, and mean unbalanced force versus step show
that fewer steps are required to equilibrate the model during stable mining. A larger
number of steps is needed to equilibrate the model when unstable failure occurs.
• As revealed by the grid based measurements, single mining steps can result in the
initiation of significant unstable failures.
Future Work
The DEM models and stability indicators in this thesis are applicable to investigating specific
mechanisms of unstable failure and the conditions that influence them. By changing existing
model parameters, the identifiers can be used to study the effect that mine conditions
have on the intensity and frequency of unstable failures. In this context, additional numerical
analysis should be conducted on the following topics:
• The effect of the coal/mine contact condition
The FLAC2D part of the model in Chapter 5 contains an interface with Mohr-Coulomb
strength properties and perfect plasticity. This interface is intended to simulate
the contact condition between the coal and a competent adjacent rock. While it is
difficult to determine the actual material properties of discontinuities in mines, the effect
of different idealized discontinuity behaviors on unstable compressive failure can be
evaluated. For example, using the perfectly plastic Mohr-Coulomb interface, the
strength of the discontinuity could be changed to simulate various levels of horizontal
confinement on the coal due to contact conditions. By improving the constitutive law
of the interface, the effect of unstable slip along it could be considered. The unstable failure
researcher Gu demonstrated that a discontinuity with a softening post-peak behavior
can simulate stable and unstable slip, analogous to the compressive failure stability criteria.
By modifying the discontinuity plasticity law in FLAC2D by means of a user defined
FISH function to include a softening post-peak behavior, the effect of unstable slip on
compressive unstable failure could be studied [28].
• Depth of the mine
It is widely agreed that unstable failure is more probable as the depth of the mining
activity increases. By initializing a series of ISP models at different depths, the effect
of depth on the frequency and intensity of unstable failure can be evaluated.
• Various types of coal
The post-peak behavior of coal is kept constant throughout this study. Using the DSM,
coal materials with different levels of brittleness can be calibrated and tested under
similar conditions. Both EPC and ISP tests on these coals could serve to confirm
Cook's stiffness stability criterion on a theoretical basis.
• Mining rate
In this thesis, the criterion for model equilibrium is set to simulate the onset of static
equilibrium. The mining rate in actual coal mines is known to affect mine stability
[90].
• Pillar design schemes
Various combinations of pillar sizes are used to offer support in gateroad entries in
longwall coal mining. The ISP model provides an opportunity to study the effects of
pillar design and pillar loads on stability in these entries.
Additional Research Questions
• Failure localization
An interesting result that arose from this study is that of failure localization due
to unstable failure. In chapters four and five, the grid based measurements of damping
work and contact softening showed a more dispersed type of failure, resembling crushing,
for stable failures, and localized failure along planes, resembling shear bands, for
unstable failures. This behavior is more prevalent in BPM models as compared to DSM
models [49]. This result suggests that a different failure mechanism is in effect when
failure is unstable. DEM models hold promise in studying the failure mechanism due
to their micromechanical nature. Effects of model properties on failure pattern, such
as particle assembly, particle size, and contact and bonding models, should be systematically
tested to determine the nature of failure localization in DEM. If not already
sufficiently performed, laboratory testing could reveal whether there is a physical analogue.
• Alternatives to the DSM for studying unstable failure
The BPM has been used widely to simulate the failure of rock because it exhibits
physical properties of rock, such as increased strength with confinement and the Poisson
effect, without the explicit assignment of such properties [68][5]. The difficulty of
calibrating the velocity dependent post-peak behavior in the BPM, in part, led to
the selection of the DSM for the work in this thesis. The DSM exhibits undesirable
properties, such as an unrealistically high friction angle and high Poisson's ratio. However,
the ease with which post-peak softening is calibrated is key to simulating unstable
behavior under in situ loading conditions. Improvements should be made to these existing
models to cope with their respective drawbacks by closely examining the effect of
contact laws on post-peak behavior. A simple comparison between the BPM and DSM
suggests that some form of softening behavior must be in action at the contact level.
Alternatives to these contact models should be thoroughly reviewed and the possible
usage of other numerical methods should be considered.
cp_buf(2) = mv_H - 2.0 * md_ravg   ; rectangle height
cp_bufn = 2
cp_write
;
cp_bufn = 3
cp_read
cpp_nseg = cp_buf(1)
cpp_yoff = cp_buf(2)
cpp_rad = cp_buf(3)
cpp_nseg0 = cpp_nseg
;
; cpp_excavate
slp_init_EPC
slp_getwhat = 0 ; coords
slp_getlist
cbi_init
;
command
  SET dt dscale   ; Assumes that FLAC is running in static-mode (default).
                  ; By making PFC2D also run in static-mode, we insure
                  ; that the displacements during one step in each code
                  ; will be the same. Static-mode means timestep of unity,
                  ; so velocities have units of [meters/step].
  SET fishcall 0 cpp_getvel
  SET fishcall 3 cpp_putfor
  SET fishcall #FC_BALL_DEL cpp_delball
  SET fishcall #FCNEWQUIT cpp_delballremove
end_command
;
cpp_coupled_view
oo = out('*** Coupling scheme successfully initialized.')
end
;--------------------------------------------------------------------
def cpp_cyc_calm
;
; ----- Controls synchronous cycling between PFC2D and FLAC. PFC2D is the
;       controlling process such that when FLAC is in slave-mode, calls to
;       [cpp_cyc] from PFC2D will force both codes to take one step.
;       The coupling scheme assumes that cycling occurs only by calling
;       [cpp_cyc] from PFC2D. DO NOT ISSUE CYCLE COMMANDS DIRECTLY AND
;       DO NOT TYPE ESCAPE WHILE CYCLING IN EITHER CODE!
;
APPENDIX B - INVESTIGATING THE EFFECT OF INTERNAL STRAIN ENERGY
ON POST-PEAK BEHAVIOR OF THE BPM AND DSM
In response to the results of the LRC tests, an additional study was conducted in order to
test a possible explanation for the velocity dependent post-peak behavior. The interaction
between internal strain energy and strength increase due to confinement could possibly lead
to the observation of different levels of characteristic softening under different loading rates.
Stored strain energy accrued during elastic deformation might lead to instability in the post-
peak region of the DEM material if the post-peak characteristic is such that the strain energy
cannot be absorbed during failure. At the moment of failure, this loading condition would
lead to sudden failure with no post-peak softening. Although, if the loading rate is high, the
amount of work added to the system each step may exceed the amount of energy released.
If so, a higher material strength in the post-peak region could be observed due to increased
confinement. In other words, a post-peak softening characteristic would be observed that is
not a characteristic property of the material, but dependent upon the loading condition.
Three specimens are subjected to a modified UCS test to investigate the effect of elastic
strain energy on DEM stability in the post-peak region. Here, a UCS test is conducted as in
Chapter 3.4, but the specimen is loaded just beyond failure and loading is halted when the
vertical stress on the specimen is ninety-five percent of the strength. At this point the
model is cycled in order to determine the stability of the specimen. If no change in stress occurs
then the specimen is stable; if the stress continues to drop then the specimen is unstable.
The three specimens tested are the two DEM specimens from Chapter 3 and a recalibrated
DSM with a steeper post-peak softening curve. First, the results of the tests on the DEMs
from Chapter 3 are discussed, then the results for the two DSM tests are discussed.
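The halt-and-cycle check can be summarized as a comparison of the specimen stress at the halt point with the stress after further cycling. The sketch below is illustrative only; the stress histories and the tolerance are hypothetical, not measured values from these tests:

```python
def classify_after_halt(stress_history, tol=0.02):
    """Classify specimen stability after loading is halted: compare the
    final stress with the stress at the halt point. A fractional drop
    larger than `tol` indicates instability."""
    halt_stress = stress_history[0]      # stress when loading stops
    final_stress = stress_history[-1]
    drop = (halt_stress - final_stress) / halt_stress
    return "stable" if drop <= tol else "unstable"

# Hypothetical stress samples (MPa) recorded while cycling after the halt
dsm = [23.8, 23.6, 23.5, 23.5]   # slight drop, then holds: regains stability
bpm = [23.8, 20.0, 14.0, 6.0]    # continued drop: stability not regained
```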
Figure B.1 shows the stress versus step curves for the modified UCS test on the two DEMs
from Chapter 3. The black dot on each line designates the point where loading is halted.
The DSM model shows a slight decrease in stress, indicating a brief period of instability. The
BPM model shows a greater decrease in stress. The small amount of instability in
the DSM model reflects the material's ability to regain stability after some energy release,
whereas the instability of the BPM specimen shows that stability is not regained.
Figure B.1: Stability test stress-strain curves for Chapter 3 DEM models
The stability concept tested in Chapter 3 using the EPC tests states that if the energy
stored within the loading system at the point of failure cannot be absorbed by the specimen
then the failure will be unstable. During unstable failure, the characteristic post-peak be-
havior of the material is hidden, and the strain measurement taken at the platen-specimen
boundary reflects the rebound of the platens. In the case of the BPM tested here, when
loading is halted, work is no longer done on the system by the loading mechanism. If the
failure is unstable, another source of energy must be acting on the system. Elastic strain
energy in the BPM specimen could cause unstable failure if the specimen is not able to
dissipate all of the stored energy during the failure process.
DEM material is not perfectly linear in the elastic region or in the post-peak region.
However, a simplification of the behavior is useful in illustrating a possible mechanism for
failure stability of DEM specimens under rigid loading conditions. Figure B.2 shows a
schematic of a stress-strain curve where the elastic region and post-peak region are made
linear for simplification. The hatched area, U_E, is the stored elastic strain energy up to the
point of failure. The unhatched region, U_C, can be thought of as the capacity of energy
storage in the specimen during failure. In order for failure to be stable, |U_E| < |U_C|. If
|U_E| > |U_C|, then the available energy is greater than the material's capacity to store the
energy and failure will be unstable.
Figure B.2: Strain energy regions for linear elastic material with linear softening
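Under this linearized picture, both regions are triangles and the criterion reduces to comparing the elastic modulus with the magnitude of the post-peak slope. A minimal sketch follows; the symbols track Figure B.2 while the numeric values are hypothetical:

```python
def failure_is_stable(sigma_p, E, m_pp):
    """Linearized energy criterion of Figure B.2 under rigid loading.
    U_E: elastic strain energy stored at peak (hatched area, per unit volume).
    U_C: capacity of energy storage during failure (area under the linear
    post-peak line of slope m_pp < 0). Failure is stable when
    |U_E| < |U_C|, which reduces to |m_pp| < E."""
    U_E = 0.5 * sigma_p ** 2 / E
    U_C = 0.5 * sigma_p ** 2 / abs(m_pp)
    return U_E < U_C

# Hypothetical specimen: 25 MPa strength, 4 GPa elastic modulus
shallow = failure_is_stable(sigma_p=25e6, E=4e9, m_pp=-2e9)  # |m_pp| < E
steep = failure_is_stable(sigma_p=25e6, E=4e9, m_pp=-8e9)    # |m_pp| > E
```

This reduction is consistent with the guidance that shallow post-peak softening relative to the elastic modulus favors stability under rigid loading.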
Figure B.3 shows two stress-strain curves, one for the DSM from Chapter 3 and one for
the recalibrated DSM, labeled DSMr. The DSMr curve is calibrated to have a similar elastic
modulus and strength, but with a steeper post-peak curve. The steeper post-peak curve
reflects a post-peak behavior in which less energy can be absorbed during failure than by the
DSM specimen. The curves in Figure B.3 are non-linear in both the elastic and post-peak
regions, so making an exact determination of available strain energy versus energy storage
capacity would require a determination of the elastic unloading behavior at any given point
on the post-peak curve. However, an estimation of the likelihood of failure stability can be
made based on the energy criteria above. DEM materials with shallow post-peak softening
compared to the elastic modulus will most likely be stable under rigid loading, materials
with steep post-peak softening will likely be unstable, and materials with post-peak softening
approximately equal in magnitude to the elastic modulus are questionable and should be
closely examined. By comparing the post-peak behavior with the pre-peak behavior
in Figure B.3, it is likely that the DSMr will be unstable under rigid loading conditions.
Figure B.3: DSM characteristic stress-strain curves
Figure B.4 shows two stress-strain curves, one for the DSM and one for the DSMr. The
black dots show the point at which loading is halted and the model is cycled to test stability.
Figure B.4 shows that the DSM has a partial instability and the DSMr material fails com-
pletely after loading is halted and the model is cycled. Total failure of the material indicates
that the DSMr is unstable under rigid loading. The instability of the DSMr specimen sup-
ports the claim that internal strain energy magnitude in reference to the post-peak softening
characteristic plays a significant role in modeling rock behavior with DEM.
    endif
    ; Total
    SE_pb = SE_pbn + SE_pbb + SE_pbs
  else
    SE_pb = 0.0
  endif
  ; Total Strain Energy at contact cp ----------------------------------------
  cp_se = SE_cp + SE_pb
end

def pfc_sof
  if c_model(cp) = 'udm_softening' then
    if c_prop(cp, 'sof_broken') = 0
      cp_sof = c_prop(cp, 'sof_softened')
      ; ratio of amt. yielded to yield limit, 0 = no yield
      sof_nbcnt = sof_nbcnt + 1
      dumA = gridcell_cont_nbcount(cell_indexY, cell_indexX)
      dumA = dumA + 1
      gridcell_cont_nbcount(cell_indexY, cell_indexX) = dumA
    endif
  else
    cp_sof = 0.0
  endif
  sof_tot = sof_tot + cp_sof
end

def pfc_wd
  bp_wd = 0.0   ; This is the amount of work done in one step on one ball,
                ; and it should be zeroed before pfc_wd calculates the work
                ; in order to get a cumulative value for each grid cell.
                ;
                ; Incremental values can be computed by zeroing the grid array
                ; after each step, but this is not wise because the histories
                ; for each grid must be recorded every step for the values to
                ; be useful quantitatively. Otherwise, they can be a
                ; qualitative indicator.
                ;
history 501 syyf1 1
history 502 syyf3 1
;
; history # syyfn 1   ; Lines deleted for brevity, fill in
;                     ; all history syyfn 1 commands
;
history 596 syyf191 1
history 597 syyf193 1
history 598 syyf194 1
history 599 syyf195 1
history 600 syyf196 1
history 601 syyf197 1
history 602 syyf198 1
history 603 syyf199 1
history 604 syyf200 1
history 605 syyf201 1
;
history 701 rdL 1
history 702 rdL 3
;
; history # rdL n ; Lines deleted for brevity , fill in
; ; all history rdL n commands
;
history 796 rdL 191
history 797 rdL 193
history 798 rdL 194
history 799 rdL 195
history 800 rdL 196
history 801 rdL 197
history 802 rdL 198
history 803 rdL 199
history 804 rdL 200
history 805 rdL 201
;
history 901 fdL 1
history 902 fdL 3
;
; history # fdL n ; Lines deleted for brevity , fill in
; ; all history fdL n commands
;
history 996 fdL 191
history 997 fdL 193
history 998 fdL 194
history 999 fdL 195
history 1000 fdL 196
history 1001 fdL 197
history 1002 fdL 198
ABSTRACT
Knowledge of geomechanical properties is beneficial, if not essential, for drilling and completion
operations in the oil and gas industry. The Unconfined Compressive Strength (UCS) is the
maximum compressive stress a cylindrical rock sample can withstand without breaking under
unconfined conditions. UCS is one of the key criteria to ensure safe, efficient, and successful
drilling operations, and estimation of UCS is vital to avoid wellbore stability problems that
are inversely correlated with the pace of drilling operations. Furthermore, UCS is an essential
input to ensure the success of completion operations such as acidizing and fracturing.
Different methods are available to estimate UCS. The common practice to estimate UCS is to
conduct experiments with a laboratory testing setup. These laboratory experiments are considered
the most accurate way to measure UCS, but they are destructive, time-consuming, and expensive.
Alternatively, empirical equations are derived to estimate UCS from well-logging tool readings.
These empirical equations are generally derived from physical properties such as interval transit
time, porosity, and Young’s modulus. However, most of these equations are not generic, and their
applicability for other formation types is limited.
The limitations of existing methods to estimate UCS promoted the development of data-driven
solutions to estimate UCS. The data-driven methods include, but are not limited to, basic
regression, machine learning, and deep learning algorithms. Data-driven methods that identify
patterns in the data to estimate geomechanical parameters are considered for implementation
in drilling operations.
This study proposes methods to assist safe and successful drilling operations while eliminating
the need for coring, saving a vast amount of time and money by estimating UCS from drilling
parameters instantaneously. The goal is to develop a machine-learning algorithm to analyze and
process high-frequency data to estimate UCS instantaneously while drilling, allowing safer and
more efficient drilling operations.
The drilling data used to train, validate, and test the machine learning model is re-purposed
from data collected during drilling in a previous study. The algorithm consists of a data processing
method called Principal Component Analysis (PCA) to indicate the importance of each parameter
by quantifying their variance contribution. The Random Forest machine learning algorithm is utilized
ACKNOWLEDGMENTS
I want to acknowledge the help of people and organizations that made this research possible by
contributing to my research, studies, and daily life.
First and foremost, I would like to thank my sponsor, MTA (General Directorate of Mineral
Research and Exploration of Turkey), for financially sponsoring me through my studies to achieve
excellence in the exploration of Geothermal resources in Turkey.
I want to honor the memory of Dr. Tutuncu and thank her for opening the doors of CSM
to me. She will always be remembered for her legacy here at CSM. I will remember her as a
hard-working, deeply knowledgeable, kind, and lovely advisor.
I want to thank Dr. Eustes for supporting me through all my studies. His trust in me motivated
me to complete my research. I want to thank him for being there to answer my questions and for
his wisdom.
I want to express my most profound appreciation to my committee members for their help and
for making this research possible.
I am honored to gain an opportunity to learn from professors and staff here at CSM. I would
like to thank Dr. Eustes, Dr. Ozkan, Dr. Prasad, Prof. Crompton, and Dr. Fleckenstein for their
classes.
I would like to thank all the Petroleum Engineering Department members, especially Denise
and Rachel, for their support throughout my studies.
Sincere thanks to my friends, Hazar, Deep, Santiago, Mansour, Mansour Ahmed, Val, Gizem,
and Roy, for supporting me through all hardships. Thanks to you guys, I look at life from a broader
perspective.
Special thanks to Nehra for her company, love, and support that motivated me to complete my
studies and made my life better in every way. Thank you for being there for me.
Finally, I would like to thank my parents, Alime and Abdullah, for being my teachers throughout
my life. Everything I achieved was thanks to you two.
CHAPTER 1
INTRODUCTION
Historical data has always been an essential part of the oil and gas industry. The industry
has become data-intensive with the recent advancements in data collection due to more durable
and reliable sensors. However, the amount of data utilized to improve the efficiency of future
operations is still a fraction of the data collected. The oil and gas industry is becoming more aware
of the potential uses of these data. The utilization of data is being recognized as the most efficient
method for reducing cost by increasing operational efficiency and creating safer, more sustainable
developments (Løken et al. 2020).
Specifically, the drilling industry has started investing more into the automation of drilling
operations due to efficiency, safety, and cost concerns. In the last decade, the increasing computa-
tional powers and the digitization of rig parts have allowed the industry to utilize machine learning
algorithms (ML) for most drilling operations. By implementing data-driven solutions through ML
algorithms, the industry is working on building automated drilling systems that can conduct drilling
operations without human input or recommend an efficient solution for the safety of operations.
One objective of the drilling industry is to increase the efficiency of drilling operations by reducing
capital and operational expenses with the implementation of data-driven solutions. Knowing sub-
surface conditions and geomechanical properties is essential to achieving this objective. Especially
by gaining more knowledge about geomechanical properties, wellbore stability can be improved
while drilling by avoiding hole collapse, stuck pipe, tight hole, kicks, and lost circulation.
Drilling parameters have been recognized and used as an indicator of formation parameters,
and estimating geomechanical properties from drilling parameters has been stated as an important
topic, with studies conducted since the late 1950s (Combs 1968; Cunningham and Eenink 1959).
Early studies completed by Bourgoyne and Young (1974) showed that pore pressure could be de-
termined from drilling parameters with a 1 lb/gal standard deviation on the Gulf Coast, and Majidi
et al. (2017) observed similar results in estimating formation pore pressure from MSE and drilling
parameters from a study with a similar intent. Some of these models for pore pressure estima-
tion by Jorden and Shirley (1966) as well as Rehm and McClendon (1971) are still being used in
the industry thanks to their practicality. These studies underline the importance of estimating
formation parameters for the safety and efficiency of the operations. Likewise, Unconfined
Compressive Strength (UCS) has been known as an essential parameter as it is a key input to
avoiding possible wellbore failures by implementing a robust mud weight window and selecting bit
aggressiveness (Nabaei et al. 2010).
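The MSE referenced above (Majidi et al. 2017) is commonly computed with Teale's classic specific-energy relation. A minimal sketch, assuming common field units (WOB in lbf, torque in ft-lbf, ROP in ft/hr, bit diameter in inches); the function name and input values are illustrative, not taken from the thesis data set:

```python
import math

def mse_psi(wob_lbf, rpm, torque_ftlb, rop_ft_per_hr, bit_diameter_in):
    """Teale-style Mechanical Specific Energy in psi (field units assumed)."""
    area_in2 = math.pi * (bit_diameter_in / 2.0) ** 2   # bit cross-sectional area, in^2
    thrust_term = wob_lbf / area_in2                    # energy input from axial load
    # 120*pi collects the unit conversions for rev/min, ft-lbf, and ft/hr
    rotary_term = 120.0 * math.pi * rpm * torque_ftlb / (area_in2 * rop_ft_per_hr)
    return thrust_term + rotary_term
```

In typical rotary drilling the rotary term dominates, which is why MSE is sensitive to changes in torque and ROP as the bit crosses formations of different strength.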
The Unconfined Compressive Strength is the maximum compressive strength that a cylindrical
rock sample can withstand under unconfined conditions. The UCS is also known as a uniaxial
compressive strength because the compressive stress is applied along only one axis while measuring
therockstrength. Theimpactofrockhardness,alsoknownasrockstrength,ondrillingperformance
has always been an important issue and has been investigated since the early 1960s (Spaar et al.
1995). In addition, unconfined compressive strength is one of the essential parameters when deciding
on bit aggressiveness (Spaar et al. 1995).
The early studies indicated a strong correlation between rock hardness and drilling performance,
and it was also observed that other drilling parameters, such as weight on bit, revolutions per minute,
and bit type, are required alongside rock hardness measurements to predict drilling efficiency
(Gstalder and Raynal 1966). The study indicates that estimation or measurement of UCS is
essential to avoid wellbore stability problems while drilling. In addition, a study conducted by
Spaar et al. (1995) shows a strong correlation between the formation drillability with UCS and
friction angle as these parameters are essential for bit selection and the selection of appropriate
aggressiveness for a bit can improve overall drilling performance substantially.
The empirical equations derived from well logging tool readings and rock strength tests run
in laboratory conditions are the most common methods to estimate UCS. However, data-driven
solutions to estimate these parameters are becoming more common as these methods are getting
more robust thanks to studies conducted to observe their veracity and versatility with available
geomechanical and drilling data. Also, an exponential increase in the number of drilling operations
conducted in unconventional reservoirs brought the need for a more sophisticated and cheaper
method to estimate geomechanical parameters as these reservoirs commonly have non-linear be-
havior and coring in a horizontal section of the wells drilled through the unconventional reservoirs
is harder to conduct. These reasons indicate a need for a faster and cheaper method to estimate
geomechanical parameters.
1.1 Motivation
This thesis proposes to build a data-driven solution to estimate UCS instantaneously from
drilling parameters by utilizing a Random Forest regression model. The reviewed studies, conducted
by numerous researchers, indicate that data-driven methods can improve the efficiency of
operations in various ways by introducing sophisticated solutions such as predicting ROP (Hegde
et al. 2015), estimating drilling optimization parameters (Nasir and Rickabaugh 2018), indicating
the development of dominant water channels (Chen et al. 2019), predicting casing failures (Song
and Zhou 2019), and predicting possible drilling incidents (AlSaihati et al. 2021). This study is
solely motivated to provide a key input parameter to avoid potential wellbore stability problems and
drilling accidents by indicating rock strength changes within the formation. Furthermore, the final
model from this study can be adjusted and developed to integrate into a fully automated drilling
system as an instantaneous UCS pattern indicator, which will be one of the essential steps for the next
level of drilling automation.
1.2 Objectives
The main objectives of this thesis are:
• Develop a machine-learning model that can be trained to estimate unconfined compressive
strength from drilling parameters instantaneously.
• Provide changes in Unconfined Compressive Strength based on drilling parameters instanta-
neously to avoid possible wellbore failures and drilling accidents.
• Utilize principal component analysis to analyze feature importance using available drilling
data.
• Study the implementation of the Random Forest regression algorithm to build a robust regression
model to estimate certain geomechanical parameters (i.e., UCS, and Mechanical Specific
Energy).
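The first and last objectives can be sketched as fitting a Random Forest regressor to drilling-parameter features. The snippet below is a minimal illustration, assuming scikit-learn is available; the synthetic features and target are placeholders, not the thesis data set or final model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder drilling features (e.g., WOB, RPM, torque, ROP) -- synthetic, for illustration
X = rng.normal(size=(500, 4))
# Synthetic "UCS" target with a known dependence on two features plus noise
y = 10.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)
train_r2 = model.score(X, y)   # coefficient of determination on the training data
```

In practice the model would be scored on held-out data rather than the training set, and the features would come from the rig's recorded drilling parameters.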
1.3 Thesis Organization
This thesis consists of six chapters. The summary of each chapter is presented as follows:
CHAPTER 2
OVERVIEW
2.1 The Unconfined Compressive Strength
The unconfined compressive strength can be defined as the maximum compressive stress a cylindrical
rock sample can withstand under unconfined conditions. The most common methods to estimate
UCS are laboratory experiments and empirical equations derived from well logging tool readings.
The laboratory experiments are conducted by using a testing setup that measures the maximum
stress a sample can withstand; this method is referred to as a direct method to measure UCS,
and the empirical equations derived from well-logging tool readings are referred to as an indirect
method to estimate UCS (Ceryan et al. 2013). The American Society for Testing and Materials
(ASTM) and the International Society for Rock Mechanics (ISRM) standardized the laboratory
testing procedures that should be followed to estimate UCS. The laboratory testing procedures
can vary with respect to stress distribution created around the sample. These laboratory testing
methods use compressive, tensile, and triaxial stress distributions for different scenarios. The most
common test method to measure the rock strength is called the uniaxial compressive test method,
which is determined by applying compressive stress to the sample vertically until the sample fails
(Brook 1993). The stress distributions for different test methods are shown in Figure 2.1.
The most common limitation while conducting these laboratory experiments is the quality of the
core sample, as a broken or chipped sample will result in a change in stress distribution within the
sample. The standardization of core sample preparation and test procedures by ASTM includes
a detailed description to ensure accurate measurements. For example, according to American
Society for Testing and Materials (2014), the minimum diameter of the test sample should be
approximately 47 mm, and the length-to-diameter ratio of the test sample should be between 2.0:1
and 2.5:1 to satisfy the criterion in the majority of cases.
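That geometry criterion can be expressed as a small check. A sketch using only the values quoted above (the ~47 mm minimum diameter and the 2.0–2.5 L/D window); the function name is illustrative:

```python
def sample_geometry_ok(diameter_mm, length_mm):
    """Return True when a core plug meets the quoted ASTM-style geometry criterion."""
    if diameter_mm < 47.0:              # minimum diameter of approximately 47 mm
        return False
    ratio = length_mm / diameter_mm     # length-to-diameter ratio
    return 2.0 <= ratio <= 2.5          # acceptable L/D window of 2.0:1 to 2.5:1
```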
As mentioned before, in addition to laboratory experiments, several empirical equations were
derived to estimate the rock strength, as coring operation required to obtain rock samples for these
experiments is time-consuming and expensive. The empirical equations can vary to the parameter
Most laboratory testing methods used to estimate the UCS are accurate if the test sample fits
within the predetermined standards, but these methods are destructive and time-consuming. On
the other hand, the applicability of empirical equations to estimate UCS is limited. The empirical
equations could be a better option if the time is limited. However, the accuracy of empirical equa-
tions is as good as the accuracy of the well-logging tool readings, which can introduce inaccuracy
to the estimations. The limitations of laboratory testing methods and empirical correlations moti-
vated the industry to develop an additional method to estimate UCS. With this motivation, the oil
and gas industry has started to conduct studies to implement data-driven solutions as a suitable
option, among other methods, to estimate parameters that require significant time and budget to
measure (i.e., UCS). In particular, gaining even a limited amount of knowledge about UCS became
necessary, as UCS changes could help indicate potential wellbore stability problems in advance. The
wellbore stability and drilling problems cost the industry a vast amount of money every year since
wellbore stability problems can lead to stuck pipes, stuck tools due to differential sticking, and
excessive mud losses due to tensile fractures. Some of these wellbore stability problems, such as
tensile fractures and breakouts, can occur due to an improper mud weight window (i.e., excessive
or low mud weight). Excessive mud weight can lead to tensile fractures, which cause lost
circulation and increase the chances of differential sticking, whereas insufficient mud weight leads to
breakout when the stress around the wellbore exceeds the compressive strength of the rock (Al-Wardy
and Urdaneta 2010).
The study conducted by Al-Wardy and Urdaneta (2010) indicated that the time required to
deliver a well located in North Oman could be reduced from 36.8 days in 2009 to an average
of 30.1 days in 2010 by understanding the geomechanics of the field. A better understanding of
geomechanics is achieved by building a geomechanical model of the particular area and complet-
ing wellbore stability analysis by using vertical stress values from density logs, elastic properties
and rock strength through DSI logs, minimum and maximum horizontal stress values from Min-
frac/XLOT data, and stress orientation from BHI image logs. In addition to fracture tests and
data collected from logs, the available formation and drilling parameters such as pore pressure, mud
weights, drilling reports, and wellbore trajectory are used to complete the geomechanical model
and wellbore stability analysis. This study showed that understanding geomechanics of even a par-
ticular area of a field can help reduce wellbore stability problems considerably and improve overall
drilling performance. Also, the study states that wellbore stability problems correlate strongly
with UCS and the minimum horizontal stress.
A similar study was conducted by Klimentos (2005) to optimize drilling performance by provid-
ing optimum drilling parameters and to estimate pore pressure values and wellbore stability plan
through a geomechanical model. The study focuses on optimizing drilling performance in deep-
water wells, especially while drilling shaly formations. To achieve the objective of providing optimum
drilling parameters, the initial Mechanical Earth Model (MEM) is developed by using well logging
tool readings, mud-logs, and drilling information. Then, a proper match between logs is completed
to indicate the lithology and porosity of sections. Later, the overburden stress is estimated by
integrating density log readings with the MEM. The exponential extrapolation model is used to
estimate the missing values for the sections where density log readings were missing to study the
overall rock strength through in-situ stresses from overburden and pore pressure. After combining
previous efforts with estimated pore pressure of shaly formation through compaction theory and
pressure/sonic log readings, the final MEM is completed. It is indicated that the determination of
MEM and the use of optimum mud weight windows minimized washouts and lost circulation, and it
allowed them to better understand the necessity of casing strings. The results indicated that
drilling performance could be improved by understanding the geomechanics of formation as the
number of days to drill and construct a well was reduced by 15 days, and $4+ million was saved
on total drilling cost.
Studies on the importance of gaining knowledge about UCS in the design phases of drilling and
completion operations were conducted by Brehm et al. (2006) and Al-Awad (2012). Brehm et al.
(2006) completed a case study on Shenzi Field regarding the anisotropic behavior of the formations
and the impacts on wellbore stability. Then, Al-Awad (2012) conducted a study that focused on
the simple correlation between UCS and apparent cohesion, and throughout the study, impacts of
wellbore stability issues as a result of lack of accuracy on rock strength estimation are included.
Furthermore, comparison studies were conducted to question the accuracy of empirical equations and
to prove the versatility and veracity of data-driven solutions. The research conducted by Meulenkamp and
Alvarez (1999) compared the performance of empirical equations and used Machine Learning (ML)
algorithms to estimate UCS of different rock samples to indicate whether ML can be a valuable tool for the
industry. Then, Chang et al. (2006) conducted a study on the accuracy of empirical equations using
a vast range of data and studied the applicability of 31 different empirical equations. Yurdakul
et al. (2011) also conducted a similar study comparing the accuracy of the simple regression model
and Artificial Neural Network (ANN) model in estimating UCS of sedimentary rock samples from
17 different regions of Turkey.
Furthermore, Barzegar et al. (2016) expanded the coverage of the study and compared the per-
formance of different Machine Learning (ML) and Deep Learning (DL) models. Later, Negara et al.
(2017) introduced elemental spectroscopy to the prediction model and searched for the potential
impact of grain size on UCS by using the ML and DL model. The reviewed studies showed that
data-driven solutions are becoming more robust.
Brehm et al. (2006) completed a case study on the wellbore located at the Shenzi Field in Green
Canyon blocks 653 and 654 regarding the wellbore stability issues. The study focused on building
a complex geomechanical model for wellbores where the main problems are anisotropic failure
and lost circulation. The study indicates the importance of using complex and comprehensive
geomechanical models while drilling wellbores through weak rocks and overpressured zones. This
combination could limit the available mud weight window, and cause wellbore stability problems if
the geomechanical model does not explain these phenomena in detail. The study also states that
basic geomechanical modeling, built using the earth’s mechanical properties and in-situ earth’s
effective stresses of the region, brings an overgeneralized approach to wellbore reaction while drilling
a formation where anisotropic failure occurs as a consequence of weakly bedded rocks. It is also
stated that these basic geomechanical models can be improved to better understand wellbore failures
if they are applied correctly by building the model with accurate data. The complexity of these models
is directly correlated with the accuracy of the quantified mechanical properties (i.e., pore pressure,
in-situ stress magnitudes, stress orientation, rock strength). Further discussions showed that these
wellbore stability problems could be avoided if the mud weight used while drilling at Shenzi was
updated based on the anisotropic behavior of shale reservoirs. The model built includes changes in
in-situ stress magnitudes, stress orientation, and UCS estimations. The results indicated that the
significant wellbore stability and lost circulation problems in previous exploration operations are
substantially reduced and turned into manageable drilling problems.
Al-Awad (2012) conducted research focusing on the correlation between UCS and the apparent
cohesion of rocks. The study also points out how important it is to know rock strength, aka UCS,
before designing drilling and completion operations to avoid possible wellbore stability problems
such as sloughing shale, stuck drill pipe, tensile fractures, and breakouts. Also, in the study, it is
mentioned that possible wellbore stability problems in producing wells such as sand production,
perforation instability, subsidence, mechanical damage and how these problems can be foreseen in
the design stage if rock strength is known. The correlation model between apparent cohesion and
UCS is developed using available data from 300 different rock samples. The results showed that a
simple correlation between rock apparent cohesion and UCS can be developed, and the correlation
can estimate the rock mechanical properties with a 10% average error.
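A correlation of this kind is consistent with the standard Mohr-Coulomb relation between cohesion c and internal friction angle φ, UCS = 2c·cos φ / (1 − sin φ). The sketch below illustrates that textbook relation, not Al-Awad's fitted model:

```python
import math

def ucs_from_cohesion(cohesion_mpa, friction_angle_deg):
    """Mohr-Coulomb UCS (MPa) from cohesion and internal friction angle."""
    phi = math.radians(friction_angle_deg)
    return 2.0 * cohesion_mpa * math.cos(phi) / (1.0 - math.sin(phi))
```

For a frictionless material (φ = 0) the relation collapses to UCS = 2c, which is a quick sanity check on any fitted cohesion-strength correlation.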
The research study conducted by Meulenkamp and Alvarez (1999) compared the accuracy of
estimation of UCS values by utilizing regression techniques (i.e., empirical correlations) and a Neural
Network (NN). In the study, the data set contains records of 194 rock samples ranging from weak
sandstones to very strong granodiorites, and the Equotip hardness tester is used as an index test
for rock strength properties. The comparison was completed between three different methods: curve
fitting, multivariate regression, and NN algorithm. The coefficient of determination (R2) of NN
algorithm trained by this data set is determined as 0.967, while R2 measured 0.957 for Multivariate
Regression and 0.910 for curve fitting. In other words, the NN fit explains 96.7% of the variance in
the actual UCS values. Even though the R2 values for NN and Multivariate
Regression relation are similar, the results would possibly change if a more extensive data set is
used. Also, in the study, it is observed that the statistical relations underestimated high UCS
values and overestimated low UCS values as the statistical relations were based on the mean of all
predictions. Even though ML has limitations, the high accuracy of predictions indicates that it is
possible to develop algorithms and implement them for field applications. Furthermore, the study
indicated that ML algorithms reduce the cost and time to derive empirical correlations or conduct
destructive experiments.
Chang et al. (2006) reviewed and summarized the empirical equations that are derived to
estimate the unconfined compressive strength and internal friction angle of sedimentary rocks (shale,
limestone, dolomite, and sandstone) by using physical properties (such as porosity, velocity, and
modulus). The author describes the importance of deriving efficient empirical correlations by
pointing out the difficulty of retrieving core samples of overburden formations, where wellbore
instability problems generally occur. In the study, overall, 31 empirical equations are reviewed,
and it is observed that most of the equations are unique for certain data gathered from a specific
location, while some of the equations perform well. The empirical equations summarized for the
prediction of UCS in sandstone vary by input values such as interval transit time, aka P-wave
velocity (Fjær et al. 1992; McNally 1987; Moos et al. 1999), porosity (Vernik et al. 1993), and
modulus (Bradford et al. 1998). Also, the empirical equations to predict UCS values in shales from
porosity (Horsrud 2001; Lashkaripour and Dusseault 1993), velocity (Horsrud 2001; Lal 1999),
and modulus (Horsrud 2001) are reviewed and listed in the study. An example of well-performing
equations is a relation between strength and porosity for sandstone and shale. The study also
emphasized that the velocity readings were from dry rock samples, which causes a lower estimated
value of UCS because of the inaccuracy introduced by the difference between dynamic and static
moduli. However, it is also noted that the empirical equations derived by using the laboratory
results are sufficient to estimate the lower boundary for UCS.
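For instance, one of the velocity-based shale relations reviewed by Chang et al. (2006), Horsrud's (2001) UCS = 0.77·Vp^2.93 (Vp in km/s, UCS in MPa), can be applied directly. A sketch, with the sample velocity chosen purely for illustration:

```python
def ucs_horsrud_mpa(vp_km_s):
    """Horsrud (2001) shale correlation: UCS in MPa from P-wave velocity in km/s."""
    return 0.77 * vp_km_s ** 2.93

# A representative shale P-wave velocity, for illustration only
ucs = ucs_horsrud_mpa(3.0)
```

As noted above, such log-derived estimates inherit the uncertainty of the velocity reading, so they are often treated as a lower-bound screen rather than a substitute for core testing.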
The study conducted by Yurdakul et al. (2011) compared the predictive models for UCS of
carbonate rocks from Schmidt hardness. The Schmidt hammer, initially designed to
measure the strength of concrete, can also be used to predict rock strength. The test considers the
distance traveled by the energy transferred by the spring, and it measures the Schmidt hardness
value based on the percentage of initial extension of the spring. The study compares the prediction
results from the first-degree polynomial simple regression model and the artificial neural network
(ANN) based model. The data set for this study was collected from 37 different natural stones
from 19 different natural stone processing plants in different cities in Turkey. The comparison
between models was made considering Variance account for (VAF), coefficient of determination
(R2), and root mean square error (RMSE). A lower value of RMSE indicates a more accurate
prediction, while a lower value of VAF indicates a less accurate prediction in the model. Also, the R2
value approaches 1 as the model fit to the available data improves. The obtained VAF, RMSE, and
R2 indicators for the simple regression model are 12.45, 46.51, and 0.39, while 95.84, 7.92, and 0.96
for the ANN-based model. The results showed that the ANN-based model performs significantly
better than the simple regression model, and an updated model can be developed to predict UCS
values in sedimentary rocks.
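The three indicators can be computed directly from predictions and measurements. A minimal NumPy sketch, using the definitions as commonly stated (VAF expressed as a percentage):

```python
import numpy as np

def vaf(y_true, y_pred):
    """Variance Accounted For, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float((1.0 - np.var(y_true - y_pred) / np.var(y_true)) * 100.0)

def rmse(y_true, y_pred):
    """Root mean square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

A perfect prediction gives VAF = 100, RMSE = 0, and R2 = 1, which matches how the ANN model's near-ideal scores above should be read.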
The performance of various ML methods that can predict UCS is compared in the research com-
pleted by Barzegar et al. (2016). The study's stated focus is to evaluate the performance
of Adaptive neuro-fuzzy inference system (ANFIS), Mamdani fuzzy logic (MFL), Multi-layer per-
ceptron (MLP), Sugeno fuzzy logic (SFL), and support vector machine (SVM) for the prediction of
UCS of rocks in the Azarshahr area in northwest Iran. The fuzzy logic is described as an approach
to computational methods that consider the degree of truth rather than absolute truth, and this
allows the fuzzy logic to provide arrays of possible true values. The multi-layer perceptron is a com-
mon ANN approach for prediction models that includes layers to process data and learn from
it. The adaptive neuro-fuzzy inference system is summarized as a feed-forward neural network
function to check for the best fuzzy decision rule. The support vector machine is a soft computing
learning algorithm mainly used for classification, pattern recognition, regression analysis, and pre-
diction. The data set for the study includes P-wave velocity, porosity, Schmidt rebound hardness,
and UCS measured in the laboratory from 85 core samples. For the models, the data set is divided
into two subsets: training (80% of data) and testing (20% of data). The performance of the models
was assessed based on root mean square error (RMSE), mean absolute error (MAE), and coefficient
of determination (R2). The results indicated that the SVM model outperformed the other models with
the lowest RMSE (2.14 MPa), MAE (1.351 MPa), and the highest R2 (0.9516).
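The 80/20 split used above can be sketched with a seeded random permutation. This is a minimal stand-in for a library splitter, not the authors' exact procedure:

```python
import numpy as np

def split_80_20(X, y, seed=0):
    """Shuffle indices once and return train/test subsets (80% / 20%)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))        # one random ordering of all samples
    n_test = len(X) // 5                 # 20% held out for testing
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]
```

Seeding the shuffle keeps the split reproducible, which matters when comparing several models on the same data, as Barzegar et al. do.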
Negara et al. (2017) introduced elemental spectroscopy to consider grain size effects on UCS.
The support-vector regression (SVR) is utilized to predict UCS. In this study, laboratory testing
is the primary method to collect UCS data. The X-ray fluorescence (XRF) analysis is used for
elemental spectroscopy. For the models, the data set was collected from the measurements of 35
core samples. The data gathered from seven of these core samples were counted as outliers, and only
28 of them were used for the data set. SVR is a supervised learning method that utilizes the
necessary algorithms to analyze and recognize patterns. The quantitative measures to evaluate the
performance of the model were the coefficient of determination and the mean absolute percentage
error. The results indicated that the model built with SVR could predict UCS with a small error
even though a small number of samples were used to train the model. Also, an influence of elemental
spectroscopy on UCS prediction is observed. This influence is described as the effect of grain density
on rock strength.
2.2 Principal Component Analysis
Principal Component Analysis (PCA) is a multivariate statistical method that can effectively
reduce data dimensionality while preserving the variation within the data set. By preserving the
variation, PCA allows Machine Learning (ML) algorithms to be trained with the same or similar
patterns in the data set, which is essential for building a robust ML model. PCA is defined by
Gupta et al. (2016) as a dimensionality reduction technique that uses an orthogonal transformation
to convert a set of observations of possibly correlated or dependent variables into a set of linearly
uncorrelated variables, which are called Principal Components.
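The orthogonal transformation described above can be sketched with NumPy via an eigendecomposition of the covariance matrix. A minimal illustration, not the thesis implementation:

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its leading principal components.

    Returns (scores, explained_variance_ratio)."""
    Xc = X - X.mean(axis=0)                      # center each variable
    cov = np.cov(Xc, rowvar=False)               # covariance matrix of the variables
    eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]            # sort by decreasing variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    scores = Xc @ eigvecs[:, :n_components]      # linearly uncorrelated components
    return scores, eigvals / eigvals.sum()
```

The explained-variance ratio returned here is what makes PCA a dimensionality-reduction tool: components with negligible ratios can be dropped while the dominant patterns in the data are preserved.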
Kong et al. (2017) reported that PCA was first published by Pearson (1901) and developed by
Hotelling (1933), and the modern applications and concepts are formed by Jolliffe (2002). PCA
was used to conduct studies related to history matching, seismic interpretation, pattern recognition,
reservoir uncertainty evaluation, data compression, image processing, and high-resolution spectral
analysis (Iferobia et al. 2020).
Kong et al. (2017) explained feature extraction as a process of extracting measurements that
are invariant and insensitive to the variations within each subspace of the data batch. Feature
extraction is an essential step in pattern recognition and data compression because
both tasks require the smallest possible distortion of the data while reducing the number of components.
Also, feature extraction is a data processing technique that maps a high-dimensional space to a
low-dimensional space with minimal information loss. Principal component analysis (PCA) is one
of the widely known feature extraction methods, while independent component analysis (ICA) and
minor component analysis (MCA) are variants of the PCA. ICA is usually applied for blind signal
separation, and MCA is commonly used for total least square problems (Kong et al. 2017). The
scope of this study will be limited to the PCA, and the analysis will be conducted by using Python
(Van Rossum and Drake 2009). However, comprehensive information regarding PCA is provided
in Chapter 3 to clarify the concepts.
PCA can seem a complicated and time-consuming method when described in mathematical terms,
but with the increase in computational power in the last decade, it is now possible to apply PCA to
a million data points in less than a minute. With that, applications of PCA have become
increasingly common in the industry over the last decade. The recent applications of PCA include
the spectral decomposition of seismic data, noise reduction of gamma-ray spectrometry maps,
predicting possible casing failures, identifying the possible correlations between elemental data,
estimation of dominant water channel development in oil wells, and estimation of geomechanical
properties in unconventional reservoirs.
Guo et al. (2009) utilized PCA alongside spectral decomposition of seismic data, a technique
recently introduced as an interpretation tool that can help identify hydrocarbons. The
merit of this interpretation technique is to develop an adequate form of data representation and
reduction because, typically, the interpreter might generate 80 or more spectral amplitude and
phase components. In the study conducted by Guo et al. (2009), 86 spectral components ranging
from 5 Hz through 90 Hz were generated using an interpreter, and PCA was utilized to reduce the
number of spectral components. It is observed that only three principal components in the total of
86 components were able to capture 83% of the variation in the seismic data. The results indicated
that flow channel delineation could be mapped using RGB (Red, Green, and Blue) colors stack for
the three largest principal components.
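Choosing how many components to keep (three of 86 in Guo et al.'s case) typically follows the cumulative explained-variance ratio. A small sketch with made-up ratios, purely to show the selection rule:

```python
import numpy as np

def components_needed(explained_variance_ratio, target=0.83):
    """Smallest number of leading components whose cumulative ratio reaches target."""
    cumulative = np.cumsum(explained_variance_ratio)
    # index of the first cumulative value >= target, converted to a component count
    return int(np.searchsorted(cumulative, target) + 1)
```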
de Lima and Marfurt (2018) conducted a study with a similar motivation as Guo et al. (2009).
In this study, PCA was used to reduce the noise of the gamma-ray spectrometry maps and reduce
the number of components in the data set from four to three. Initially, the gamma-ray spectrometer
data consists of TC, K, eTH, and eU. The map displayed after implementing PCA and K-means
clustering on PC1 and PC2 indicated a better correlation with the traditional geological map
compared to the map created by only clustering of TC, K, eTH, and eU without PCA.
Song and Zhou (2019) conducted a study to predict possible casing failures using PCA and
gradient boosting decision tree algorithms (GBDT). The gradient boosting decision tree algorithm is
a machine learning algorithm that combines the outputs of many weak decision-tree learners
(i.e., boosting) to create a robust learning algorithm. This study applies the proposed
method to the data set obtained from an oil field in mid-east China. The data set was created
based on the parameters affecting the casing failure. Some of these parameters were the outside
diameter of the casing, the thickness of perforation, and casing wall thickness. The PCA is used to
reduce data set dimensionality, while GBDT is utilized to develop the machine learning classification
model. The results indicated that using PCA with GBDT increased prediction accuracy on casing
failure compared to classic methods (decision tree, Naïve Bayes, Logistic Regression, multilayer
perceptron classifier). Also, it is stated that the algorithm created by using PCA and GBDT can
successfully predict a timeline for preventive maintenance on offset wells (Song and Zhou 2019).
Even though PCA is commonly used to reduce dimensionality (i.e., the number of variables in
the data set), PCA has other practical applications. PCA can be utilized to identify correlations
between the components within the data. The study conducted by Elghonimy and Sonnenberg
(2021) focuses on observing a correlation between major and trace elements within the elemental
data obtained from the Niobrara Formation in the Denver Basin. In the study, elemental data of
samples is measured using a handheld XRF analyzer on full core from the Niobrara Formation.
The variability of elemental concentrations is analyzed using PCA, and it is compared with the
core facies to display the history of deposition and the conditions through the deposition process.
The results showed that the application of PCA on the data set created by the integration of
XRF measurements and core facies indications made a clear display of these elements in five major
categories. Also,itisstatedthattheseidentifiedmajorcategoriescanactasanintermediaryforthe
different deposited elements to indicate the history of deposition within the Niobrara Formation.
Chen et al. (2019) studied a possible application of PCA as a recognition method for the
dominant water channel development in oil-producing wells. The study was conducted using SZ
Oilfield’s data from Liaodong Bay of the Bohai Sea. An evaluation index system was created to build
a comprehensive evaluation method to consider every parameter. The parameters were grouped into two
main categories: dynamic response parameters and the parameters causing the channel to advance.
The parameters considered for the evaluation index system are as follows:
• Dynamic response parameters.
– Dimensionless pressure index,
– Pressure index,
– Average water cut,
– Water absorption profile coefficient,
– Apparent water injectivity index increase,
– Water injection intensity increase.
• Parameters causing the channel to advance.
– Total water injection volume/unit thickness,
– Apparent water injection intensity,
– Viscosity of crude oil,
– Effective thickness of reservoir,
– Permeability contrast,
– Average permeability.
In this study, the focus of utilizing PCA is to test the objectivity of the method since the
increasing number of parameters induces subjectivity to the recognition algorithm. An evaluation
index system is created to analyze the causes of the dominant channel, and based on this system,
an artificial learning method that recognizes the dominant channel is developed using PCA. The
decision system was created based on the comprehensive evaluation index of the well group. If the
calculated comprehensive evaluation index of the well group was higher than the average value,
the well group was assumed to be developing a dominant channel. The results showed that the
application of PCA to compute comprehensive evaluation index values reduced the subjectivity
introduced by a large number of parameters. Also, it is stated that the method can provide
technical support for further enhancing oil recovery by recognizing a pattern of dominant channel
development in producing wells.
Furthermore, a study conducted by Iferobia et al. (2020) shows that the significant number of
drilling operations conducted in unconventional reservoirs has made the prediction of UCS in shale
reservoirs essential due to their complex and non-linear behavior. However, models with a single
log input parameter (sonic) were insufficient for these formations (Iferobia et al. 2020). In the
study, 21,708 data points of acoustic parameters are used to create a model with principal
component multivariate regression, and the results indicated that the model could predict UCS
values with 99% accuracy.
Previous studies conducted on possible wellbore stability problems induced by a lack of knowl-
edge of geomechanical properties indicated that the geomechanical modeling for drilling and com-
pletion operations is open for improvement. Furthermore, the previous works conducted on the
correlation between drillability and UCS show that the estimation of UCS is essential to increase
the overall drilling performance. ML algorithms are powerful tools for predicting UCS in formations
with complex and non-linear behavior. Recent studies show that data-driven solutions are reliable
resources to support the decision-making process for drilling and completion operations. The
literature review shows that implementing these methods can vastly increase overall drilling
performance and the efficiency of completion operations.
2.3 Tree-Based Algorithms
Tree-based algorithms are supervised learning methods that are among the most widely used and
efficient machine learning techniques. They can be used for both regression and classification
problems and are well suited to non-linear relationships. Tree-based algorithms are simple yet
powerful learning methods; the most popular are decision trees, gradient boosting, and random
forest. Tree-based methods partition the feature space into a set of subspaces and fit a simple
model (or constant) in each subspace. Using these feature spaces, a decision for each entry is made
based on conditions set at each node of every tree.
The decision tree is a supervised learning algorithm that requires a defined target. Decision trees
are commonly used for classification problems. There are two types of decision trees: categorical
variable decision trees and continuous variable decision trees. Categorical decision trees are built
to solve classification problems, while continuous variable decision trees are commonly used to
solve regression problems. In both cases, the algorithm splits the data set into two or more
homogenous subsets based on the most significant splitter or differentiator in the input values.
Decision tree algorithms are popular because they are easy to understand and useful for data
exploration, but their tendency to overfit the data set is their most common difficulty.
Regression and classification trees are simply decision trees with more nodes (or leaves), splitting
the data set into smaller subsets. As mentioned before, the main differences between them are the
type of input values and the objective set while training the algorithm. In both cases the splitting
process creates a fully grown tree, and this can cause overfitting as the information given to each
tree will be similar. The model parameters can be adjusted to
avoid overfitting, and validation techniques like pruning can be applied. The constraints on tree
size are simply model parameters such as the minimum samples for each node, the minimum samples
for each terminal node, the maximum features to consider for a split, and the maximum depth of the
tree. Pruning essentially prevents the model from being greedy in the decision process: the tree is
grown to a large depth, and nodes (or leaves) whose splits give negative returns are removed.
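The size constraints and pruning described above map directly onto tree hyperparameters. Below is a minimal sketch with scikit-learn on synthetic one-feature data; `ccp_alpha` (cost-complexity pruning) stands in for the post-pruning step, which is an assumption about implementation, not a claim about any specific study.

```python
# Compare a fully grown regression tree with constrained and pruned variants.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)   # noisy target

full = DecisionTreeRegressor(random_state=0).fit(X, y)  # fully grown: fits the noise
constrained = DecisionTreeRegressor(
    max_depth=4,          # maximum depth of the tree
    min_samples_leaf=10,  # minimum samples for each terminal node
    random_state=0,
).fit(X, y)
pruned = DecisionTreeRegressor(ccp_alpha=0.01, random_state=0).fit(X, y)  # cost-complexity pruning

leaf_counts = (full.get_n_leaves(), constrained.get_n_leaves(), pruned.get_n_leaves())
```

The fully grown tree ends up with roughly one leaf per training point, while both controls cut the leaf count by an order of magnitude, which is exactly the overfitting guard the text describes.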
Another advantage of tree-based algorithms is that they are suitable for ensemble learning
methods. Ensemble learning develops a group of predictive models to improve model stability and
accuracy, and these methods are a natural boost for decision tree models. A well-developed model
should maintain the balance between bias and variance error, known as the bias-variance trade-off.
This trade-off can be optimized in decision tree models by applying ensemble learning methods. The
most common ensemble learning methods are bagging, random forest, and boosting. Bagging and
random forest are the most common methods applied for classification problems, where a group
(committee) of trees casts a vote for the prediction. Boosting also involves a committee of trees
that cast a vote for the prediction, but unlike random forest, a group of weak learners evolves, and
committee members cast a weighted vote. Bagging utilizes multiple classifiers and combines them to
develop a classifier that reduces the variance of predictions. These ensemble methods develop a
group of base learners and combine them to establish a strong composite predictor. Even though
there are many ensemble learning methods, Random Forest (RF) was selected as the most suitable
algorithm for this study, and it will be used to build a regression model to estimate UCS from
drilling parameters.
2.3.1 Random Forest
Random forest is an ensemble learning method that builds multiple classifiers and uses a
combination of them. Random forest is technically a modification of the bagging method that
utilizes many decorrelated trees and retains the average of their predictions as the output. Random
Forest (RF) is similar to boosting in terms of performance, but the ease of tuning its
hyperparameters to avoid overfitting makes RF the best option for this study.
The algorithm of RF for regression or classification is as follows:
• For z = 1 to Z:
  – Draw a bootstrap sample A* of size N from the training data.
  – Grow an RF tree $T_z$ on the bootstrapped data by repeating the following steps for each
    terminal node of the tree until the predetermined minimum node size is reached:
    – Select n variables at random from the p variables.
    – Pick the best variable/split point among the n.
    – Split the node into two daughter nodes.
• The output is the ensemble of trees $\{T_z\}_1^Z$.

The prediction at a new point x is:

Regression: $\hat{f}_{rf}^{Z}(x) = \frac{1}{Z}\sum_{z=1}^{Z} T_z(x)$ (2.4)

Classification: let $\hat{C}_z(x)$ be the class prediction of the z-th random forest tree; (2.5)

then $\hat{C}_{rf}^{Z}(x) = \text{majority vote}\ \{\hat{C}_z(x)\}_1^{Z}$ (2.6)
The process of bagging produces identically distributed trees, which means that the bias of a
single (bootstrapped) tree is the same as the bias of the bagged ensemble. Therefore, the only hope
of improvement lies in reducing the variance. Boosting, by contrast, grows its trees adaptively to
remove bias, so its trees are not identically distributed.

In terms of the RF algorithm described above, the average of Z independent, identically distributed
random variables, each with variance $\sigma^2$, has variance $\frac{1}{Z}\sigma^2$. If the
variables are only identically distributed, not necessarily independent, with positive pairwise
correlation $\rho$, the variance of the average is

$\rho\sigma^2 + \dfrac{1-\rho}{Z}\sigma^2$ (2.7)
The second term of the average variance disappears with the increasing number of random
variables (Z). Hence the averaging greatly helps to reduce the variance. The focus of RF is to
reduce the variance of bagged trees by reducing the correlation between them without increasing
variance by a high margin.
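Equation (2.7) can be checked numerically. The helper below is a plain transcription of the formula, not library code; it shows the variance floor $\rho\sigma^2$ that remains no matter how many trees are averaged.

```python
# Variance of the average of Z equicorrelated variables (Eq. 2.7).
def avg_variance(sigma2: float, rho: float, Z: int) -> float:
    """rho*sigma2 + (1 - rho)/Z * sigma2 for pairwise correlation rho."""
    return rho * sigma2 + (1.0 - rho) / Z * sigma2

sigma2, rho = 1.0, 0.3
values = [avg_variance(sigma2, rho, Z) for Z in (1, 10, 100, 10_000)]
# Z = 1 gives sigma2 itself; large Z approaches the floor rho*sigma2 = 0.3.
```

The sequence decreases monotonically toward 0.3, which is why lowering the inter-tree correlation, as RF does through random feature selection, matters more than simply adding trees.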
RF is one of the most common ML methods for solving both regression and classification problems.
The studies reviewed also indicated that common problems of the oil and gas industry, specifically
the drilling industry, can be addressed with RF models that predict essential parameters or flag
impending drilling incidents.
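As a hedged illustration of the role RF plays in this study, the sketch below fits a random forest regressor to synthetic drilling-style features and a synthetic continuous target standing in for UCS. The feature names, value ranges, and target relationship are invented placeholders, not the thesis dataset.

```python
# Random forest regression on synthetic drilling-style features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
wob = rng.uniform(50.0, 500.0, n)      # weight on bit (hypothetical range, N)
rpm = rng.uniform(60.0, 600.0, n)      # rotary speed
rop = rng.uniform(5.0, 100.0, n)       # rate of penetration, mm/min
X = np.column_stack([wob, rpm, rop])
ucs = 0.2 * wob + 0.05 * rpm - 0.3 * rop + rng.normal(scale=5.0, size=n)  # fake target

X_tr, X_te, y_tr, y_te = train_test_split(X, ucs, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, max_features="sqrt", random_state=0)
rf.fit(X_tr, y_tr)
r2 = rf.score(X_te, y_te)              # coefficient of determination on held-out data
```

`max_features="sqrt"` is the random feature subsetting that distinguishes RF from plain bagging, and the held-out split mirrors the 80/20 train/validation practice described in the reviewed studies.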
Hegde et al. (2015) conducted a study to explore possible applications of statistical learning
techniques to predict Rate of Penetration (ROP) values. The statistical learning methods were
trees, bagged trees, and Random Forest (RF), and the models were evaluated by comparing their
Root Mean Square Error (RMSE) values. The study also introduces a predictive model called the
Wider Windows Statistical Learning Model (WWSLM), which considers many input parameters
such as WOB, RPM, and depth to compensate for the effect of high lithology variation on drilling
parameters by utilizing trees and random forest. Another advantage of the
model built by Hegde et al. (2015) is that the blind test subset is split from the complete data
set and ensured that the model never learns from it, which reduces the probability of overfitting.
Only surface drilling parameters are included in the training, validation, and test sets as input
data. The tree method is described as a technique used for classification and regression purposes.
Bagging is summarized as jointly using bootstrapping and decision trees to compensate for the high
lithology variation. Bootstrapping is a method to reduce variation by repeatedly sampling from
the same data set that yields multiple training sets. Random forest is described as a method that
utilizes bootstrapping to increase the number of training subsets and combines it with decision
trees. The main difference between RF and bagging is that RF uses a random subset of predictors
to build trees, which eventually reduces the variance of Statistical Learning Model (SLM). The
results indicated that RF provides the most accurate ROP predictions, with an RMSE of 7.4, which is
three times lower than the RMSE values of the other models. It should be noted that the ROP values
of the training data set range from 20 to 120 ft/hour.
The optimization of drilling parameters by using Random Forest (RF) regression algorithm
is studied by Nasir and Rickabaugh (2018). The study aimed to optimize drilling parameters to
increase bit life by reducing the wear and tear while maximizing ROP and minimizing Mechanical
Specific Energy (MSE). In the study, drilling parameters from the wells within 20 miles radius are
used as a training data set. Also, it should be noted that the drilling parameters were collected
while using the same motor and bit for the entire vertical interval of these wells. The key drilling
parameters investigated were surface RPM, mudflow rate, WOB, and formation type. A total of
six different formations were encountered while drilling these wells. The models were trained to
estimate ROP using Linear regression, Support Vector Regression (SVR), Random Forest, and
Boosted Tree. The data is split into two categories, the training set (80% of the data set) and the
validation set (20%). Performance indicators for the model were root mean square error (RMSE),
mean absolute error (MAE) and mean average error percentage (MAEP). The results showed that
RF could estimate the ROP values with 12% error after tuning the hyperparameters, while the
other models had a higher percentage of errors. Also, the author stated that the model could
possibly be improved by introducing key variables impacting drilling performance, such as compressive
strength and gamma-ray response.
AlSaihati et al. (2021) studied the possible application of Random Forest (RF) to predict the
anomalies in Torque and Drag (T&D) values to indicate a possible drilling incident such as a
stuck pipe, in advance. While building the RF model, a stuck-pipe case that took place while drilling
the 5-7/8-inch horizontal section of a 15,060-ft-deep well was considered, and the model was built to
indicate possible problems in this interval. It is assumed that the stuck pipe occurred while drilling
the reservoir contact zone (5000 ft) because of high T&D, insufficient transfer of weight-on-bit
(WOB), and poor hole cleaning. The pipe became stuck at a measured depth of 14,935 ft while
tripping out after circulating the hole clean. The drilling parameters used as a variable to train
RF model include hook load, flow rate, rotation speed, rate of penetration, standpipe pressure,
and torque readings at the surface, and the data covers the timeline starting from the beginning
of the horizontal section to one day prior to the stuck pipe incident. The number of data points
used to train the RF algorithm is 7,186, 80% of the total data, and 1,797 data points used to test
the algorithm, which refers to 20% of the total data. The MSE value of 0.06 and R value of 0.97
indicated that the RF algorithm could predict the anomalies accurately. Also, it is stated that the
model detected anomalies for nine consecutive hours prior to the stuck-pipe incident.
Overall, the reviewed studies indicated that the RF machine learning algorithm can be a robust
tool, and its implementation as a solution for common problems is promising. Also, the studies
indicated that RF is one of the most common algorithms for building efficient data-driven models.
CHAPTER 3
BACKGROUND INFORMATION ABOUT DATASET AND PCA
Data collection and processing are essential for every project studying possible data-driven
solutions for common problems. In this research, the drilling data used to train a regression
model to estimate UCS was collected for another research project with similar intent (Joshi 2021).
In this chapter, the background information of the data set is provided. Also, the unique nature of
this data set is discussed in detail. In addition, the fundamental principles of PCA are discussed
in this chapter. Then, the implementation of PCA to indicate feature importance and explained
variance by principal components are given in detail with the results.
3.1 Background Information about the Dataset
As mentioned before, the data set needs to meet certain criteria to be used to train a machine
learning algorithm. In other words, the accuracy, consistency, uniqueness, completeness, and
validity of the data set should meet certain criteria. Possible problems in a data set include
duplicated entries, outliers, missing data points, and inaccurate readings. The data set used in
this research was collected by Dr. Joshi while drilling through samples prepared in laboratory
conditions. While the analog samples were prepared to mimic field drilling conditions, the
cryogenic samples were prepared to mimic various extraterrestrial conditions on the Moon.
Joshi (2021) collected the drilling data using a laboratory experimental setup. The properties of
the analog and cryogenic samples are given in Table 3.1 and Table 3.2, respectively. The initial
plan was to use the complete raw data set collected from each of these samples, but unfortunately,
only the drilling data collected from the analog samples could be utilized to train the RF
regression model for this study. A discussion regarding the uniqueness and challenges of the data
set is provided later in this chapter. The final form of these cellular concrete samples, aka the
analog samples, before and after drilling and during curing and demolding, is presented in Figure 3.1.
The setup is built on a frame with a 3-phase AC motor to transfer power to the masonry
bit. The movement of the masonry bit is supplied by the stepper motor and guide rails. The
experimental setup was controlled by a variable frequency drive and stepper motor. A picture of
the experimental setup is also provided by Joshi (2021).
3.1.2 Weight on Bit (WOB)
The weight-on-bit calculation was based on subtracting the reference axial force measured in air
for each bit from the axial force measured while drilling. The reference axial force of each bit in
air was measured and hardcoded into the model to automate the WOB calculation process (Joshi 2021).
The WOB at any point is calculated as:

$\mathrm{WOB}_i = \text{Total axial force}_i - \text{Total axial force}_{air}$ (3.2)
3.1.3 Rate of Penetration (ROP)
The rate of penetration is calculated from the difference in measured depth over each time interval:

$\mathrm{ROP}_i = \dfrac{\text{Drilling Depth}_i - \text{Drilling Depth}_{i-1}}{\text{time}_i - \text{time}_{i-1}}$ (3.3)
3.1.4 Normalized Field Penetration Index (N-FPI)
The field penetration index was originally defined to evaluate the energy required to overcome
the rock strength for a tunnel boring machine (Tarkoy and Marconi 1991; Hassanpour et al. 2011).
By the original definition, the FPI is calculated as:

$\mathrm{FPI}\left(\dfrac{\mathrm{kN/cutter}}{\mathrm{mm/rev}}\right) = \dfrac{F_n}{P}$ (3.4)

where $F_n$ is the cutter load (Cutter Load (kN) / Number of Cutters) and $P$ is the penetration of
the cutter per revolution (ROP/RPM).

The cutter load ($F_n$) can be replaced by the normalized drilling force (WOB) to calculate the
Normalized Field Penetration Index (N-FPI) for drilling systems. The cutter load $F_n$ for drilling
systems can be calculated as:

$F_n\left(\dfrac{\mathrm{N}}{\mathrm{mm}^2}\right) = \dfrac{\mathrm{WOB}}{\frac{\pi}{4}d^2}$ (3.5)

The final unit of N-FPI is $(\mathrm{N/mm^2})/(\mathrm{mm/rev})$.
3.1.5 Mechanical Specific Energy (MSE)
Teale (1965) defined the Mechanical Specific Energy (MSE) as the amount of energy required
to excavate a unit volume of rock. MSE can be calculated as:

$\mathrm{MSE}\left(\dfrac{\mathrm{MJ}}{\mathrm{m}^3}\right) = \dfrac{\mathrm{WOB\,(N)}}{\frac{\pi}{4}d^2\,(\mathrm{mm}^2)} + \dfrac{10^3\,\mathrm{Torque\,(N{\cdot}m)}\times \mathrm{RPM}}{\frac{\pi}{4}d^2\,(\mathrm{mm}^2)\times \mathrm{ROP\,(mm/min)}}$ (3.6)
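The derived quantities of Sections 3.1.3 through 3.1.5 can be collected into small helper functions. This is a sketch that follows the equations as written (units: d in mm, WOB in N, Torque in N·m, ROP in mm/min); the sample values at the end are arbitrary, not measurements from the dataset.

```python
# Derived drilling quantities following Eqs. (3.3)-(3.6) as written in the text.
import math

def bit_area_mm2(d_mm: float) -> float:
    """Bit cross-sectional area (pi/4) * d^2 in mm^2."""
    return math.pi * d_mm ** 2 / 4.0

def rop(depth_i: float, depth_prev: float, t_i: float, t_prev: float) -> float:
    """Rate of penetration (Eq. 3.3): depth in mm, time in min -> mm/min."""
    return (depth_i - depth_prev) / (t_i - t_prev)

def n_fpi(wob_n: float, d_mm: float, rop_mm_min: float, rpm: float) -> float:
    """Normalized FPI (Eqs. 3.4-3.5): (WOB/area) divided by penetration per rev."""
    f_n = wob_n / bit_area_mm2(d_mm)       # N/mm^2
    p = rop_mm_min / rpm                   # mm/rev
    return f_n / p

def mse(wob_n: float, torque_nm: float, rpm: float, rop_mm_min: float, d_mm: float) -> float:
    """Mechanical specific energy (Eq. 3.6); 1e3 converts N*m to N*mm."""
    a = bit_area_mm2(d_mm)
    return wob_n / a + 1e3 * torque_nm * rpm / (a * rop_mm_min)

example_mse = mse(wob_n=500.0, torque_nm=2.0, rpm=300.0, rop_mm_min=60.0, d_mm=25.4)
```

Keeping the equations in one place like this makes the unit bookkeeping explicit and lets the same functions label every row of a drilling log consistently.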
3.2 Uniqueness of the Dataset and Challenges
Every collected data set includes noise and outliers that need to be removed before it is used for
any machine learning or deep learning application. The process of filtering this noise and these
outliers from the raw data before labeling is called preprocessing. The challenges introduced by
raw data can vary, and handling its unique characteristics is the key to building a robust machine
learning model. Likewise, the data set used in this study needed processing before being used as
training data for machine learning models, but the challenges were not limited to noise. Initially,
torque values were planned to be measured using a bridge-based shaft-to-shaft torque sensor placed
between the shaft and the bit, but the torque readings were incredibly noisy due to electromagnetic
interference, mechanical noise, and ambient noise (Joshi 2021). The electromagnetic interference
was caused by the three-phase AC motor located extremely close to the torque sensor. Vibrations in
the bit while drilling resulted in mechanical noise in the torque readings, while ambient noise from
the Earth Mechanics Institute (EMI) at the Colorado School of Mines campus was also detected by the
torque sensor. The noise introduced by the environment and the experimental setup was successfully
removed in a sampled subset of the data, but this process required a vast amount of time, as signal
smoothing, outlier removal, and significant signal filtering would have needed to be applied to the
complete data set. After consideration, Joshi (2021) decided to build a regression model to predict
torque values from the other drilling parameters and the Variable Frequency Drive (VFD) outputs, as
the filtering process was computationally expensive and required significant signal conditioning.
To clarify how the torque data is calculated, the architecture of the algorithm, called "The Lunar
Material Characterization while Drilling Algorithm," is presented in Figure 3.3.
Figure 3.3: The architecture of the algorithm built by Joshi (2021). © 2021 by Deep R. Joshi,
reprinted with permission from Deep R. Joshi
As described in Figure 3.3, a regression model is used to calculate torque values from the VFD
output and the other drilling parameters right after the raw data is classified into drilling and
non-drilling data. These predicted torque values were then assumed to be equivalent to torque
sensor readings. Another challenge introduced by this dataset is the lack of variation within UCS.

Even though the complete data set contains 55+ observations on eight different variables, only
seven distinct UCS values are present. Initially, this was assumed to be a unique characteristic of
the data set. Later, it was understood that the lack of variability in the target values was
causing significant overfitting while building the RF regression model and leading to artificially
high prediction accuracy when estimating UCS. A histogram of UCS within the complete data set is
given in Figure 3.4. This issue and its impacts on this study are discussed in detail in Chapter 4.
The predicted torque values were likewise initially regarded as a unique part of the data set;
however, predicting torque possibly reduced the variation within the data set and, together with
the other data set challenges, impacted this study significantly. This combination led to changes
in the path of this study.
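A quick diagnostic makes the problem above concrete: counting the distinct target values before training reveals whether a regression model can meaningfully generalize or will simply memorize a handful of labels. The array below is an illustrative placeholder, not the thesis data.

```python
# Count distinct target values as a check on target variability.
import numpy as np

ucs = np.array([10.2, 10.2, 17.5, 17.5, 17.5, 23.1, 23.1, 23.1, 30.0, 30.0])
unique_vals, counts = np.unique(ucs, return_counts=True)
n_unique = int(unique_vals.size)
fraction_unique = n_unique / ucs.size   # a very low value flags weak variability
```

With only a few distinct labels, a random forest can reach near-perfect training scores by routing each sample to a leaf holding its label, which is why this check belongs before, not after, model fitting.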
3.3 Principal Component Analysis
Brief information about Principal Component Analysis (PCA) is given in Chapter 2. PCA is
defined as a dimensionality reduction method that utilizes orthogonal transformation to convert
Figure 3.4: Histogram of UCS - Complete Dataset
a set of observations into a set of linearly uncorrelated variables (Gupta et al. 2016). As
described comprehensively in Chapter 2, PCA is commonly used to build data-driven solutions for
various problems caused by dimensionality. This section explains the fundamental principles and
implementation of PCA to clarify the concept. In this study, the scope of PCA is limited to
feature-importance indication and explained-variance analysis of the data set.
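That limited scope (explained variance and a feature-importance proxy read off the component loadings) can be sketched with scikit-learn on placeholder data: three deliberately correlated columns plus two noise columns, not the drilling dataset.

```python
# Explained variance and first-component loadings from PCA (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
base = rng.normal(size=(200, 1))                        # shared latent factor
correlated = [base + 0.1 * rng.normal(size=(200, 1)) for _ in range(3)]
noise = [rng.normal(size=(200, 2))]
X = np.hstack(correlated + noise)                       # shape (200, 5)

pca = PCA().fit(StandardScaler().fit_transform(X))
explained = pca.explained_variance_ratio_               # sums to 1 over all PCs
pc1_loadings = np.abs(pca.components_[0])               # |weights| of features on PC1
```

Because three columns share one latent factor, the first component captures most of the variance, and its largest absolute loadings fall on those correlated features, which is the feature-importance reading this chapter relies on.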
3.3.1 The Concept of PCA
Kong et al. (2017) stated that “the principal components (PC) are the directions in which
the data have the largest variances and capture most of the information contents of data.”. The
PCs are correlated with the eigenvectors inherent in the largest eigenvalues of the autocorrelation
matrix of the data vectors. The expression of data vectors regarding the PCs is named PCA, while
expressing the vectors regarding the MCs is named MCA.
PCA or MCA is usually one-dimensional, but the actual applications have also been found to
be multiple dimensional. The principal (or minor) components are referred to as the eigenvectors
affiliated with r largest (or smallest) eigenvalues of the autocorrelation matrix of the data vectors,
while r is the number of the principal (minor) components. The subspace covered by PC is called