Chalmers University of Technology | 28 Chapter 6: Result
In order to make sure that no radiation leaks out from the gap between the metal sheets, the metal sheets were designed to overlap in the corners of the cage. The open spaces for the cable inlet and outlet contain the light trap, a metal part that forces the radiation beam to bounce several times; the radiation is attenuated each time the beam bounces inside the light trap. Figure 6.2 illustrates a rendering of the inner cage.
Figure 6.2: Inner cage
6.1.3 Cabinet
The cabinet is the separate partition for the sub-systems, which includes the electronic storage cabinet, high-voltage cabinet, water tank cabinet and computer cabinet. Each cabinet was designed to open from the outside and gives no access to the analysis components. The cabinet doors have metal rims and flanges to prevent water from seeping through the gaps between the cabinet and the door. The electronic storage, high-voltage and water tank cabinets are located at the back of the machine. To simplify the design and save production cost, instead of having a cage to install the components inside, the cabinets have only a back plate to which the sub-system parts are attached, and side walls that partition the cabinet. A concern was raised during the CAD modeling that because the electronic storage cabinet door opens very wide, the cabinet would take up a lot of space at the back of the instrument. The best solution was to use two smaller cabinet doors. The computer cabinet is placed on the front side. The touch screen fastens onto the front plate and the cabinet door opens from the side of the machine. Figure 6.3 displays a rendering of the sub-system cabinets.
Figure 6.3: Sub-system cabinets
6.1.4 Analysis equipment
The analysis equipment moves together left and right when the equipment scans the sample. Because the X-ray equipment is very fragile and valuable, extreme caution must be taken to prevent it from colliding with other components. The main concern about the equipment bracket was positioning. Flexibility to adjust the position is very important because the user may need to adjust one of the components in order to get the most effective results. The brackets were designed with slotted holes to allow easy and extensive adjustment. Figure 6.4 shows a rendering of the equipment bracket with slotted holes.
Figure 6.4: Equipment bracket
6.1.5 Feeding components
The main function of the feeding components is to load and unload the core box. As
mentioned in chapter 4.4.1, the movement of the core box is along the Y-axis direction. The
tray moves in and out through the feeding hatch.
6.1.6 Front hatch
The front hatch is used for performing maintenance inside the inner cage, such as cleaning the X-ray tube, changing the X-ray equipment and regular maintenance of the camera and laser sensor. The hatch is made with an aluminum frame and special lead glass, which makes it possible to look inside the machine. As mentioned in chapter 5.2.1, the opening mechanism of the hatch was changed to a normal swing door. The hatch is kept shut by a magnetic lock, which is activated during the analysis process, and a key lock that allows only authorized personnel to open the hatch. Figure 6.5 shows a rendering of the front hatch.
Figure 6.5: Front hatch
6.1.7 Feeding hatch
The feeding hatch is the most frequently opened hatch because it is used for loading and unloading the samples. Radiation safety was the primary concern when designing this component. It is important to make sure that when the hatch is closed, the inner wall and the hatch door overlap. The hatch is a swing door which opens downwards. Figure 6.6 shows a rendering of the feeding hatch.
Figure 6.6: Feeding hatch
6.1.8 Outer plate
The design of the outer plate was modified to match the refinement of the front hatch and feeding hatch. For example, the front plate was changed so that it mounts a horizontal front hatch instead of a slanted one. The warning lights were placed on the corners of the triangular profiles instead of being mounted on the wall. Finally, a modification of the roof was required because the heat exchanger will be positioned on top of the machine. The roof is intended to prevent water from getting inside the machine, which is why it is sloped at a slight angle so that water flows off the roof. Figure 6.7 illustrates a rendering of the instrument.
Figure 6.7: The instrument
6.2 Limitations
Because of the time limit and the lack of information at the time, the design and CAD model of the instrument could not be fully completed. With the intention of reducing the uncertainty as much as possible, a list of limitations was made to indicate the tasks that could not be done or need further investigation. Below is the limitation list.

• The roll-up door that covers the front hatch and touch screen cannot be placed inside the machine as per the intended design, because of the space limit.

• The CAD model of the electrical components could not be completed because of the time limit, and the list of electrical components was not finished either. However, an estimation of the components and initial sketches were created to confirm that all the electrical components can fit inside the electrical cabinet.

• The cooling system could not be finalized because of new information from the supplier which had not been accounted for. An investigation needs to be done to explore alternative solutions.

• Some small details could not be decided until the selection of parts was done. For example, the fastening holes for the handle, the key lock and the door catch on the cabinet, the front hatch and the feeding hatch could not be made because those parts had not been selected at the time.
7. Discussion
This chapter presents discussions of the thesis process, the theoretical framework and the result. For the thesis process and theoretical framework, reflections on the development process and the methods used are presented. The fulfillment of the specifications and the final design are discussed under the result.
7.1 Thesis process and Theoretical framework
The product development procedure outlined in this thesis follows established design methodologies: establishing the requirements specification, identifying and expressing the functions, generating solution proposals and organizing alternatives, synthesizing concepts, evaluating with respect to selection criteria, and refining the selected concept. This approach was used as a guideline and timeframe for the thesis. The time plan of the process was divided into four phases to ensure that the thesis progressed well and stayed on track as it proceeded from stage to stage. In order to move on to the next phase, the results had to fulfill the requirements of the previous stage. The deliverable of the first stage is the list of requirements. The second stage requires the possible product concepts. Finally, the requirement of the third stage is the potential concept. In summary, the design methodology provides a very useful approach for the development process.
The methods were selected and used as tools to facilitate meeting the goal of each process. The common question that arose during the process was which methods are suitable for the development of the instrument. For the function analysis, the process flow model and the function-means tree were chosen. Even though both methods share the same primary purpose, decomposing the main function into sub-functions, the process flow model was used to show the flow of the input and output operands, while the function-means tree was used to map the functional domain to the physical domain.
The concept table was developed specifically for this thesis. The intention of the method was to collect the solutions in one table and then use the concept evaluation methods to evaluate each solution. The reason behind this is that the functions of the instrument have individual properties, so the simplest way to structure the evaluation process was to select solutions function by function.
The choice of evaluation methods is also a point of discussion; for instance, evaluation based on Go/No-Go was applied in the concept screening process. The reason for using the Go/No-Go approach was that the stakeholders could be involved in the evaluation process and the discussion of pros and cons. This method is a fast and easy way to narrow down the concepts.
7.2 Result
The result of the thesis is the CAD model and drawings of the instrument. Due to limited time and resources, the final concept was modified to create a final design that is producible and reduces the manufacturing cost. The feeding hatch was one of the concerns during the development process. The first idea was to open it as a lift-up door with an opening mechanism that was either motorized or counterweighted. After consulting with several suppliers, it turned out that the concept was not feasible because the hatch is too small to install the
mechanism. In the end, the hatch was simplified to a swing door that opens downward.

Regarding feasibility and cost, the final design was presented to the manufacturing workshop that produced the first prototype. The meeting with the manufacturing workshop gave very useful information regarding manufacturing. For instance, in the CAD model all the sheet metal parts can be assembled using exact dimensions. In practice, however, sheet metal plates cannot be used without a measure of tolerance. Hence, the design should account for the tolerances between two metal sheets. In a corner, the gap between the folded plate and the other plate should be at least equal to the thickness of the metal sheet plus 1 mm, and the tolerance between two plates in a normal assembly should be 1 mm. Finally, the hole in the outer plate should be bigger than the hole in the inner plate; for instance, if the inner plate uses M6, the outer plate should use M8.
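The workshop's rules of thumb above can be written down as a few simple helpers. This is an illustrative sketch only; the function names are hypothetical and the metric-size step of 2 mm (M6 to M8) is a simplification inferred from the single example in the text.

```python
# Illustrative encoding of the sheet-metal rules of thumb described above
# (hypothetical function names, not from the thesis).

def corner_gap(sheet_thickness_mm: float) -> float:
    """Minimum gap between a folded plate and the adjacent plate in a
    corner: sheet thickness plus 1 mm."""
    return sheet_thickness_mm + 1.0

def assembly_tolerance() -> float:
    """Tolerance between two plates in a normal assembly: 1 mm."""
    return 1.0

def outer_hole_size(inner_metric: int) -> int:
    """The outer plate's hole should be one metric size larger than the
    inner plate's hole (e.g. inner M6 -> outer M8). Assumes 2 mm steps,
    a simplification for illustration."""
    return inner_metric + 2

# Example: 2 mm sheet metal, M6 holes on the inner plate
print(corner_gap(2.0))      # 3.0 (mm minimum corner gap)
print(outer_hole_size(6))   # 8 (M8 on the outer plate)
```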
In summary, the result of the thesis has fulfilled the specification to a large extent, even though there are some differences between the expected results and the final design. These differences arose because the requirements changed and problems were discovered during the development process. For example, as mentioned before, the original concepts for the feeding hatch and front hatch would have increased the complexity of the machine; the decision was to use a swing door instead.
Following are the lessons learned during the development of this thesis.

The requirements can change.
During the thesis, several requirements were improved and changed. This is an uncertainty that cannot be avoided. Allowance and preparation should be made for this, and one needs to be aware of how these changes will affect later processes.
The time plan is important.
Sometimes the developer can lose sight of the purpose and focus on the wrong thing.
The time plan is a good pacemaker and ideal tool for keeping all concerned on the right
track. The deliverables of each stage can help clarify the aim of each phase.
The best solution may be unfeasible.
Even though one of the criteria is manufacturability, the evaluation could be based on the judgment of stakeholders who lack expertise in that area. To ensure that the design is feasible, consultation with experts is required.
There is no perfect design in the real world.
In a computer-aided design program, components can be created and assembled easily, but in reality the assembly process is more complicated. The design should also consider practicalities such as tolerances.
8. Recommendation
The developed product is new equipment for a core box scanner. The first model was simplified due to time and resource limitations. For further development, there are many opportunities for improvement. This chapter presents the recommendations and actions that could be taken.
First of all, some of the functions from the final concept were not applied in the final design. For example, for the transportation solution, the ideas of adding corner bumpers, a roll-up door and forklift pockets were proposed during the development process. The stakeholders also agreed with these concept solutions. Unfortunately, practical solutions could not be identified within the thesis due to time constraints. The suggestion is that a study of methods and materials suitable for those functions should be made.
The electrical components and electronic control system should be developed. A sketch of the electrical components that will be installed in the cabinet was made during the development process, but it could not be completed because the number of components could not be identified at that time; the sketch was based on an estimate of the number of components. A further action should be to identify the components and design the electronic control system using the sketch as a guideline.
Safety from radiation is the most important issue. The recommendation is that the machine
should be tested and measured for possible radiation leakage using a reliable method. The
result should not exceed the limits in SSMFS 2008:25, the Swedish Radiation Safety Authority's regulations and general advice regarding radiography, and SSMFS 2008:40, the Swedish Radiation Safety Authority's regulations regarding the use of industrial equipment that contains closed radiation sources and X-ray tubes.
For further development, the instrument should be able to scan different sizes of core boxes. The study indicates that the standard core box differs between countries. For instance, the standards used in Sweden, Norway and Denmark are:

• Sweden: 455 mm wide x 1050 mm long.
• Norway: 343 mm wide x 1100 mm long.
• Denmark: 400 mm wide x 1050 mm long.

The trend is that core boxes are getting narrower because of weight issues. The developed instrument can analyze a core box of length 1050 mm. The next model should be able to analyze a core box 1100 mm long.
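The capacity comparison above can be sketched in a few lines. The box dimensions and the 1050/1100 mm capacities come from the text; the `fits` function and the data structure are illustrative only.

```python
# A small sketch checking which standard core boxes the scanner handles,
# using the dimensions listed above (the helper itself is illustrative).

CORE_BOXES = {
    "Sweden": (455, 1050),   # (width mm, length mm)
    "Norway": (343, 1100),
    "Denmark": (400, 1050),
}

def fits(box: tuple, max_length_mm: int) -> bool:
    """True if the box length is within the instrument's capacity."""
    _width, length = box
    return length <= max_length_mm

current_capacity = 1050  # developed instrument
for country, box in CORE_BOXES.items():
    print(country, fits(box, current_capacity))
# Sweden and Denmark fit; Norway (1100 mm) requires the next model.
```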
9. Conclusion
The purpose of the thesis was to generate a production-ready concept of an X-ray instrument for analysis of rock samples in field use. The thesis has fulfilled its purpose. It has improved the function of the existing prototype by making it faster and easier for the user to scan core samples. The final concept includes solutions for loading and unloading the core box, the movement of the X-ray equipment, transport, maintenance, environmental control, layout, safety and outer design.
Product development methods and tools were used throughout the thesis work. Function analysis (including the Process flow model and the Function-means tree), the Concept table, concept evaluation (the Elimination matrix, evaluation based on Go/No-Go screening and the Kesselring matrix), Failure mode and effects analysis, and Prototyping were applied intensively and contributed greatly to the success of this thesis.
In conclusion, the developed instrument can analyze multiple samples in a standard core box. It is possible to easily transport the machine to a mine or exploration site. Regarding maintenance, the machine was designed to ensure that users can maintain the equipment easily. It can withstand a rough and tough environment. The layout of the machine was designed to be as compact as possible. Regarding safety, the concept was generated with respect to the Swedish Radiation Safety Authority's rules. Finally, the outer design was proposed by the product designer.
The final design of the developed instrument includes the profile structure, inner cage, sub-system cabinets, analysis equipment, feeding components, front hatch, feeding hatch and outer plates. The prototype was created in a computer-aided design program called Autodesk Inventor. The CAD model of the instrument was presented to the manufacturing workshop to ensure that the design is possible to produce. For the sub-systems, consultation and advice were sought from experts in the specific areas.
Acoustical simulation of power unit encapsulation for construction and mining applications
Master's Thesis in the Master's programme in Sound and Vibration
Anton Golota
Department of Civil and Environmental Engineering
Division of Applied Acoustics
Vibroacoustics Group
Chalmers University of Technology
Abstract
Modern drilling equipment is normally driven by a dedicated hydraulic power unit, sometimes mounted on the machine and sometimes a standalone unit. The power unit is the most important noise source when drilling using rotary methods. The power unit is normally equipped with an encapsulation in order to protect the components within the unit from the surrounding environment and to protect the operator and close-by workers from hazards like rotating components and noise. Such an enclosure has to be mechanically robust with high noise insulation and sufficient cooling capacity.

The purpose of this study was to examine possible concepts in order to find the optimal solution fulfilling the criteria above, i.e. to find a mechanically robust solution for an enclosure with good cooling capability and good noise reduction. The study consists of three parts. The first part is a general description of the noise insulation capability of the power unit. This part explains measurements that were done on the complete power pack and on its components. In the second part, the building of computer models of separate enclosure components using Finite Element Analysis (FEA) and Statistical Energy Analysis (SEA) techniques is explained. Also, the validation of the modeling results was done using the results from measurements. The concept presented in the second part of the study also takes the effect of airflow into account. In the third part, refined modeling of critical parts like air in- and outlets using acoustical FEA was done. Based on the FEA modeling, an improved prototype baffled panel was built. The computer models developed were then validated using results from measurements. Sound reduction properties of the improved enclosure were estimated.

Results described in this thesis show that an acoustical computer model of the power
1. Introduction
Different theoretical models [Lyon63], [Jac66] for the sound reduction of enclosures were developed during the past decades. Lyon built models for different frequency ranges. He investigated the following cases: wall and air cavity are both stiffness controlled; cavity is stiffness and wall is resonance controlled; wall and cavity are both resonance controlled. The Jackson model assumes that the enclosure and the source are infinite. In his model he showed that negative transmission loss is possible at low frequency. Also, Jackson mentioned that the Helmholtz resonator effect could occur in an enclosure with an opening.
In the article [Old91-1] published by Oldham and Hillarby, the authors developed low- and high-frequency models of acoustical enclosures. In the second part of the article [Old91-2] they validated their models by comparing predicted and experimental results. One of the suggested models was developed with the help of statistical energy analysis.
In the article [Per10] Pereira, Guettler and Merz used a hybrid SEA-FE model developed in the VA-one software to model the interior noise in a vehicle. They built FE models of differently shaped leaks and investigated their transmission loss properties, then SEA models were populated with those results.
Different aspects of sound transmission through apertures, as well as the negative transmission phenomenon, are explained in the articles [Sau70], [Old93] and [Mec86]. A theoretical estimation of the transmission loss of small circular holes and slits was done by Gomperts and Kihlman in their publication [Gom67]. They compared results obtained from their model with measured transmission loss. The authors claimed that even small slits transmit a considerable amount of sound energy over the whole frequency range. In the article [Sga07] the authors present different theoretical models that allow the prediction of transmission loss through openings of different size and geometry. They investigated the effects of diffuse-field and normally incident sound loads on transmission loss. An experimental procedure for measuring the transmission loss of apertures is described in [Tro09].
In the article [Mug76] Mugridge investigated different types of fans from acoustic and aerodynamic points of view. He showed that radiator properties are linked to fan performance and that the noise emitted from the cooling system can be reduced with careful selection of fan and heat exchanger. In his research Mugridge showed that increasing the area or the thickness of the radiator's plates could decrease the sound power level emitted from the fan by up to 13 dB. Tandon [Tan00] explains ways of reducing noise from machines. He achieved a fan noise reduction of 10 dB only by increasing the mass of the fan base.
where I is the average intensity over the scanned surface [W/m²] and S is the cross-sectional area of the baffled panel [m²]. The transmission loss can be found from equations 2.19 and 2.20.

The calculated and measured transmission loss can take negative values, which is also found in the results of the theoretical models and the experiments in the literature (see [Old93], [Sga07], [Tro09], [Sau70], [Mec86], [Gom67]). Of course, in reality the transmitted power cannot be larger than the actual incident power. There are three assumptions in the formulas for the calculated and the measured transmission loss that may cause the negative values. They are all related to the estimation of the incident power. First, for larger openings the energy density is considerably larger in the neighborhood of the opening, while the calculations assume a constant energy density in the whole enclosure or sending room. Second, a significant amount of power can be transported at oblique angles through large openings; the calculations in the SEA model and in the experimental results assume that only the normal component of the oblique waves transports power through the aperture face. Third, for small apertures the edge effect may be significant, so considering only the power incident on the area of the aperture (and not including the edge effect) leads to an underestimation of the incident power. This can give a transmission coefficient larger than unity, i.e. a negative transmission loss. The error in the first and the third assumptions originates from the increased energy density locally around the aperture [And12].
The apertures in the enclosures can create Helmholtz resonator cavities, which increase the noise emitted from a source housed within them. The transmission loss through the opening can be negative in the vicinity of the aperture's resonant frequencies [Lon06]. The transmission loss behavior depends on the size and geometry of the aperture and on the characteristics of the incident acoustic field.
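The connection between an overestimated transmission coefficient and a negative transmission loss can be made concrete with the standard definition TL = 10·log10(1/τ), where τ is the ratio of transmitted to incident power. This is a minimal sketch, not taken from the thesis, showing that an apparent τ above unity (caused by the underestimated incident power discussed above) directly yields a negative TL.

```python
import math

def transmission_loss_dB(tau: float) -> float:
    """Transmission loss from the (apparent) transmission coefficient
    tau = W_transmitted / W_incident: TL = 10*log10(1/tau)."""
    return 10.0 * math.log10(1.0 / tau)

# tau <= 1: the ordinary case, positive transmission loss
print(transmission_loss_dB(0.01))  # 20.0 dB

# tau > 1: arises when the incident power is underestimated
# (edge effects, oblique incidence); the computed TL becomes negative.
print(transmission_loss_dB(2.0) < 0)  # True
```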
2.4. Statistical Energy Analysis
Statistical energy analysis (SEA) has been widely used and applied to different noise and vibration control problems. SEA allows the calculation of the energy flow between connected resonant systems. Statistical analysis does not give exact information on the system behavior; instead it presents average values over the frequency band, averaged over an ensemble of systems which are nominally identical to the actual one but have a certain statistical spread. Subsystems with many local modes are typically represented using SEA subsystems. In SEA the local modes of the subsystems are described statistically and the average response of the subsystems is predicted. It is usually not necessary to provide many details when modeling subsystems using SEA. Therefore, SEA is suitable for modeling vibro-acoustic systems at the design stage, when detailed information about system properties is not
where E is the total energy in the system in the frequency band, n is the modal density, and Δω is the bandwidth.

With the assumption that each resonant mode in the system has the same energy, and that the coupling of each individual resonant mode of the first system with each resonant mode of the second system is approximately the same, the following equation can be written:

η21 / η12 = n1 / n2   (2.30)

Equation 2.30 indicates that when E1 = E2 (equal total energies in the two systems), more energy is transferred from the system with the smaller modal density to the system with the bigger modal density than in the other direction.

Combining equations 2.28, 2.29 and 2.30 gives the following formula:

W12 = ω n1 η12 (Em1 − Em2) Δω   (2.31)

where W12 is the net power flow between system 1 and system 2 in the band Δω centered at ω, and Em1, Em2 are the modal energies of system 1 and system 2.
The principle of the SEA method is given by equation 2.31, which is a simple algebraic equation with energy as the independent dynamic variable. It states that the net power flow between two coupled systems in a narrow frequency band, centered at a frequency ω, is proportional to the difference in the modal energies of the two systems at the same frequency. The flow is from the system with the higher modal energy to the one with the lower modal energy.

Equal energy of the modes usually exists if the wave field is diffuse; therefore, SEA works better in the middle and high frequency range. At low frequency, finite element analysis describes each mode explicitly [Ver06].
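The coupling relation of equation 2.31 can be checked numerically. The sketch below uses purely illustrative parameter values (not measured data from the thesis) and simply confirms the sign behavior stated above: the net flow is positive when subsystem 1 has the higher modal energy and reverses when the modal energies are swapped.

```python
# A minimal numerical sketch of the two-subsystem coupling relation
# (equation 2.31); all numbers below are illustrative, not measured values.

def net_power_flow(omega, n1, eta12, Em1, Em2, delta_omega):
    """Net power flow W12 from subsystem 1 to subsystem 2 in the band
    delta_omega centered at omega, per equation 2.31."""
    return omega * n1 * eta12 * (Em1 - Em2) * delta_omega

omega = 2 * 3.141592653589793 * 1000    # band center frequency, rad/s
n1 = 0.05                               # modal density of subsystem 1
eta12 = 1e-3                            # coupling loss factor
dw = 2 * 3.141592653589793 * 100        # bandwidth, rad/s

# Power flows from the subsystem with the higher modal energy...
print(net_power_flow(omega, n1, eta12, 1e-4, 2e-5, dw) > 0)  # True
# ...and reverses sign when the modal energies are swapped.
print(net_power_flow(omega, n1, eta12, 2e-5, 1e-4, dw) < 0)  # True
```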
2.5. SEA in VA-one
The VA-one software is a vibro-acoustic tool based on the methods of statistical energy analysis. It allows the construction of mathematical models of the energy flow in complex structures. The implementation of statistical energy analysis in VA-one is based on a wave approach. In the wave approach, which was used here, a system is discretized into a series of substructures (beams, plates, shells, acoustic ducts, acoustic cavities) that support wave propagation. Each substructure contains a number of wave types, such as bending, extensional and shear waves. Each of these wave types is represented by a separate SEA subsystem. The subsystem can be viewed from the
modal and the wave viewpoints. From the modal point of view the system is a collection of resonant modes, and from the wave point of view it is a collection of propagating waves.

The SEA model consists of three main model objects: subsystems, junctions and load sources.
2.5.1. SEA subsystems
The SEA subsystem objects are used to create various structural and acoustic components that transmit energy through a vibro-acoustic system.

The SEA module of the VA-one software specifies three main types of subsystems:

• SEA structure
• SEA cavity
• SEA semi-infinite fluid
The dimensions of the structure and cavity subsystems are assumed to be large, or uncertain, compared with the wavelength. These subsystems contain both direct and reverberant fields. The semi-infinite fluid describes only the direct field propagation.

The SEA plates and shells are used to describe two-dimensional wave propagation in structural systems. A plate consists of a surface with three or more boundary edges defined by nodes.

There are four main types of plate subsystems: flat, single-curved, cylinder and double-curved shells. The differences between the subsystems relate to whether stiffening effects are accounted for when calculating the properties of the wave fields of the subsystem. Only flat plates were used in the models described in this work; the curvature is negligible in the flat plate.

Each plate subsystem can refer to the physical properties of five different types of plates: uniform, sandwich, composite, laminate and ribbed. Uniform and ribbed plates were widely used in the SEA models described in this work. Wave field properties for the uniform plate were calculated using thin plate theory. The presence of ribs influences the wave field properties of the ribbed plate. Ribs are defined by the physical properties of a beam and their position on the plate. The beam property calculator script computes the physical properties of a beam for typical beam sections.

The SEA acoustic cavity subsystems are used to represent wave propagation in three-dimensional space. All cross-sectional dimensions of the cavity are assumed to be large compared with a wavelength. The acoustic cavity consists of a set of faces that enclose a volume of acoustic fluid. The properties of the wave field are based on the speed of sound within the cavity. The overall damping of the cavity can be specified either as a damping loss factor or as an absorption calculated from the noise control
treatments, or as an overall average absorption for the cavity. The absorption computed from the noise control treatment was assigned to the cavities used in the models here. The absorption of a cavity relates the damping of the cavity to the surface area of the cavity. The dissipated power scales mainly with changes in the surface area of the cavity, not with the volume of the cavity. A rigid boundary condition is assumed for the SEA acoustic cavity. To model a certain cavity face with a transparent boundary condition, one should connect it to an adjacent acoustic cavity with special properties or to a Semi-Infinite Fluid object.

The SEA Semi-Infinite Fluid (SIF) object is a sink, and not exactly an SEA subsystem, since it does not contain a reverberant field. In the SEA equations it appears as a damping, and it can be used to predict the sound pressure that radiates from the subsystem into an unbounded exterior acoustic fluid. VA-one calculates the power radiating into the semi-infinite fluid under the following assumptions: first, that radiation from the subsystem occurs into a baffled acoustic half space, and second, that the vibration fields of the subsystems connected to the SIF object are uncorrelated.
2.5.2. SEA junctions
The junctions are used as connections between the various subsystems in the model. They describe the way in which energy is transmitted between the different subsystems. There are three types of junctions in VA-one:

• Point
• Line
• Area

The point junction assumes that the connection is small compared with a wavelength. The line junction assumes that the connection is large compared with a wavelength. The area junction assumes that the connection is finite and baffled. All individual junctions are assumed to be incoherent. A junction can also be a hybrid junction that couples FE and SEA subsystems together. All types of junctions were used in the models described in this work.

The point junction describes the transmission of vibration energy between coupled SEA subsystems. It can be used to describe connections between subsystems that are small compared with a wavelength.

The line junction describes the energy flow between SEA subsystems coupled along a line. It describes connections between subsystems that are continuous and large compared with a wavelength.

The area junction represents energy transmission between an SEA plate or shell and an acoustic cavity, or between two acoustic cavities. An FE area junction is used to
couple a face of an FE structural subsystem to the nearby FE acoustic subsystem. The hybrid area junction is used to couple an FE structural or acoustic subsystem with an SEA fluid subsystem. The hybrid area junction assumes a rigid baffle boundary condition. The impedance of each SEA subsystem is projected onto the FE mode shapes in the model.

Leaks and apertures of different shapes can be added to an SEA area junction. VA-one supports rectangular, circular and slit types of apertures. A user-defined leak with a user-defined transmission loss spectrum can also be assigned to the junction area.
2.5.3. SEA load sources
The sources are used to model energy injection into the subsystems of the vibro-acoustic system. The implemented models were excited with either user-defined power, diffuse acoustic field excitation, or constraint excitation.
• User-defined power. This type of source is used as a direct user-defined model of the power that is applied to the subsystem.
• Diffuse acoustic field (DAF). This type of source is used to model a diffuse acoustic pressure load. It can be applied to the face of an SEA subsystem or to an FE face. The diffuse acoustic field is characterized by a band-limited RMS pressure spectrum that defines the surface pressure across the subsystem face. The surface pressure is an average of the surface pressure at a number of positions across the subsystem. In a reverberation chamber the blocked surface sound pressure level is 3 dB higher than the interior sound pressure level. Diffuse field excitation should be used in conjunction with the semi-infinite fluid when describing the excitation of a subsystem.
This type of source is also used to model the diffuse acoustic pressure load applied to an FE face. The DAF excitation is then represented by a "blocked cross-spectral force matrix" for the FE model. For a diffuse field load on a given FE face, the blocked force is computed using a special diffuse field reciprocity relationship, which relates the blocked force to the radiation impedance of the face. For an FE subsystem, the radiation impedance is computed using the hybrid area junction formulation, assuming that the face is baffled and radiates into a semi-infinite half space.
• Constraint excitation. This type of excitation fixes the response of a subsystem at a known level. The input power that must be supplied to the subsystem in order to satisfy this constraint becomes the unknown in the SEA equations.
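The SEA equations referred to above balance input, dissipated and exchanged power. A minimal sketch for two coupled subsystems, assuming the standard steady-state SEA power balance with damping loss factors and coupling loss factors (the numerical values in the usage note are invented for illustration), is:

```python
import numpy as np

def sea_energies(omega, eta, eta_c, P):
    """
    Steady-state SEA power balance for two coupled subsystems.
    eta   = (eta1, eta2)    damping loss factors
    eta_c = (eta12, eta21)  coupling loss factors
    P     = (P1, P2)        input powers [W]
    Returns the subsystem energies (E1, E2) [J] from
    P_i = omega * (eta_i * E_i + eta_ij * E_i - eta_ji * E_j).
    """
    eta1, eta2 = eta
    eta12, eta21 = eta_c
    A = omega * np.array([[eta1 + eta12, -eta21],
                          [-eta12, eta2 + eta21]])
    return np.linalg.solve(A, np.array(P, dtype=float))
```

For example, with 1 W injected into subsystem 1 at 500 Hz, `sea_energies(2 * np.pi * 500, (0.01, 0.02), (0.005, 0.003), (1.0, 0.0))` returns the energy stored in each subsystem; for a constraint excitation the same linear system would instead be solved for the unknown input power.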
2.6. Finite Element Method (FE) in VA-one
Subsystems with very few local modes are often best represented with Finite Element (FE) subsystems. In FE, the local modes of the subsystems are described deterministically, based on detailed information about the local properties and boundary conditions of the subsystem. The accuracy of the results obtained with an FE model depends on how explicitly the properties and boundary conditions of the subsystem are described. FE subsystems are suitable for describing the response of the first modes and for giving detailed answers to design questions regarding the local response of a subsystem. VA-one can model both FE structural and FE acoustic subsystems.
The FE acoustic cavity subsystem is used to represent enclosed acoustic fluids in the vibro-acoustic model. Such a subsystem can be used to extend an SEA model using hybrid junctions. An FE acoustic cavity subsystem can be created by meshing an existing SEA acoustic cavity subsystem.
The FE faces can be created from elements on the skin of the acoustic cavity. The faces are the interfaces between the SEA and the FE subsystems. They are also used for applying Noise Control Treatments (NCT) or excitation to the FE acoustic cavity.
The FE structural subsystem is used to represent structural components in the vibro-acoustic model that are stiff or that have relatively few modes.
In VA-one, the mode shapes of the FE subsystems are obtained and used as basis functions to describe the response of the FE subsystems in the model. At each frequency of interest a modal dynamic stiffness matrix is assembled. This matrix accounts for the dynamic stiffness of the modes and for the mass, stiffness and damping of any NCTs applied to the FE subsystems.
The excitation applied to the system is represented by an assembled modal cross-spectral force matrix. This is a complex full matrix that defines the auto-spectrums and the cross-spectrums of the forces applied to the modes. A full random vibration analysis is then performed and the modal displacement response is computed at each frequency of interest. The response across the various FE subsystems is then obtained by 'recovering' the nodal response data from the modal responses.
2.6.1. NCT, Foam and PEM
The Foam module was used to create advanced models of the noise treatment. With this module it is possible to predict the structural-acoustic effects of a complex noise treatment that consists of several layers. The treatment layers can be applied either to FE structural or acoustic subsystems or to SEA subsystems. The Foam module provides six different layer models to recreate foam and fibrous materials. The seventh layer is a fluid, and it can be used to insert gaps between the layers to model unbonded conditions between layers.
Noise Control Treatments (NCT) are multi-layered poro-elastic materials designed to isolate structural and acoustic cavities and to provide damping and absorption to the individual subsystem. The Treatment Lay-up is one way to model the noise control treatment, and it was used in the models described in this work. The Treatment Lay-up calculates the mathematical model of the lay-up based on the behavior of the individual layers, and it can be applied to the faces of FE or SEA subsystems. It is assumed that the NCT is modeled as an infinite layer which interacts with a finite region.
The PEM is based on a finite element implementation of the poro-elastic, elastic and acoustic equations of motion. For each frequency and each group of contiguous PEM subsystems, the PEM solver computes the finite element dynamic stiffness matrices of the group of elements. It then computes the coupling matrices of the PEM elements with the degrees of freedom of the FE structural and acoustic subsystems coupled to the PEM group. After that, it projects the coupling matrices onto the structural and acoustic modes and condenses the PEM degrees of freedom out of the matrix equation of the coupled system to obtain the modal impedance matrix of the PEM group.
2.6.2. Hybrid and custom calculated transmission loss
The hybrid transmission loss calculates the transmission loss between an SEA Diffuse Acoustic Field and an SEA Semi-Infinite Fluid separated by the FE subsystem. The hybrid transmission loss is computed by finding the net power radiated into the Semi-Infinite Fluid and then normalizing it by the incident power. The incident power is calculated based on the sound pressure over the area of the face.
It is possible for the TL to be negative. The TL indicates the power transmitted into the receiving fluid normalized by the net power that is incident on a specified area in a diffuse acoustic field. A negative TL indicates that more power is transmitted into the receiving fluid than is incident on a blocked panel of the same area. Because of edge effects, the actual power removed from the source fluid can be greater than the power that would be incident on the panel if it were blocked.
The custom calculated transmission loss can be obtained in an acoustic cavity - panel under test - SIF scenario. The volume of the acoustic cavity should be virtually increased (e.g. to 1000 m³). If the acoustic cavity is excited with a constraint pressure, then the incident sound power can be calculated with equation 2.25 and the transmitted power can be collected from the SIF object. The transmission loss can then be obtained from the ratio of the incident to the transmitted sound power (see equations 2.19 and 2.20).
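The ratio-based definition can be sketched directly in code. The diffuse-field incident power relation W_inc = p_rms² S / (4 ρ c) used below is the standard textbook expression and stands in for equation 2.25, which is not reproduced in this chunk; the air property defaults are assumptions.

```python
import math

def diffuse_incident_power(p_rms, area, rho=1.21, c=343.0):
    """Sound power incident on a face of given area [m^2] in a diffuse
    field with RMS pressure p_rms [Pa]: W = p^2 * S / (4 * rho * c)."""
    return p_rms**2 * area / (4.0 * rho * c)

def transmission_loss(W_incident, W_transmitted):
    """TL = 10 log10(W_inc / W_trans) [dB]. The value is negative when
    more power is transmitted than is incident on the blocked panel
    of the same area, as discussed for the edge effects above."""
    return 10.0 * math.log10(W_incident / W_transmitted)
```

For instance, `transmission_loss(1.0, 0.001)` gives 30 dB, while any case with more transmitted than incident power yields a negative TL.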
3.2. Airflow impedance
As the air flows through the enclosure it will experience losses due to different obstacles. A convenient way of expressing static pressure loss is in terms of the velocity head at the specific point. For example, there are friction losses at the inlet (see point 1 in Figure 3.1). These losses can be expressed as the ratio of the static pressure loss at the inlet to the velocity head. The losses were judged to be equal to one velocity head [Ste91]. Equation 3.5 can then be used to find the actual pressure loss, in centimeters of water, based on the velocity of the airflow at that point.
H_v = (V / 1277)^2    (3.5)

where H_v - velocity head, [cm H2O]
V - air velocity, [cm/s]
and the constant 1277 is a simplified value that contains the acceleration of gravity and the air density at 20 ºC.
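Equation 3.5 translates directly into code. The following small sketch also expresses a loss as a number of velocity heads, as in the text; the function names are illustrative.

```python
def velocity_head(air_velocity_cm_s):
    """Velocity head H_v in cm of water for air at 20 C (equation 3.5).
    The constant 1277 lumps together gravity and air density."""
    return (air_velocity_cm_s / 1277.0) ** 2

def pressure_loss(air_velocity_cm_s, n_heads):
    """Static pressure loss [cm H2O] expressed as a multiple of the
    velocity head, e.g. n_heads = 0.93 (rounded to 1.0) for a plain
    duct inlet [Ste91]."""
    return n_heads * velocity_head(air_velocity_cm_s)
```

For example, an air velocity of 1277 cm/s corresponds to exactly one velocity head, i.e. a 1 cm H2O loss per velocity head lost.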
If more energy is lost in the enclosure, a larger fan is required to supply that energy. The air velocity must be low to avoid large losses. A system with high losses will require a fan that can supply the needed cooling air at high pressure.
Since a detailed airflow analysis of the encapsulated power pack is out of the scope of this work, a simple model was built to analyze the pressure drops. The model was based on the airflow inside a duct with inlets, outlets, and geometry discontinuities (see Figure 3.1). At the points marked with numbers, maximum static pressure loss is expected.
Figure 3.1.: Sketch of the simplified model used to describe the airflow impedance.
The losses in terms of velocity heads for the points marked in Figure 3.1 are described below.
1) Air inlet.
The number of velocity heads lost depends on the type of duct opening. In our model plain duct ends were used. A duct end of this type can be expected to show a static pressure loss of 0.93 velocity heads, which can be rounded to 1.0 velocity head [Ste91]. The static pressure loss in terms of velocity heads can be found from the following equation:
4. Measurements
than the noise from the airflow (see Table 4.1 for the major noise sources). Thus, the noise from the airflow can be neglected in the model.
4.5. Conclusion
The measurements described in this chapter were made in the field and in the laboratory. The obtained sound power levels of the main noise contributors will be used to load the VA-one model of the complete power pack unit. The sound power spectrums of three cases will be used to validate the results of the computer simulation of the same cases. The mean sound pressure level measured one meter away from the power pack will be used to validate the power pack model loaded with the sound power spectrum of the reference source. The airflow measurements are required for the fluid dynamic simulation.
The absorption measurements are needed to compare the results with the treatment designed in VA-one. Although the absorption was measured both in the real encapsulation in the field and in the laboratory under controlled conditions, the agreement between the absorption coefficients obtained and those provided by a sub-supplier is not very good (see Figure 4.10). The absorption coefficients provided by the sub-supplier were measured in an impedance tube, which assumes one-dimensional normal incident waves. On the other hand, the impulse technique that was used in the laboratory and during the field measurements assumes random incidence angles. Moreover, the diffuse field limitations affect the results in the low frequency range. Therefore, the manner in which the samples were installed in the reverberation room, as well as the edge effect and the area covered, could have a significant impact on the absorption coefficients obtained.
The transmission loss of all panels was measured and ranked. Both field and laboratory measurements depend on the diffuse acoustic field assumption; hence the results at low frequencies are not reliable. The field and laboratory results are not directly comparable, since the direct field was dominant during the field measurements while the diffuse field dominated in the laboratory. The measurements in the laboratory were more controlled than the field measurements. Therefore, it was decided to validate the transmission loss obtained from the VA-one models of the front baffled panel against the results from both laboratory and field measurements.
5.6. Conclusion
properties were compared with the data obtained during the field measurements. The transmission losses of the modeled and measured panels show very good agreement (see Figure 5.16). Both the modeled and measured transmission loss spectrums have a dip around 600 Hz. This negative transmission occurs because of the opening.
The models that correspond to the laboratory setup were made with the NCT and PEM representations of the absorption. The mixed results of the transmission loss are in good accord with the measurements (see Figure 5.21). The peak around 800 Hz is absorption controlled, and the correspondence could be improved with a better absorption layer model.
The results of the fluid dynamic simulation show that there is a significant pressure drop due to the obtuse angle of the air inlet. Figure 5.23 shows the velocity contour; blue regions indicate low air velocity. Low air velocity leads to a pressure drop (see Figure 5.24) that affects fan performance. In such conditions the fan has to work harder to deliver the same amount of airflow into the system. Better designed baffles should decrease the pressure drop and consequently improve the cooling system performance.
The complete model of the power pack was built based on the data obtained from the simulations and the measurements. Three models were built: to validate the assigned load on the system, to measure the sound pressure level at one meter distance, and to measure the emitted noise pollution from the working unit. The results of the simulation were compared with the results obtained during the measurements and are presented in Figures 5.30-5.33. The comparison of the spectrums shows very good agreement between the modeled and the real power pack. The junction areas that correspond to the baffled panels were populated with the transmission loss spectrums obtained during the measurements. The junction area that corresponds to the front baffled panel was populated with the measured and modeled transmission loss spectrums. The overall difference between the measured and modeled cases is insignificant; the total difference is around 0.5 dB(A).
6. Improved front baffled panel
The market requires power packs with greater output. An increase in engine power leads to an increase in noise from the engine and the cooling system. Therefore, an improved encapsulation design was proposed. The weakest part of the current encapsulation is the front baffled panel (see Figure 4.16 for the transmission loss of the baffled panels). The front baffled panel of the current encapsulation should be improved in order to achieve better overall sound reduction properties of the encapsulation.
The model of the improved baffled panel was built in the VA-one software. Its transmission loss properties were investigated in a model that simulates the laboratory environment. The fluid dynamic simulation was done in the ANSYS software. A prototype of the improved baffled panel was built and its transmission loss was measured in the laboratory. The SEA model of the power pack was updated with the transmission loss of the prototype baffle, and the overall decrease of the emitted noise was estimated.
6.1. Developing a new baffled panel
The improved front baffled panel must have better sound reduction properties. The dimensions of the improved panel must be the same as those of the baseline one; specifically, the depth must be 30 cm or less. The baffle slot is designed to be L-shaped instead of straight and to follow a smooth wing profile. All slots in the baffled panel are subdivided into two or three parts. The air inlet's shape is changed from rectangular to round (see Figure 6.1). The sides of all slots are treated with absorption material, which should compensate for the decrease of the area of the air inlet.
7. Summary
The computer model of the encapsulation for the power pack used in the surface core drilling rig CT20 was built, and its noise reduction properties were investigated and improved. This work shows that it is possible to build an SEA model of the power pack and populate it with data from simulations of the different power pack components. The results of the simulations were compared with measured data and showed good agreement. This approach makes it possible to build a model during the pre-study stage of new product development. Besides, statistical energy analysis is less sensitive to uncertainties in geometry than the finite element method.
The SEA model of the power pack can be used during the development of the next tier of the surface core drilling rig CT20 unit. In addition, the current model can be used to set demands on new encapsulation properties and to investigate weak parts of the enclosure.
Possible improvements of the computer model and of the power pack unit are listed in the next subsection.
7.1. Future work
The model of the absorption can be improved by creating a VA-one material with the same physical properties as the real absorption. This can be done by measuring foam samples of different thickness in the impedance tube and calculating the Biot properties in the commercial software Foam-X.
An advanced model of the used treatment can be implemented as a poro-elastic material subsystem. In the current version of VA-one this is impossible; therefore, the absorption was modeled as a foam without a visco-elastic layer.
An analytical model for predicting the transmission loss of large apertures with treatment can be developed for random and normal incidence sound. Such a model would reduce the 'cost' of predicting the reduction properties of the whole encapsulation and would significantly decrease the calculation time.
A better prototype baffled panel has to be made to estimate the real improvements of the proposed design. The redesigned baffled panel described in this work shows the idea of how the panel can be improved. The prototype which was built in this work is not durable enough and cannot be used in a real application.
Optimization of bucket design for underground loaders
Master’s Thesis in the Master’s programme Product Development
JONAS HELGESSON
Department of Product & Production Development
Chalmers University of Technology
Abstract
An optimized bucket design is important for increasing productivity and loading performance for underground loaders. Design theories are today difficult to evaluate due to the lack of verification methods. Recent years' development of simulation software and computers has made it possible to verify the design by simulating the loading process. The purpose of this thesis has been to develop and use a simulation model of the loading process for one of Atlas Copco's underground loaders.
A simulation model was developed in the program EDEM. EDEM uses the Discrete Element Method for simulating granular materials, which in this case was blasted rock. Factors such as particle flow, particle compression and loading setup add complexity and uncertainty to the task. Nevertheless, a model that was able to detect force variations from small design changes was developed.
The tractive effort is the horizontal force the loader can generate. This is the critical factor when loading rocks. With use of EDEM, different bucket designs could be evaluated by studying the horizontal force in the simulations. The simulation model was compared with practical tests.
The edge thickness of the bucket lip was the individual design parameter that had the largest influence on the horizontal force; a thin edge generated lower force. In general, a bucket with a sharp and angular shape gave lower forces. The attack angle (bottom angle) had a low influence on the horizontal force.
Key words: Bucket design, bucket filling, DEM simulation, underground loader
Preface
The purpose of this thesis is to develop and use a simulation model of the loading
process for buckets of underground loaders. The thesis was initiated jointly by Atlas Copco Rock Drills AB (Atlas Copco) and Chalmers University of Technology. The
project has been financed by Atlas Copco and performed at the department for
underground loaders in Örebro, Sweden. The time frame for this project has been
January 2010 – June 2010.
The student performing the work has been Jonas Helgesson. Supervisor from Atlas
Copco has been Stefan Nyqvist and assistant supervisors have been Anders Fröyseth,
Andreas Nord and Morgan Norling. Examiner and supervisor from Chalmers has been
senior lecturer Göran Brännare from the Department of Product and Production
Development, Chalmers University of Technology, Göteborg, Sweden.
I would like to thank all in the department for underground loaders at Atlas Copco for
their support and help, especially Stefan Nyqvist, Anders Fröyseth, Martin Hellberg,
Kjell Karlsson and Andreas Nord. I would also like to thank my supervisor at
Chalmers, Göran Brännare.
Örebro, 23rd of June 2010
Jonas Helgesson
1 Introduction
1.1 Project Background
This report is the result of a master’s thesis conducted at Atlas Copco Rock Drills AB
(Atlas Copco) in Örebro during spring 2010. It covers 30 ECTS (European Credit
Transfer System) points.
Atlas Copco wants to investigate the design of the buckets for underground loaders
used for loading rocks. Within the company there are some theories and knowledge
about how the design of the bucket influences the loading performance, but the lack of
verification of this information makes it difficult to optimize the design.
Development of simulation software and computers has made it possible to simulate the loading process of a bucket; this development has opened up possibilities to gain knowledge in this area.
A computer software named EDEM (Engineering Discrete Element Method) has successfully been used for simulations of rock crushing at Chalmers. Other actors have used EDEM for bucket simulations, which emphasizes the possibility of using EDEM for simulating the loading process.
1.2 Company Background
Atlas Copco AB is a Swedish company founded in 1873. Atlas Copco is a supplier of industrial productivity solutions and the product portfolio covers a wide range of areas: compressed air equipment, generators, construction and mining equipment, industrial tools and assembly systems. In total, Atlas Copco had 31 000 employees in 2009, a revenue of 64 BSEK and was active in more than 170 countries. [1]
An American company, Wagner Mining Equipment Company, was bought by Atlas Copco in 1989 [13]. Wagner manufactured underground loaders and trucks. After Wagner's merger with Atlas Copco, Atlas Copco was able to supply equipment for bursting, loading and transportation of ore. In 2005, Wagner's activity was moved to Örebro, where the underground division is centralized today; both development and manufacturing of mining equipment take place in Örebro. All loaders and trucks are today marketed under the Atlas Copco brand.
1.3 Problem Description
To maximize efficiency it is important to get a fully filled bucket in one loading cycle. The tractive effort (horizontal force) is a critical factor related to the bucket filling. By reducing forces with an efficient bucket design, time, wear and energy consumption will be reduced. Sometimes the bucket is not fully loaded in one cycle and the process has to be repeated; this should be avoided.
1.4 Purpose
The purpose of this thesis is to develop a simulation model of the loading process for
buckets of underground loaders. The influence from different design parameters will
then be investigated, to generate guidelines for the design of the next generation of
buckets at Atlas Copco.
Chalmers University of Technology | Another aspect of the thesis is to investigate the potential of the computer software
and look for more areas of application than only design of buckets.
In an educational view the purpose of this thesis is to gain knowledge in the following
areas: The discrete element method, working with simulation programs and get an
understanding for the relation between reality and a virtual simulation.
Figure 1: Atlas Copco's scooptram ST7.
1.5 Limitations
The main focus will be on investigating the bucket design. One moving trajectory for
the bucket will be defined and used in the simulations. The thesis will focus on Atlas
Copco’s scooptram ST7, see Figure 1, with a standard bucket. No full scale testing of
modified buckets will be performed. Other design aspects, such as wear resistance and
rigidity, will not be prioritized.
If time is available, other sizes of loaders will also be investigated.
1.6 Method
A simulation model of the loading process for buckets of underground loaders will be
developed. The model will then be used to investigate which parameters affect the loading result. The final step will be to optimize the loading parameters to achieve an efficient loading process. The program to be used for this task is EDEM; ADAMS may also be necessary. Real tests will be performed to analyze today's existing buckets and to generate results to use for verification of the simulations.
Existing knowledge on the market and within Atlas Copco will be collected and
compiled.
Figure 3: The rock is transported to the truck with the bucket in tramming position.
Figure 4: The loader is unloading the rock into a mine truck.
2.3 Loading Material
In mining, the properties of the material vary between each loaded bucket [16]. The density of the rock varies with the mineral content in the rock and the kind of waste material, e.g. greystone or granite. Since the composition of the material varies over the mountain, the density will vary. Normally an average density is specified for a mine, but this value does not give the whole truth about the material density.
When the rock is blasted it is divided into small pieces with a large variation of size and shape. The differentiation is dependent on rock material, explosive charge and placement of the explosives [16]. The large variation in size and shape makes it difficult to specify an average material for simulation.
Figure 5: Material used for performing loading tests; the yellow ruler has a length of 50 cm.
2.4 General Bucket Background
Compared to standard wheel loaders, the bucket equipped on underground loaders has a more robust design, see Figure 6 and Figure 7. The more robust design is required due to the intensive operating conditions in mines. Severe wear during operation gives the bucket a lifetime of approximately 8000 h; the lip is usually replaced after 500-1500 h. Replacing a worn out bucket is a significant cost for mining companies.
The hard and aggressive loading material is difficult to penetrate with a bucket. Low penetration force is therefore a prioritized area when designing buckets for the mining industry.
The height and width of the bucket are limited by the machine size. The bucket shall be 2-3 dm wider than the machine and not reach above the machine body.
Figure 6: Volvo wheel loader bucket. Figure 7: ST7 GIII standard bucket.
2.5 Bucket Loading Theory
Figure 8 and Figure 9, seen below, are taken from a study performed by Coetzee and Els [4]. The bucket in the study was a dragline bucket and the test results are from a down-scaled test using corn as granular material. Figure 9 shows the percentage of the total drag force during the loading cycle that acts on each part of the bucket. Test conditions in this study are different from the actual conditions for an underground loader. However, several interesting observations can be made. During the entire cycle 25-30% of the drag force acted on the lip, which makes the design of the lip important. When material reaches the rear parts of the bucket, between 300-600 mm (see Figure 9), these parts carry a larger share of the drag force. There were no side plates on the bucket used in the test.
Figure 8: Measured forces for dragline bucket. Figure 9: Forces on each bucket part.
Maciejewski [7] did a study where the aim was to optimize the digging process and bucket trajectories. It was shown that the most energy efficient bucket is the one
bucket trajectories. It was shown that the most energy efficient bucket is the one
where the pushing effect on the back wall is minimized.
Esterhuyse [5] and Rowlands [9] investigated the filling behaviour of scaled dragline
buckets. The bucket with the shortest fill distance was found to produce the highest
peak in drag force. When filling an underground loader the peaks in drag force are
critical, since at these points the machine starts to slip.
An interesting design parameter on the bucket is the attack angle (bottom angle). The
effects of the attack angle have been studied by Maciejewski [7]. Attack angles of 5˚, 15˚ and 30˚ were tested; the results are seen below in Figure 10 and Figure 11. The
bucket was moved into the pile horizontally.
β - inclination of the initial phase of trajectory (equal to 0˚ in test)
α - inclination of soil pile (equal to 50˚ in test)
δ - attack angle
Figure 10: Scheme of rotational cycle. Figure 11: Specific energy versus the attack
angle for different bucket motion.
When only a translational motion was applied to the bucket, the total energy for loading increased dramatically with a higher attack angle, see Figure 11. But when a rotational motion was added to the bucket, the change in energy was close to zero. Atlas Copco's loaders use a rotational motion. According to Maciejewski's study, a variation of the attack angle on Atlas Copco's bucket will have a low impact on loading energy.
2.6 Description of Discrete Element Method
The Discrete Element Method (DEM) is a computational method used for calculating
the behaviour of e.g. a granular material. In the following text a brief introduction is
presented.
In modelling of an ordinary engineering problem, a number of well-defined components are normally involved. The behaviour of each component is either known or can be calculated. The problem is solved by using mathematical functions that describe the individual components' behaviour and their interactions [6]. This is a common approach for solving mechanical problems.
For more complex problems, e.g. components of flexible material, the finite element
method (FEM) is used. FEM divides the component into small elements. The
behaviour of each element is approximated by a simple mathematical description with
finite degrees of freedom. With use of FEM a correct behaviour of the component can
be calculated. FEM is used for individual components/bodies with known boundary
conditions.
Figure 12: Left, typical FEM application. Right, typical DEM application.
A problem such as calculating the motion of a pile of rock generates too complicated computations if every stone is taken into account. The FEM approach is based on the material remaining in the same position throughout the simulation and is therefore not suitable for simulations of a granular material. [6]
The most common method to use for analysing granular material, e.g. a pile of rock, is
DEM. DEM treats the rock pile as a collection of independent units; each stone in the pile is an individual unit and each stone is free to move according to the forces acting on it.
The behaviour of the stones is calculated from contact between them, velocity and
deformability. With use of DEM a good approximation of blasted rock can be reached
if the problem dimensions, material properties and boundary conditions are properly
defined [6]. Compared to FEM, DEM is used for units in motion and FEM is focused
on material deformation. Typical examples of FEM and DEM are displayed in Figure
12.
2.6.1 DEM Program Theory
A DEM program involves many complicated computations. A simplified description
of the computation process follows below:
A timestep, ∆t, is defined in the DEM program. ∆t specifies at what interval the
position and motion of the particles are recalculated. The following loop (1-5)
is run every timestep in the DEM program and describes in more detail how a DEM
program works.
1. Position and motion of each particle is registered
2. Particle contact detection
The problem domain is divided into a cubical mesh. Positions of
particles in the same "cube" are compared to detect whether
intersections appear.
3. Contact zones are determined
4. Intersection distances are determined
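The loop described above can be sketched as a minimal, illustrative Python implementation. The function name, the linear-spring contact law and all numerical values are assumptions made for illustration only; they are not EDEM's actual contact model:

```python
import numpy as np

def dem_step(pos, vel, radii, masses, dt, k=1e5, cell=0.15):
    """One DEM timestep: detect contacts on a cubic grid, apply a
    simple linear-spring normal force, and integrate the motion."""
    forces = np.zeros_like(pos)
    # 2. Contact detection: hash each particle into a cubic mesh cell
    grid = {}
    for i, p in enumerate(pos):
        grid.setdefault(tuple((p // cell).astype(int)), []).append(i)
    for key, members in grid.items():
        # compare particles in the same and neighbouring cells only
        candidates = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    candidates += grid.get((key[0]+dx, key[1]+dy, key[2]+dz), [])
        for i in members:
            for j in candidates:
                if j <= i:          # count every pair once
                    continue
                d = pos[j] - pos[i]
                dist = np.linalg.norm(d)
                overlap = radii[i] + radii[j] - dist   # 4. intersection distance
                if overlap > 0:                        # 3. contact zone found
                    normal = d / dist
                    f = k * overlap * normal           # linear spring contact force
                    forces[i] -= f
                    forces[j] += f
    # 5. Integrate velocities and positions (explicit Euler)
    vel += forces / masses[:, None] * dt
    pos += vel * dt
    return pos, vel

# Two overlapping particles repel along x after one step
pos = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0]])
vel = np.zeros((2, 3))
pos, vel = dem_step(pos, vel, np.array([0.04, 0.04]), np.ones(2), dt=1e-4)
```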
3 Execution
3.1 Practical Test of Scoop Loading
A practical loading test was performed in connection to Lovisagruvan, situated in
Västmanland, Sweden. The test was performed at surface level, above the mine. Two
ST7 machines were used and two different buckets, GII and GIII (see Figure 13), were
mounted on the machines. The buckets had equal capacity, built for a material density
of 2.2ton/m³, and had a volume of 3.1m³.
Figure 13: ST7 GIII bucket, left, and ST7 GII bucket, right.
A pile of greystone, with a density of approximately 1.6ton/m³, was built up to use for
the loading test, Figure 14. The material had an average diameter between 10 and 15cm.
Compared to a normal loading material in a mine this was more loosened up and the
density was lower, which made it an easy material to load.
Two types of loading procedures were tested. In the first, efficient loading was
prioritized, i.e. the aim was a combination of a short loading cycle and a high filling
grade. In the second test maximum filling was prioritized. Each test was performed 10
times with both buckets. The result was generated by measuring the weight of the load
and the time for performing a loading cycle. Other factors were also taken into
account: the operator's opinion and the visual image of the test.
An additional test was performed where the penetration distance of the bucket was
measured. The loader was driven into the pile until the wheels started to slip, without
any tilting of the bucket.
Figure 14: Practical test setup.
3.2 Simulation Model in EDEM
EDEM (Engineering Discrete Element Method) is a program developed in Scotland
by the company DEM Solutions. The program uses DEM to simulate granular
materials in motion. EDEM models the material as a group of individual particles
that interact only at the inter-particle contact points. [11]
Figure 15: Simulation setup in EDEM.
With EDEM exactly the same loading procedure in the same rock pile can be
performed. Different bucket designs can then be simulated and the resulting data can be
used to optimize the bucket design. The simulation setup in EDEM is seen in Figure
15.
3.2.1 Selecting Parameters in EDEM
The material is defined by a number of parameters in EDEM, divided into material
and interaction properties. The interaction is defined between particles and between
particles and bucket. The main problem with DEM simulations is to specify the
interaction properties so that the particles behave in the same way as in reality [3].
The shape of the particles is defined by using multiple spheres that are bonded
together to give a realistic outer shape. The particle seen in Figure 16 was the one
used in the simulations.
Figure 16: Particle used in simulations.
Since the material properties are different in every mine, properties have been selected
to represent an average material.
Material-specific properties can be taken directly from literature, whereas the
interaction parameters are more difficult to specify. Interaction parameters are
confirmed by visually verifying that the simulation behaves as in reality.
Information about simulation parameters has been collected from several sources [10,
11, 14, 15] and the values shown in Table 1 have been chosen for the simulation model.
Table 1: Material parameters used in EDEM.

Material parameters
Material                         Rock      Steel    Ground
Poisson's ratio, ν               0.25      0.3      0.25
Shear modulus, G [MPa]           240       7 000    1
Density [kg/m³]                  3000      8000     1000
Particle diameter [mm]           70-120    -        -

Interaction parameters
Interaction materials            rock-rock  rock-steel  rock-ground
Coefficient of restitution, COR  0.2        0.2         0.2
Static friction, µs              0.65       0.4         0.5
Rolling friction, µr             0.1        0.05        0.05
Material parameters:
Shear modulus, G
The value of the shear modulus (stiffness) has a high impact on the calculation time for
the simulations in EDEM; a high shear modulus of the granular material generates a
very long calculation time. A high shear modulus can also give unrealistic forces
when the material is exposed to compression [17]. To save calculation time
and avoid high force peaks a reduction of the shear modulus is preferred. As long as
the shear modulus is high enough to prevent any large interpenetration of particles the
simulation will generate a reliable result. [8]
24 000MPa is seen as an appropriate value for the shear modulus of rock, but in the
simulations a value of 240MPa has been used. According to DEM Solutions the shear
modulus can normally be reduced 100 times.
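The time saving can be illustrated with the Rayleigh critical timestep, a stability estimate commonly used in DEM (its use here is an assumption for illustration; the source does not state which criterion EDEM applies). Since the critical timestep scales as 1/√G, reducing the shear modulus 100 times permits a roughly 10 times longer timestep:

```python
import math

def rayleigh_timestep(radius, density, shear_modulus, poisson=0.25):
    """Rayleigh critical timestep for a DEM sphere; the timestep scales
    with 1/sqrt(G), so a 100x softer material allows a ~10x longer step."""
    return (math.pi * radius * math.sqrt(density / shear_modulus)
            / (0.1631 * poisson + 0.8766))

# Smallest particle radius 35mm (70mm diameter), rock density 3000kg/m³
dt_full = rayleigh_timestep(0.035, 3000, 24e9)      # G = 24 000MPa
dt_reduced = rayleigh_timestep(0.035, 3000, 240e6)  # G = 240MPa, as used here
# dt_reduced / dt_full ≈ 10: a hundredfold reduction in G gives a
# tenfold longer stable timestep, hence much shorter calculation times
```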
Density
In EDEM the density is specified for solid material; for Atlas Copco's buckets the
density is defined for broken material. Solid material expands by a factor of
approximately 1.5-1.7 when it is broken [16]. In the simulation a standard bucket
designed for a maximum density of 2.2ton/m³ is used, and the bulk material has a
solid density of 3ton/m³, which with an expansion factor of 1.6 gives a density of
1.875ton/m³ for the broken material.
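The conversion above can be written out as a one-line sketch (the function name is illustrative only):

```python
def broken_density(solid_density, expansion_factor=1.6):
    """Bulk density of broken rock from solid density and swell factor."""
    return solid_density / expansion_factor

print(broken_density(3.0))  # -> 1.875, in ton/m³, matching the value above
```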
Interaction parameters:
Coefficient of Restitution, COR
The COR value describes the bounciness of the particles, or the inverse of the
damping. An appropriate COR value is 0.25 for rock-rock and 0.3 for rock-steel. A
lower COR value gives lower force peaks and a more stable simulation [19]; because
of this the COR value was slightly reduced to 0.2 for both rock-rock and rock-steel.
Static Friction, µs, and Rolling Friction, µr
The friction coefficients are exaggerated compared to the real values. This is because
the particles in EDEM have a smooth and rounded surface. A real stone has waviness
on its surfaces and also small irregularities. To get the almost round particles to
behave like realistic stones the friction coefficients are increased.
The base plate exists to give space for the bucket attachment. Some competitors have a
design without a base plate. This gives a smooth inner design, but the disadvantage is
that the rotation point is moved backwards and the centre of mass is moved forward.
3.2.4 EDEM Setup Description
Two different paths of movement in the loading cycle have been used: one where
the bucket lip follows the ground and one where the bucket path is 300mm above
the bottom plate, see Figure 24 and Figure 25.
Figure 24: High simulation cycle.
Figure 25: Low simulation cycle.
Unrealistically large compression forces appear in some simulations in the particles
between the bucket and the ground. This is a software-related problem [17]. One of
the reasons for this is that no crushing of stones occurs in EDEM.
Two different simulation setups give a possibility to evaluate the reliability of the
simulation results. If the two simulations show converging results, it indicates that the
simulations generate reliable results.
An angled plate has been added in the corner of the box. The reason for this is to
reduce the total number of particles, which in turn reduces the calculation time.
Another reason is to avoid unrealistic compression forces on particles that get
stuck in the corner. See the angled plate in Figure 25.
The width of the simulation box is 2800mm and the maximum width of the bucket is
2230mm. This gives a span of 285mm on each side of the bucket, see Figure 26. The
side boundaries are periodic, which means that a particle can pass through the wall on
one side and come out from the wall on the other side. This configuration seems to give
a realistic behaviour of the particles outside the side plates.
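A periodic side boundary of this kind can be sketched in one line (an illustrative model, not EDEM's implementation):

```python
def wrap_periodic(x, width=2.8):
    """Map a coordinate into [0, width): a particle leaving one side of the
    box re-enters at the other, mimicking EDEM's periodic side boundaries."""
    return x % width

print(wrap_periodic(2.95))  # a particle 0.15m past the far wall re-enters near 0.15
```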
Figure 26: The span between side boundary and bucket is 285mm.
One simulation cycle takes 2.5 hours to simulate.
3.2.5 Evaluation of Output from EDEM Simulations
The goal of the simulations was to design a bucket that is easy to fill. In reality a
good measure of this is the amount of load in the bucket after a performed loading
cycle. In EDEM all buckets follow the same path, which gives almost equal loads;
instead, differences are seen in the forces acting on the bucket.
The horizontal force is of large interest since it is directly related to the tractive
effort required for executing the loading cycle. A large vertical force increases the
front wheel grip and gives the machine a higher tractive effort. The forces are
presented in graphs, see Figure 27, and the average force over time was calculated and
used for comparison. The loading cycle is 11s but only the average force for the first
7s was used as a comparison value. During the first 7s the bucket's penetration
capability is important; using the result from the complete cycle adds more uncertainty
to the result.
Figure 27: Graph of horizontal force for GIII bucket in low simulation setup.
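The comparison value described above can be sketched as follows (illustrative Python on a synthetic force signal; the actual EDEM output format is not shown in this work):

```python
import numpy as np

def mean_force(times, forces, t_end=7.0):
    """Average force over the first t_end seconds of the loading cycle,
    used here as the comparison value between bucket designs."""
    times = np.asarray(times)
    forces = np.asarray(forces)
    return forces[times <= t_end].mean()

# Synthetic 11s force signal sampled at 100Hz: a ramp from 60kN downwards
t = np.linspace(0.0, 11.0, 1101)
f = 60.0 - 2.0 * t
print(mean_force(t, f))  # averages only the first 7s, ignoring the tail
```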
The torque around the bucket's rotation point can be plotted; this torque is related
to the bucket's breakout force.
4.2.10 Side Plate Angle
Figure 62: Side plate angle.
Figure 63: Horizontal forces for side plate angle (force per kg load, about 15-18N/kg, plotted against side plate angles of 0-3deg).
Horizontal force is lowest for side plates with an angle of 1 and 2deg.
The outer sides of the side plates were flat for all buckets in this test. A bucket with no
side plate angle but with a flat outer side had a 4.8% larger horizontal force
compared to a normal bucket. A normal side plate has a reinforcement plate along the
side profile.
4.2.11 GET
Figure 64: GET
Load 5065.5kg
Horizontal force per kg Load 14.3N/kg
Vertical force per kg Load -4.1N/kg
Horiz. force compared to normal -10.1%
Horiz. force comp. - high setup -7.0%
GET (Ground Engagement Tools) is a new lip under development at Atlas Copco.
The lip consists of five exchangeable plates made of high-strength steel. As seen in
Figure 64 the lip has a thin edge compared to a normal lip. The bucket's loading
capacity is larger due to a longer lip. This lip shape generates the lowest horizontal
force of all buckets.
Atlas Copco's own field tests have shown an increased loading efficiency of about
10% for this bucket.
5 Conclusions
5.1 Practical Test
The poorer result for the GII bucket is believed to be related to a couple of differences
in the design.
Figure 65: GII bucket with design differences.
The upper part of the base plate interacts more with the material. An edge from a
reinforcement plate in the bottom adds resistance to the material flow. As shown in
chapter 4.2.5, the larger support plate increases the horizontal force. A heel plate is
mounted underneath the GII bucket, not visible in Figure 65;
this plate adds a friction force when it is in contact with the material. The GIII bucket is
slightly wider than the GII bucket, which can make the GIII bucket easier to fill.
A factor that adds some uncertainty to the test was the more worn tires on the GII
machine, which could give that machine a lower tractive force.
5.2 Investigating the Influence of Design Parameters
5.2.1 GII Bucket
As the practical test showed, the GII bucket is more difficult to fill also in the
simulation setup; see chapter 5.1 above.
5.2.2 Lip Profile
Overall the differences in horizontal forces are small and it is difficult to draw
conclusions from the result. The hypothesis was that a more edgy shape would
generate lower forces; an explanation for why this was not seen more clearly may be
that small angles were used, since larger angles would decrease usability.
The V/straight lip did have a lower horizontal force than the normal GIII bucket, for
both the high and the low cycle. This small change would probably also decrease wear
on the lip corners, which is critical.
5.2.3 Base Plate Profile
A bucket without a base plate but with the wrapper moved inwards to keep a constant
volume, Figure 39, did not decrease the horizontal force; it even increased the
horizontal force by 1.7%. To verify the result a normal bucket without a base plate
was tested, and the forces for this bucket did decrease. From the result of this test
there is no reduction in horizontal force when replacing the base plate by moving the
wrapper inwards.
Compared to a bucket without a base plate but with constant volume, the base plate
gets into contact with the material earlier, but the wrapper plate in the pockets gets into
contact with the material later. This generates only a small difference between the two
buckets.
5.2.4 Side Plate Profile
The lower part of the side plate has the most interaction with the loading material, and
the design of this part is therefore important. Side plate profile 2 has a sharp shape in
the lower part, and this is the reason why the horizontal force is lower for this profile.
The poor result for profile 4 partly depends on the reduction in loaded material for that
design.
The test with a thinner lip edge showed that a thinner edge largely reduces the forces.
This should also apply to the side profile edges, but this has not been confirmed by
tests.
5.2.5 Support Plate
The results for the different support plate designs were logical: the forces increase
with larger support plates. The difference of 2.7% without a support plate is large
enough to motivate prioritizing a design without support plates.
By having a smoother angle at the end closest to the lip, the forces may be reduced
without removing the support plates.
5.2.6 Edge Thickness
The edge thickness has a large influence on horizontal force; a thinner edge generates
a smaller horizontal force. From the test this is seen as the most important design
parameter.
5.2.7 Lip Chamfer Angle
A logical result for the chamfer angle is that the horizontal force decreased for lower
angles, which give a sharper edge. For an edge thickness of 8mm a chamfer angle of
20deg had the lowest horizontal forces for both the high and the low setup. This is
probably related to simulation variation and it is risky to conclude that a chamfer angle
of 20deg is better than 10deg, but it could at least be seen that the difference between
10deg and 20deg has a negligible effect on the horizontal force.
For a lip with 25mm edge thickness the chamfer area is small and the chamfer angle
has a low influence on the forces.
5.2.8 Lip Angle
A horizontal lip was shown to generate the lowest horizontal force. The hypothesis was
that a lip angled downwards would increase the vertical force and improve tire
grip, but this was not shown in either the high or the low simulation setup.
5.2.9 Attack Angle
In the results for the different attack angles no clear trend was distinguished. The
hypothesis was that a large angle would generate higher vertical forces, but that
was not shown. The result did coincide with the study performed by Maciejewski [7],
see Figure 10 and Figure 11. For a rotational loading motion the attack angle has a
small influence on the forces. Without rotation a high attack angle will increase the
horizontal force.
5.2.10 Side Plate Angle
A side plate angle of 1 and 2deg showed the lowest forces in this test, but all buckets in
the side angle test showed a higher horizontal force than the standard bucket. This was
because the buckets used in the side angle test had a flat outer surface. The normal
GIII bucket has reinforcement plates along the edge, which add a gap of 20mm to the
rear part of the side plate, see Figure 66. This gap creates the same clearance as an
angled plate, but without adding an angle inside the bucket, which would decrease the
volume and increase friction forces.
Figure 66: Gap from reinforcement plate highlighted.
5.2.11 GET
The GET bucket showed the lowest horizontal force of all buckets in the test. The
explanation for this is seen clearly when the individual design parameters of the GET
bucket are studied. A thin edge has been shown to be the single most important
parameter, and on the GET bucket the edge thickness is 7mm. The side plates continue
on the lip sides and have a sharp angle; this design is shown to be efficient in
Figure 42.
6 Discussion
6.1 Reflections on EDEM
From the beginning the goal was to generate a realistic material. In the first simulation
setup the particles' flow path was the dominating factor in the result. This gave a large
result variation and it was hard to distinguish how individual design parameters
influenced the result. The particles used consisted of 5 spheres, had an edgy shape and
a large size variation. To decrease the influence of the particle flow path on the result
a smoother material was created. The particle shape was changed to the one
shown in Figure 16. This particle consists of only 3 spheres bonded close together; the
particles were also smaller, with a lower size variation.
Even if this new material was not the most realistic one, it was able to produce a
readable result. The same problem appears in reality when buckets are tested: the
particle flow path has a large influence on the result.
6.2 Recommendations
To increase the reliability of the simulations, more accurate settings are recommended,
but this will largely increase the simulation time. By simulating with a shorter
timestep the simulations will be more reliable: multiple runs with the same bucket
will generate the same result and force peaks will be reduced. To compensate for the
fact that different particle flow paths are not generated with this setup, the
recommendation is to create multiple piles with the same kind of particles and outer
size, but with an internal difference in the particle placement in the pile. By simulating
each bucket design in 10 different piles with a shorter timestep, for both the low and the
high setup, the accuracy and reliability of the simulations would increase.
Other improvements are to refine the simulation material and to build a model of
the motion in ADAMS to get a more realistic movement of the bucket.
For future simulations, larger design changes and different bucket sizes are
recommended for study.
Figure 69: Breakout force description.
Engine Power 193hp
Atlas Copco's loaders have four-wheel drive. The engine power is divided between
the drive line and the hydraulic pump. This gives a reduced tractive effort when the
bucket is moved by the hydraulics, e.g. when it is tilted.
Tractive Effort 150kN
The maximal tractive effort for the ST7 machine, measured on a concrete floor with a
full bucket, is 150kN at 2200rpm [19]. In real tests a rough estimation of the maximal
tractive effort during the loading cycle was 120kN (1800rpm). For the relation between
tractive effort and rpm, see Figure 70. An exact tractive effort is hard to calculate
since it depends on the grip of the wheels, the wheel diameter and the amount of
engine power used for the hydraulic pressure.
Figure 70: Tractive effort in kN related to rpm for ST7 (axes: 0-160kN against 1500-2200rpm).
A2 Mining Process
The most appropriate mining process to use in a mine depends on the shape of the
orebody. Since the shapes of orebodies never look exactly the same, the most
effective mining method is individual for each mine.
The different mining methods can be divided into two main groups based on the dip
of the orebody; those two groups are described on the next page. Several additional
variants of both methods also exist.
Abstract
The rotary kiln process for iron ore pelletizing is one of the main methods to upgrade
crude iron ore. The mining company LKAB runs four Grate-Kiln production sites in
northern Sweden, where a grate and a rotary kiln are combined to thermally indurate
the iron ore pellets. The high temperature needed for the process is provided by
combustion of coal with a high amount of extremely preheated air, which creates an
atmosphere inside the furnace of which the present theoretical understanding is low.
So far, the high amount of excess air (λ = 5-6) has made standard NOx mitigation
strategies in the rotary kiln unsuitable. Environmental issues and the need for fuel
flexibility have enticed LKAB to carry out experimental campaigns in a test facility to
characterize the combustion process. The results of the experimental campaign of
2013 and previous campaigns are reviewed in the present work. The measurement
results were evaluated through gas-phase chemistry modelling with a detailed
chemical reaction scheme. The evaluation of the 2013 experimental campaign
suggests measurement problems for the temperature and the combustion behaviour
inside the test furnace. Gas and oil flames were shown to combust almost
instantaneously within the first centimetres after the burner. Biomass and coal
combusted significantly slower, but also had the highest reaction intensity close to the
burner inlet. Measured exhaust NOx levels could not be achieved in the model with the
measured temperature, and the modelling results proposed peak temperatures more
than 500°C above the measured temperature inside the kiln for oil and gas combustion.
For oil and gas it was found that thermal-NOx is the major contributor to the NOx
formation inside the pilot scale kiln. The lower NOx emissions for coal and biomass
were explained by lower temperatures inside the kiln and the relation between fuel-N
and thermal-N. Modelling fuel-bound nitrogen for the solid fuels showed that NOx is
formed there in similar amounts via both the fuel-NOx and thermal-NOx formation
routes. By comparing the high NOx levels of the experimental unit to the lower levels
in the full scale plant, it was concluded that the NOx formation differs between them,
as in the experimental unit significantly higher temperatures prevail for all fuels. This
showed that this facility does not resemble the actual kiln adequately.
Key words: NOx formation, iron ore pelletizing, high temperature combustion, gas-
phase chemistry combustion modelling, gas, oil and coal combustion
Preface
This is the public version of the work on NOx formation in a rotary kiln test facility.
In this study, modelling work on the NOx formation in the facility is carried out. The
modelling is based on confidential experimental data, which is not presented in this
work. The thesis is part of a research collaboration between Chalmers University of
Technology and the iron ore company LKAB in the field of emission formation. The
experimental campaign was conducted in autumn 2013 at LKAB's test facility in
Luleå, Sweden. The project is carried out at the Department of Energy and
Environment, Energy Technology, Chalmers University of Technology, Sweden.
The modelling part of the thesis was written with the guidance of Assistant Professor
Fredrik Normann as supervisor and main source of information about the NOx
formation and its modelling, and Associate Professor Klas Andersson as examiner and
extended source of knowledge about combustion phenomena. Daniel Bäckstrom, a PhD
student at Chalmers, assisted with information about the experimental campaign and
his observations of radiative heat transfer inside the experimental kiln. The experiments
were carried out by the staff of the experimental facility of Swerea MEFOS under the
supervision of Christian Fredriksson.
Finally, I would like to thank these people for their steady support during the thesis
and their availability and willingness to discuss problems occurring in the course of the
work.
Göteborg, June 2014
Johannes Haus
1. Introduction
1.1. Background
The demand for iron ore has become an indicator for the economic development of
countries around the globe. Especially the developing countries’ demand for iron ore
continues unabated and the global steel production hits new records annually [1].
Thus, it is foreseen that the iron ore mining and processing in Kiruna, northern
Sweden, will be economically feasible for the coming decades.
The state-owned company Luossavaara-Kiirunavaara Aktiebolag (LKAB) processes
the iron ore after extraction on-site in the Kiruna area into iron ore pellets.
The thermal part of the process, where the iron ore is sintered to pellets, requires very
high temperatures and has to be carried out in special kilns. One of the equipment
options is a combination of a straight grate and a rotary kiln, where the pellets are
heated, dried and oxidized by extremely preheated air and the combustion of coal.
The emission of carbon dioxide (CO2) from the use of coal in the iron ore processing
is, because of its contribution to global warming, of current concern for the iron-
producing industry. This is why LKAB and Chalmers University of Technology have
an on-going research project that aims to increase the fuel flexibility in the iron
production to decrease fuel costs and emissions of CO2. The choice of fuel has a
major impact on the combustion situation and, with it, the formation of other
important pollutants, like nitrogen oxides (NOx).
The emission of NOx is a source of acid rain and smog [2], which is the reason why
the formation of nitrogen oxides is a focus point of the research on LKAB's rotary
kilns. In this kind of large scale facility, both primary measures, like adjustments of
the combustion process and burner configuration, and secondary measures, like
SNCR and SCR facilities, can be applied to decrease the NOx exhaust to desired
levels. Primary measures usually have an economical advantage over secondary ones
as they do not need extensive new equipment and/or have no additional consumption
of substances.
The burner configuration in a rotary kiln for iron ore production is relatively unique,
with extreme air excess and air preheating, and the fundamental knowledge about its
combustion behaviour is therefore limited. As NOx formation depends heavily on the
combustion situation, this fundamental knowledge about the reaction conditions is
needed to optimize the operation of the LKAB rotary kiln for low NOx levels, without
compromising the conditions required for pellet production.
Within the next years, the combustion and heat transfer conditions related to the choice
of fuel in rotary kilns will be investigated within the research cooperation of LKAB and
Chalmers University of Technology; these processes will be influenced by the
combustion conditions and the burner arrangement. The iron making process itself
may also influence the emission formation and the radiative heat transfer conditions,
for example by means of heat release in the production process as mentioned above.
For a better understanding of the combustion inside kilns, experiments are performed
and evaluated and models of the operation are derived. During fall 2013 a first
measurement campaign dedicated to fuel flexibility was performed in LKAB's
400 kWth experimental test facility that resembles the conditions in the full scale
rotary kiln, called KK2. The campaign generated experimental data on NOx formation
and flame radiation from different fuels, including coal, gas and biomass. Together
with field data from LKAB's production facilities from previous measurement
campaigns during the years 2008-2012 these measurements are the basis of the
evaluation of combustion, emission formation and heat transfer issues of interest in
the rotary kiln.
1.2. Aim and scope
The purpose of this work is to characterize the NOx formation in rotary kilns by
evaluating the experimental data from the 2013 measurement campaign in the
rotary kiln test facility, the Experimental Combustion Furnace (ECF).
Furthermore, modelling is carried out to give a deeper insight into the combustion
chemistry of the fuels investigated. The aim is to map the NOx formation during
combustion with different fuels and temperature conditions in the rotary kiln. The
results from the ECF are also applied to discuss the emission formation in the full
scale unit.
With the help of the experimental evaluation and the modelling work, focus areas and
improvements should be proposed for the upcoming experimental campaign during
2014 as well as for the continued modelling work of the process.
2. Iron ore processing
There are several alternatives and routes to process and use mined iron ore and which
route is chosen depends on a variety of factors, for example the needed quality, the
grade of the mined iron ore or its later purpose in the steel industry. As indicated in
Figure 2-1, three main refined products can be found in the iron ore industry, the
rather unprocessed lump ore, the iron ore sinter and the iron ore pellets.
Despite its long history, iron ore pelletizing did not become commercial before the
Second World War. The development of the pelletizing process was mainly driven by
the desire to use lower grade iron ore resources, and the pelletizing enriches the iron
content to the requirements of blast furnaces [3].
The iron ore that is mined by LKAB in the world's two largest underground iron ore
mines in Kiruna and Malmberget [4] is mostly processed to pellets, despite the fact
that the iron ore from the ore body in Kiruna, unlike most other resources in the world,
has a high iron content [5]. The reason for still using this more intensive
processing route is that it is much easier to transport the iron after the pelletizing
process compared to shipping the lump iron directly. Furthermore, iron ore pellets
have proven to be advantageous in the steelmaking process due to increased
permeability for air compared to normal sinter. The possible pathways of iron ore
processing are shown in Figure 2-1.
[Figure 2-1 flowchart: mined iron ore enters preparation (crushing, screening).
High grade ore becomes sized lump or, with additives, sinter for the blast furnace,
which yields pig iron. Low grade ore is beneficiated (tailing removed, concentrate
ground) and pelletized with additives to oxide pellets, which are reduced either
directly with gas to sponge iron or in the blast furnace to pig iron.]
Figure 2-1: Iron ore processing pathways [3]. Highlighted green are the processes taking
place at LKAB sites in Kiruna described in this work. Also sintering of the fines takes place in
Kiruna, but on a much smaller scale (blue).
As presented in Figure 2-1, the iron ore from Kiruna is crushed and sorted directly
after mining. Then the iron ore is concentrated and phosphorus is removed before it is
fed into the pelletizing drums. In the drums, additives, like binders and substances for
the later processing, are added. In this process the so called green pellets are formed.
The last step of processing is the thermal treatment of the green pellets, where the
green pellets are fed through a combination of a straight grate and a rotary kiln
sintering unit, together called the Grate-Kiln thermal process. Here the ore is first
preheated and partly oxidized on the travelling grate, and then moved on to the
rotary kiln, where the different layers of pellets are mixed and nearly completely
oxidized at very high temperatures. This work focuses on the combustion inside the
Grate-Kiln processing step, which is further described below.
For completeness, it has to be mentioned that a small part of the iron ore mined in
Malmberget is processed to fines, shown by the blue boxes in Figure 2-1. The
remaining iron ore from Malmberget is also transformed into pellets, but in a single
straight grate process without a rotary kiln. The processed iron pellets and fines are
transported to the harbours in Narvik, Norway and Luleå, Sweden, from where they
are shipped to LKAB's customers around the world [6].
2.1. Grate-Kiln process
The Grate-Kiln process is illustrated in Figure 2-2. The green pellets enter the drying
stage and pass through the processes of drying and dehydration on the travelling grate.
The heat is added by both updraft and downdraft air. The heaters are arranged to
facilitate the dehydration by removing the evaporated water and to minimize thermal
exposure of the travelling grate. The oxidation starts before the rotary kiln, but is
unevenly distributed within the pellet layer: the upper pellets are oxidized to a higher
extent than the pellets lying closer to the grate.
The green pellets have to be thermally processed to withstand the forces that will act
on them during transportation. The resilience of the pellets is a key design factor for
both the green and the final pellets; for example, the height of the pellet bed entering
the travelling grate is limited so that the weight of the upper layers does not crush the
pellets below. But overly strong green pellets are disadvantageous during oxidization,
and therefore the green pellet quality is continuously measured.
The uneven distribution of oxidization leads to the need to mix the pellet layers to
provide a homogeneous pellet quality. This mixing can be achieved in the
rotating kiln. Inside the kiln the pellets are heated up to the 1250°C to 1300°C
necessary for oxidization. Fuel is combusted via an annular burner with highly
preheated air of 1150°C that is inserted above and below the burner to reach the
required temperatures in the pellet bed [7]. Since more than 1250°C must be provided
in the bed, it can be assumed that the combustion zone reaches a significantly higher
temperature.
The preheated air stems from the cooling demand of the finished pellets. It is fed
completely back into the system and corresponds to around six times the air amount
actually needed for complete combustion of the fuel. This environment of extreme
oxygen abundance is needed to convert the pellets during their oxidation. The
oxidation of the pellets itself provides around 60% of the total energy demand.
Exiting the kiln, the pellets fall into the circular cooler where they are exposed to the
cooling air; the oxidation continues even in the cooler. After enormous amounts of air
have cooled them down sufficiently, to around 200°C, they can be discharged safely
[8]. The cooling air is fed as combustion air to the burner but also to the preheating of
the pellets.
3. Theory
This work mainly focuses on the evaluation of the NOx formation inside the
Experimental Combustion Furnace. NOx can be traced mainly to three different
formation mechanisms, which are described within this chapter. Afterwards the
background of the NOx modelling with a detailed chemical reaction scheme is
explained.
3.1. NOx formation
During the combustion process, pollutants like sulphur dioxide (SO2) and nitrogen
oxides (NO and N2O) may form. These pollutants are an undisputed cause of smog
and ozone in cities, and nitrogen oxides can furthermore cause acid rain [2].
The formation of nitrogen oxides may be divided into three main mechanisms. The
first one is the release of nitrogen from the fuel itself, thus called fuel-N. The second
mechanism, thermal-NO formation, is, as the name implies, triggered by very high
temperatures that cause nitrogen in the air to react. The last mechanism often
described is prompt-NO, which covers the formation of NOx from the nitrogen in the
air by the attack of hydrocarbon radicals. Warnatz et al. also describe a fourth
mechanism under lean combustion conditions, similar to the thermal nitrogen oxide
formation: the N2O mechanism [2]. Since an additional molecule has to be present
there, it is classified as a so-called third-body reaction.
FUEL-NO
For the generation of NO from fuel, there must be nitrogen present in the fuel. This
aspect can therefore be disregarded for the fuel oil and gas, and is mainly important
for the NOx formation during solid fuel combustion.
Fuel-N is released during the devolatilization, mostly in the form of NH3 and HCN
[9], which react further to NO or N2 depending on the combustion situation. The
fuel-N released during devolatilization is called volatile-N. The nitrogen remaining in
the coal particles is called char-N and can also react towards NOx. A simplified
pathway for fuel-NO is shown in Figure 3-1. Whether the intermediate species react
further to NO or N2 depends on different parameters, like the oxygen presence and
the temperature [10].
[Figure 3-1 scheme: fuel nitrogen → HCN, HNCO, NCO, CN, NH3, NH2, NH, N → NO or N2]
Figure 3-1: Pathways for fuel-N from Gardiner [11]. The final conversion to N2 or NO is dependent
on the combustion environment inside the furnace.
THERMAL-NO
The literature on thermal-NO formation consistently refers to the extended Zeldovich
mechanism from 1946 [12]. As stated there, three main reactions are responsible for
the formation of NOx emissions. The reactions show the importance of oxygen,
which has to be present to form NOx emissions.
O + N2 ↔ NO + N (1)
N + O2 ↔ NO + O (2)
N + OH ↔ NO + H (3)
The reactions can occur in both directions and are mainly dependent on the O2
concentration and the temperature. The literature mostly states that below
1700-1800 K this formation mechanism is insignificant. Hence, strategies to avoid
thermal-NO focus mainly on reducing the O2 concentration and diminishing
temperature peaks over 1700 K [11].
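The steep temperature dependence behind this 1700-1800 K threshold can be illustrated with the forward rate of reaction (1). Below is a minimal sketch; the Arrhenius parameters are assumed, commonly tabulated values for the Zeldovich mechanism, not values taken from this work:

```python
import math

def k1_forward(T):
    """Forward rate constant of O + N2 -> NO + N (m^3 mol^-1 s^-1).

    Illustrative Arrhenius parameters as commonly tabulated for the
    extended Zeldovich mechanism; treat the absolute value as an
    assumption for demonstration purposes.
    """
    return 1.8e8 * math.exp(-38370.0 / T)

# Relative thermal-NO formation rate (proportional to k1*[O][N2]) at two
# temperatures, assuming comparable radical and N2 concentrations:
ratio = k1_forward(2000.0) / k1_forward(1600.0)
print(f"rate increase from 1600 K to 2000 K: ~{ratio:.0f}x")
```

A 400 K increase raises the rate by roughly two orders of magnitude, which is why the mechanism is negligible at moderate temperatures but dominant above the threshold.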
PROMPT-NO
The last of the three most important NOx formation pathways was first discovered by
Fenimore [13], who tried to find an explanation for NOx formation that could not be
explained by the thermal-NO mechanism. He found that NOx is formed early during
combustion, when the temperatures necessary for thermal-NO are not met. The
reason for this formation is the attack of hydrocarbon radicals on nitrogen in the
combustion air. The main reaction for prompt-NO formation is given in Gardiner
[11] as:

CH + N2 ↔ HCN + N (4)

The same intermediate as in fuel-NO, hydrogen cyanide, is formed together with a
nitrogen radical; both can react further to NO as indicated in Figure 3-1.
3.2. NOx Modelling
Reaction mechanism
The main reactions of NOx formation have been described, but it has to be noted that
the global combustion and formation mechanisms take place via a vast number of
chemical reactions and intermediate products. This is the reason why detailed
chemical reaction schemes are developed in various research groups that cover the
NOx formation in more detail than the global reactions in the previous section.
The set of equations used in this work was most recently revised by Mendiara and
Glarborg [14] and goes back to the 1981 work of Miller et al. with the title "A
Chemical Kinetic Model for the Selective Reduction of Ammonia" [15]. The earlier
mechanism was extended and validated in different environments and now consists
of over 700 chemical reactions. Research contributing to this set of equations
includes Skreiberg et al. [16], who examined the reaction path of ammonia, and
Dagaut et al. [17], who examined the hydrogen cyanide chemistry for solid fuel
combustion. A more detailed description and evolution of the mechanism used can be
found in various theses within the department [18].
Reactor model
The set of equations is then implemented into a reactor model that best suits the
conditions in the furnace. Two simplistic reactor models are commonly used in the
field of combustion engineering: the 0-dimensional Continuously Stirred Reactor
(CSR) and the 1-dimensional Plug Flow Reactor (PFR). The CSR model is not
applicable here, as it approximates the chemical reactions over a control volume at a
steady temperature, whereas a detailed analysis of the chemistry at different
temperatures inside the reactor is the main aim of this work. This is why a PFR model
is applied, which can "follow the progress of combustion in a system" [19]. In the
PFR shown in Figure 3-2, the chemical reactions are calculated at different distances
x from the reactor inlet.
[Figure 3-2: plug flow reactor with the concentration C_XY(x) evolving from
C_XY(0) at the inlet (x = 0) to C_XY(L) at the outlet (x = L).]
Figure 3-2: Plug flow reactor model [19].
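The PFR concept can be sketched numerically by marching a concentration along the reactor coordinate. The first-order decay used here is a hypothetical stand-in for the detailed chemistry, chosen because it has an analytic solution against which the integration can be checked:

```python
import math

def plug_flow_profile(c_in, k, u, length, n=1000):
    """March dC/dx = -(k/u) * C along the reactor with explicit Euler.

    c_in: inlet concentration, k: first-order rate constant (1/s),
    u: plug velocity (m/s), length: reactor length (m).
    Returns the concentration at n+1 axial positions from 0 to length.
    """
    dx = length / n
    c = c_in
    profile = [c]
    for _ in range(n):
        c += -(k / u) * c * dx  # Euler step along the axial coordinate
        profile.append(c)
    return profile

# Hypothetical numbers: 12 m reactor (the ECF length), 4 m/s plug velocity.
profile = plug_flow_profile(c_in=1.0, k=2.0, u=4.0, length=12.0)
# Analytic solution for comparison: C(L) = C(0) * exp(-k*L/u)
print(profile[-1], math.exp(-2.0 * 12.0 / 4.0))
```

In the actual modelling, the single decay rate is replaced by the full set of over 700 reactions evaluated at each axial step.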
Combustion modelling
The focus of this work is the formation of NOx in the furnace, so the combustion
itself is modelled with methane as fuel. The simplistic stoichiometric reaction is:

CH4 + 2 O2 → CO2 + 2 H2O (5)
Taking into account all species present during combustion, i.e. the nitrogen and
excess oxygen in the combustion air, the overall reaction is:

CH4 + 2λ (O2 + 3.76 N2) → CO2 + 2 H2O + 2(λ – 1) O2 + 7.52λ N2 (6)
It was described above that enormous amounts of air enter the Grate-Kiln because of
the recuperation of the heat in the cooler, so a high value of lambda is reached.
Looking at reaction (6) it becomes clear that at high values of lambda the molar
amount of fuel becomes small compared to nitrogen and oxygen, and in this way the
nitrogen chemistry becomes predominant.
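Reaction (6) can be evaluated directly to see this dilution effect. A short sketch of the product mole fractions per mol of CH4; note that at the roughly six-fold air excess mentioned above, the flue gas holds about 17% O2:

```python
def flue_composition(lam):
    """Mole fractions of the flue gas from reaction (6), per mol CH4."""
    n = {"CO2": 1.0, "H2O": 2.0,
         "O2": 2.0 * (lam - 1.0), "N2": 7.52 * lam}
    total = sum(n.values())
    return {species: moles / total for species, moles in n.items()}

# Stoichiometric combustion versus the ~six-fold air excess in the kiln:
for lam in (1.0, 6.0):
    comp = flue_composition(lam)
    print(lam, {s: round(x, 4) for s, x in comp.items()})
```

At λ = 6 the CO2 fraction drops below 2% while N2 and O2 make up around 95% of the gas, which is why the nitrogen chemistry dominates.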
4. Methodology
This work is divided into two parts. The first part is an evaluation of an experimental
campaign devoted to NOx formation (the campaign itself is not part of this work).
This part also includes a review of key findings from earlier experimental campaigns.
The experimental setup is explained within the methodology chapter. This evaluation
and its results form the framework for the second part, the modelling work, which
aims to explain in detail the NOx formation and its underlying chemistry inside the
ECF.
4.1. Previous experimental campaigns
Before the 2013 measurement campaign, other measurement campaigns devoted
solely to NOx reduction techniques were carried out during 2007-2012. These
experimental campaigns were conducted in different combustion furnaces: on lab
scale, in the pilot scale furnace ECF, and in the full scale rotary kiln KK2. The main
findings of these campaigns are included to support the evaluation of the recent
campaign.
4.2. General experimental setup
To investigate combustion and reaction behaviour during Grate-Kiln combustion in
the pelletizing plant, a downscaled test unit resembling the kiln was deployed in
2007. This experimental furnace is called the Experimental Combustion Furnace
(ECF) and is placed in Luleå, Sweden. The furnace has a total length of around 12
meters and an inner diameter of 0.8 m. The main combustion air is fed above and
below the burner as shown in Figure 4-1. The fuel is inserted with a centrally
mounted burner.
In contrast to the real KK2, the ECF does not rotate, which facilitates the
measurements that can be carried out at different ports along the side of the reactor.
Measurement tools, like temperature probes and a gas composition analyser, can be
inserted directly into the combustion zone. It is thus possible to create a radial profile
of the gas composition and temperature inside the furnace.
The inlet conditions (fuel, air amount and temperature) were measured as well as the
outlet condition and composition of the gases leaving the furnace after 12 m. To
create a profile from the temperature and the species, data points along the furnace
cross section were recorded.
[Figure 4-1 sketch: burner at the left with AIR inlets above and below; measurement
ports 1-4 along the furnace, with the values 200, 700, 1200 and 1700 marking the
port distances from the burner.]
Figure 4-1: Outside side view of the ECF. The measurement ports and their distance to the centrally
mounted burner are indicated. Below and above the burner, the inlets for the secondary air are
sketched.
4.3. Measurement technique
For the measurements at the ports shown in Figure 4-1, the measurement probe
presented in Figure 4-2 was inserted into the ports. Starting at 0 cm from the wall,
the probe moved towards the middle at 40 cm, and the last data point was taken at the
opposite wall at 80 cm. The relevant species CO, CO2, O2, NOx and SO2 plus the
temperatures were measured inside the furnace and at the furnace outlet, to create a
radial profile inside the furnace and also obtain the outlet conditions. These
measurements are the basis for the interpretation of the combustion chemistry during
the campaign.
A crucial factor in conducting temperature measurements with the suction probe is
the gas inlet velocity. Heitor and Moreira describe effects that can arise when suction
pyrometers are not used properly and the probe inlet velocity is too low [20]. In their
work, for a real flame temperature of 1000°C, a 230°C lower temperature was
measured due to a faulty inlet velocity.
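The size of such a bias can be estimated from a steady-state energy balance on the probe tip, in which convective heating from the gas offsets radiative loss to the colder walls. All numerical values below (heat transfer coefficient, emissivity, wall temperature) are illustrative assumptions, not ECF data; with a moderate suction velocity the resulting error is of the same order as the 230°C reported by Heitor and Moreira:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_error(T_probe, T_wall, emissivity, h):
    """Gas-minus-probe temperature from h*(Tg - Tp) = eps*sigma*(Tp^4 - Tw^4).

    Temperatures in kelvin, h in W m^-2 K^-1. The parameter values fed
    in below are illustrative assumptions only.
    """
    return emissivity * SIGMA * (T_probe**4 - T_wall**4) / h

# A probe reading 1273 K near 800 K walls, with moderate suction
# (h ~ 500 W/m2K) and an oxidized-metal emissivity of 0.8:
err = temperature_error(1273.0, 800.0, 0.8, 500.0)
print(f"gas is ~{err:.0f} K hotter than the probe reads")
```

Raising the suction velocity raises h, which directly shrinks the error; this is the rationale for the inlet velocity requirement of suction pyrometers.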
Measurements of the radiative heat transfer, which took place at the same time and
are not part of this thesis, showed that the radiation intensity points to temperature
peaks above 1800°C for the solid as well as the liquid and gaseous fuels.
Figure 4-2: Sampling with a measurement probe inside the furnace.
4.4. Experimental campaign 2013
The 2013 campaign was dedicated to different research needs with respect to fuel
flexibility. Two aspects that were examined during the campaign but are not part of
this thesis are the heat radiation of different fuels inside the combustion furnace and
the examination of slag build-up inside the ECF. The main topic of this thesis is the
measurement of the nitrogen oxides formed during the combustion inside the
furnace. The basic experimental order was the following:
a) A fuel was introduced to the furnace and fired for about 6 hours to reach a
steady combustion state.
b) Then the composition and temperature of the gases inside the furnace were
measured at the different ports.
c) After that, the radiative heat transfer was measured inside the furnace at
different ports.
d) The formation of slag was examined last.
e) The fuel was changed and steps a)-d) were started again.
For the coal measurements, all the measurements were carried out at ports 1, 2 and 4
as presented in Figure 4-1. For the natural gas and oil combustion, only ports 2 and 4
were used. Furthermore, at port 1 only positions inside and close to the flame were
measured.
In conclusion, the same fuels and fuel blends were examined across the various
research projects for their different research purposes. A complete list of the
examined fuels is given in Table 4-1, where also the different burners used in the
experiments are indicated. During the coal firing and the first biomass co-firing,
labelled C)-E) in the table, the reference burner was used, which injected a pure
stream of coal or a premix of the fuels. For the other co-firings, F)-K), a burner was
used that injected the fuels separately, originally called "kombibrännare" or
combined burner. All the solid fuels were milled before combustion, the resulting
biomass particles being much coarser than the milled coal.
Table 4-1: The fuels used in the experimental campaign, in chronological order of the
experiments. The percentages of the composition represent the share of the fuel effect in kW.

    Type                    Name                            Composition (fuel effect)  Burner
A   Oil                     Eldningsolja 5 (heavy fuel oil) 100% Oil                   Oil burner
B   Gas                     Natural Gas                     100% NG                    Gas burner
C   Coal                    Coal 1                          100% Coal                  Reference axial burner
D   Coal                    Coal 2                          100% Coal                  Reference axial burner
E   Coal-Biomass co-firing  Coal 1 and Biomass 1            70% Coal / 30% Biomass     Reference axial burner
F   Coal-Biomass co-firing  Coal 1 and Biomass 1            90% Coal / 10% Biomass     Turbojet Kombiburner
G   Coal-Biomass co-firing  Coal 1 and Biomass 1            70% Coal / 30% Biomass     Turbojet Kombiburner
H   Coal-Biomass co-firing  Coal 1 and Biomass 2            90% Coal / 10% Biomass     Turbojet Kombiburner
I   Coal-Biomass co-firing  Coal 1 and Biomass 2            70% Coal / 30% Biomass     Turbojet Kombiburner
J   Coal-Biomass co-firing  Coal 1 and Biomass 3            90% Coal / 10% Biomass     Turbojet Kombiburner
K   Coal-Biomass co-firing  Coal 1 and Biomass 3            70% Coal / 30% Biomass     Turbojet Kombiburner
4.5. CHEMKIN simulations
Modelling of the gas-phase NOx formation inside the furnace was used to interpret
the experiments and analyse the chemical reactions taking place, as well as the
overall combustion situation. The experimental data from the ECF was utilized as
input data for the modelling work. The furnace was modelled as a plug flow reactor
(PFR), shown schematically in Figure 4-3. The combustion and nitrogen chemistry
were described by the detailed chemical reaction scheme proposed by Mendiara and
Glarborg [14].
With the help of simplified mixing and temperature data, the detailed reaction
scheme calculates the underlying chemical reactions inside the ECF. The furnace
shown in Figure 4-3 can be interpreted as a reacting zone that extends from a small
area, where only part of the injected air is mixed with the fuel and later with the
combustion gases, to the complete flow at the outlet. The numbers indicate the
measurements which were used as input data. This approach has been used
previously by, for example, Normann [18].
[Figure 4-3 sketch: fuel stream entering the modelled (white) zone, with the oxidizer
mixed in along the furnace; measurement positions are marked.]
Figure 4-3: Experimental data at positions 1, 2, 3 and 4 are used for developing a mixing
profile of fuel and oxidizer. The white area is modelled, and the oxidizer enters the modelled
zone along the course of the furnace.
Mixing conditions for the simulations
The idea of using a mixing profile is to implement the effects of mixing without
knowing the detailed mixing behaviour and fluid flow, which would require a
comprehensive analysis of the flow with CFD tools. From the experimental results, a
main combustion zone can be identified where the combustion species CO, O2, CO2
and NOx are present in higher concentrations; a higher temperature than in the outer
zone is also found there. Hence, the main combustion zone and the measured species
O2, CO2 and NOx inside it are the basis for the development of the mixing profile.
For the modelling of natural gas and oil combustion, the mixing profile shown in
Figure 4-4 was used. This profile was derived to match the measured species from
the experimental data. Delaying effects during solid fuel combustion, and with them
the fuel-N release, required different fuel and oxidizer mixing profiles, described in
the section further below and in Figure 4-9.
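Such a mixing profile can be implemented as a simple piecewise-linear lookup of the cumulative air mass entrained into the reacting zone. The breakpoints below are hypothetical placeholders, not the actual profile of Figure 4-4; only the 740 g/s total matches the stated combustion air flow:

```python
from bisect import bisect_right

# Hypothetical breakpoints: (distance from burner [m], entrained air [g/s]).
# Only the 740 g/s total reflects the stated combustion air amount.
MIXING = [(0.0, 50.0), (0.5, 200.0), (2.0, 500.0), (12.0, 740.0)]

def entrained_air(x):
    """Linearly interpolate the entrained-air mass flow at distance x."""
    xs = [point[0] for point in MIXING]
    i = bisect_right(xs, x) - 1
    if i >= len(MIXING) - 1:
        return MIXING[-1][1]  # fully mixed beyond the last breakpoint
    (x0, m0), (x1, m1) = MIXING[i], MIXING[i + 1]
    return m0 + (m1 - m0) * (x - x0) / (x1 - x0)

print(entrained_air(1.0))  # between the 0.5 m and 2.0 m breakpoints
```

In the simulations, the reactor queries this profile at each axial step to decide how much oxidizer has joined the reacting flow.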
Figure 4-4: Mixing profile used for modelling based on experimental species concentrations
during oil and gas combustion. The combustion air amount totals 740 g/s.
Temperature data for the simulations
After having developed the mixing behaviour inside the reactor, temperature
information has to be fed into the model to obtain accurate simulation results of the
NOx formation inside the furnace. Four approaches to determine the temperature
profile of the ECF have been used throughout the investigation.
A) Measured temperature profiles
The temperature profiles measured during the experiments were used directly in the
chemical reaction modelling. The measured profiles are shown in Figure 4-5. The
coal and co-firing profiles were also examined, to clarify whether the reactions
follow a simplistic gas-phase modelling approach.
[Figure 4-5 legend: Oil; Gas; Coal 1; Coal 1 and Biomass 1.]
Figure 4-5: Measured temperature profiles used for the simulations and validation of the measurement
results.
B) Adiabatic combustion simulations
An adiabatic combustion temperature profile was used as a theoretical maximum and
to create a picture of a possible shape of the temperature distribution inside the
furnace. For this simulation, only the inlet temperatures and a mixing behaviour of
fuel and oxidizer have to be specified, and from there the resulting temperature
conditions can be calculated.
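The order of magnitude of such an adiabatic calculation can be reproduced with a constant-heat-capacity balance over reaction (6). The heating value and mean cp are rounded textbook values, and treating cp as constant is a deliberate simplification; even so, the estimate shows a combustion zone well above the 1250-1300°C bed requirement despite the six-fold air excess:

```python
LHV_CH4 = 802_300.0  # J per mol CH4, lower heating value (rounded)
CP_MEAN = 33.0       # J mol^-1 K^-1, assumed mean cp of the hot flue gas

def adiabatic_temperature(T_in, lam):
    """Adiabatic flame temperature for reaction (6) at air ratio lam.

    Product moles per mol CH4: CO2 + H2O + excess O2 + N2.
    Constant-cp approximation; all values are illustrative.
    """
    n_products = 1.0 + 2.0 + 2.0 * (lam - 1.0) + 7.52 * lam
    return T_in + LHV_CH4 / (n_products * CP_MEAN)

# Preheated air at 1150 C (1423 K) and roughly six-fold air excess:
print(adiabatic_temperature(1423.0, 6.0))
```

The result, around 1840 K (roughly 1570°C), illustrates why the adiabatic profile serves only as a theoretical maximum: wall losses pull the real profile below it.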
C) Energy balances
Another temperature profile was determined by combining the adiabatic results with
energy observations. This approach included an energy balance over the furnace to
determine how much energy is lost to the walls. Then, similar to the adiabatic
combustion simulation, a simulation can be set up, but now taking the losses from the
combustion zone to the walls into account.
The energy balance was based on the measured temperatures to average the enthalpy
of the gases at the ports. Even though the temperatures at the ports might not be
totally accurate, the inlet and outlet enthalpy levels are specified accurately, and
hence the overall heat losses give a realistic picture of the situation. Over the whole
furnace the composition was measured on a dry basis, and hence the total enthalpies
were calculated on the basis of N2, O2 and CO2 [21].
In Figure 4-6, a general overview with all the energy related processes is shown. Air
and fuel are injected with a certain energy into the furnace, and the fuel is combusted
to add additional heat to the inlet enthalpy level. With the temperature and species
data from the flue gases, the enthalpy levels at different distances from the burner can
be calculated. The differences between the ports can be assumed to be convective
and radiative heat losses. The amount of energy lost between two measurement
locations is denoted ΔH in Figure 4-6 and is the basis of the heat losses applied in the
simulations.
Figure 4-6: Schematic view of the furnace with the inlet streams and the main phenomena having an
effect on the energy level inside.
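A sketch of this port-to-port balance, assuming a dry flue gas of N2, O2 and CO2 with constant molar heat capacities. The heat capacities are rounded values, and the flow, composition and port temperatures are placeholders, not campaign data:

```python
# Rounded mean molar heat capacities, J mol^-1 K^-1 (assumed constant).
CP = {"N2": 31.0, "O2": 33.0, "CO2": 50.0}

def sensible_enthalpy(flow_mol_s, fractions, T, T_ref=298.15):
    """Sensible enthalpy flow (W) of a gas mixture above T_ref (K)."""
    cp_mix = sum(fractions[s] * CP[s] for s in fractions)
    return flow_mol_s * cp_mix * (T - T_ref)

flow = 26.9                                 # mol/s, placeholder flue flow
x = {"N2": 0.79, "O2": 0.17, "CO2": 0.04}   # placeholder dry composition

# Heat lost to the walls between two ports = enthalpy flow difference:
dH = sensible_enthalpy(flow, x, 1600.0) - sensible_enthalpy(flow, x, 1400.0)
print(f"heat loss between ports: {dH / 1000:.0f} kW")
```

The ΔH obtained this way for each pair of ports is what the simulations withdraw from the reacting flow in place of the adiabatic assumption.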
D) Fitting of the temperature profile to the measurement data
The final temperature profile was derived by fitting the calculated NOx emissions to
the measured ones. An idealized temperature profile, shown in Figure 4-7, was used
to cover the temperature situation in the furnace. The peak temperature and its
position were changed in such a way that the detailed chemical reaction scheme
reproduces the measured NOx levels.
The fitting will help to interpret the validity of the measurement results and give
insight into how the experimental setup can be improved for the upcoming
campaigns.
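The fitting step can be sketched as a one-dimensional search in which the peak temperature is adjusted until the modelled outlet NOx matches the measurement. The exponential toy model standing in for the detailed reaction scheme is purely hypothetical, chosen only because it is monotonic in temperature, as thermal-NO dominated formation is:

```python
import math

def fitted_peak_temperature(model, target_ppm, lo, hi, tol=0.5):
    """Bisect on the peak temperature until model(T) matches target_ppm.

    Assumes model(T) increases monotonically with T over [lo, hi].
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model(mid) < target_ppm:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical stand-in for the detailed reaction scheme:
def toy_nox(T_peak):
    return 1e9 * math.exp(-38000.0 / T_peak)

T_fit = fitted_peak_temperature(toy_nox, target_ppm=250.0, lo=1500.0, hi=2600.0)
print(f"peak temperature matching 250 ppm: {T_fit:.0f} K")
```

In the actual work, each evaluation of `model` corresponds to a full PFR run with the detailed scheme, so the number of iterations matters; bisection keeps it to a few dozen at most.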
Figure 4-7: Idealized temperature profile shape used for fitting and sensitivity analysis.
Sensitivity analyses of the gas-phase NOx formation inside the ECF
Having established a model that covers the measured NOx formation inside the
furnace, a sensitivity analysis is carried out on how the NOx formation varies when
changing the parameters shown in Table 4-3. The examined parameters emerged
during the previous modelling work, where the results seemed to depend strongly on
them.
First, an analysis of the influence of the peak temperature and of the position of the
peak is conducted. Their values are changed separately in the idealized temperature
profile shown in Figure 4-7.
Table 4-3: Sensitivity analysis parameters of the gas-phase model for NOx formation.

Parameter       Range
Temperature     1500 - 2300 °C
Peak position   15 - 30 cm from burner inlet
Mixing          12.5 - 150% of previous mixing
Then, for the purpose of identifying the influence of the mixing behaviour, a new
mixing value at 35 cm from the burner inlet is generated. Figure 4-8 shows how this
new value is changed from the basic mixing presented in Figure 4-4. With an
adiabatic combustion approach, it was investigated which effect the new mixing has
on the position of the main combustion.
These new mixing environments are applied in two different modelling approaches.
The first one involves a fixed peak temperature of 2100°C and a fixed peak location
at 30 cm from the burner, comparable to Figure 4-7. For the second approach, the
information on how the position of the combustion changes with the new mixing is
applied; the temperature peak position is then arranged according to these results,
while keeping the peak temperature at 2050°C.
Figure 4-8: Changed mixing behaviour for the sensitivity analysis, from 12.5 to 150% of the
previously used mixing rate at the distance of 35 cm from the burner inlet.
Modelling the influence of fuel-NO formation inside the furnace
Calculating the molar nitrogen content of the coals gives 0.87 mol-% for Coal 1 and
around 1.5 mol-% for the Russian energy coal. With a molar flow of the first coal of
1.5 mol/s, an air flow of 25.4 mol/s, and the assumption that all fuel-N is converted to
NOx, a total of 517 ppm (at 17% excess O2) of NOx can theoretically be generated
via the fuel-NO route. Keeping in mind that Coal 2 has a higher nitrogen content than
Coal 1 and that the co-firing blends contain at least 70% Coal 1, it can be said that all
the NOx generated during solid fuel combustion, at around 250 ppm, can have its
origin in fuel-N. Because the solid fuels contain nitrogen, in contrast to the liquid and
gaseous fuel experiments, the presence of fuel-bound nitrogen is examined only for
them. Therefore, in the simulation of the influence of fuel-N, a percentage of HCN is
added to the fuel [10].
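The 517 ppm figure can be checked with a back-of-envelope calculation from the molar flows given above, assuming full conversion of fuel-N and dilution in the combustion air. The result lands close to the reported value; the small remaining difference presumably stems from the dry-gas and excess-O2 normalization, which is not reproduced here:

```python
def fuel_no_ceiling_ppm(fuel_flow, n_fraction, air_flow):
    """Upper bound on fuel-NOx in ppm.

    Assumes complete fuel-N -> NO conversion and dilution of the
    released NO in the combustion air flow (all flows in mol/s).
    """
    return fuel_flow * n_fraction / air_flow * 1e6

# Coal 1: 1.5 mol/s fuel, 0.87 mol-% nitrogen, 25.4 mol/s air:
ppm = fuel_no_ceiling_ppm(fuel_flow=1.5, n_fraction=0.0087, air_flow=25.4)
print(f"~{ppm:.0f} ppm theoretical fuel-NOx ceiling")
```

Since the ceiling is about twice the roughly 250 ppm measured for the solid fuels, the observation that all of the solid-fuel NOx could originate from fuel-N is consistent.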
A) Adding an HCN compound to the fuel
In a first approach, a fuel-N compound of 0.87 mol-% (resembling Coal 1) or
1.5 mol-% (resembling Coal 2) is added to the model with the temperature and
mixing profile from before. The influence of this fuel-N on the resulting exhaust
levels of NOx is investigated by using the mixing profile from Figure 4-4 and
different temperature levels in the idealized temperature profile shown in Figure 4-7.
B) Sensitivity of the conversion of HCN to NOx
To investigate whether the previous modelling approach of adding HCN to the fuel
can cover the fuel-N release sufficiently, a sensitivity analysis was carried out with
parameters that portray the behaviour seen during the solid fuel combustion. In
Figure 4-9, new mixing profiles of fuel and oxidizer demonstrate the implementation
of delayed burn-up effects and changed mixing behaviour.
Figure 4-9: Accounting for fuel and oxidizer mixing behaviour for solid fuels.
Different solid fuel combustion effects are also implemented through the new
temperature profiles used in the sensitivity analysis, shown in Figure 4-10. There, a
slightly decreasing slope for co-firing and a more constant slope for coal combustion
are plotted. These results are compared against the simplistic approach of only
adding a fuel-N component, and it is checked which of the new approaches covers
the experimental observations more accurately.
Figure 4-10: Temperature profiles used for the sensitivity analysis of solid fuels.
With the new oxidizer and fuel mixing behaviour, different temperature profiles and
varying amounts of nitrogen in the fuel, the combustion chemistry was modelled and
a sensitivity analysis carried out. Table 4-4 gives a general overview of how the
parameters were changed. The results are compared to a base case with 0.87 mol-%
fuel nitrogen, linear fuel injection, the reference mixing from Figure 4-9 and the
temperature profile T_coal = 1800°C from Figure 4-10. In the graphs it is also
checked how the modelling results reflect the measurement results of the solid fuel
combustion. From this information, a possible share of the fuel nitrogen contribution
to the NOx exhaust levels inside the ECF is elaborated and compared to the full scale
KK2.
Table 4-4: Sensitivity analysis parameters for the fuel-N release investigation.

Parameter        Range
Fuel-N           0 mol-%, 0.87 mol-%, 1.5 mol-%
Temperature      4 different profiles (peaks at 1700°C - 1900°C)
Mixing           Slower and faster mixing
Fuel injection   Linear injection and more intensive start
Model simplifications
Table 4-5 describes the assumptions that were made to simplify the modelling work.
Most of the assumptions have to be made because the detailed chemical combustion
mechanism does not cover the combustion of long chain hydrocarbons and coal.
Furthermore, the detailed mixing and flow behaviour inside the kiln are not fully
known and not easily measurable, and thus have to be approximated by profiles that
comply with the situation between two data points.
Table 4-5: Assumptions to simplify the modelling.

- Natural gas is modelled as methane. Grounds: natural gas consists mainly of methane, and the other short hydrocarbons are considered to decompose quickly into shorter-chained species; this fast decomposition of hydrocarbons is described by Gardiner [11].
- Oil is modelled as methane. Grounds: due to the high temperatures, an immediate evaporation can be assumed. Gardiner describes the burn-up of oil as beginning with a fast decomposition into smaller hydrocarbons, after which it can be compared to methane combustion [11].
- Fuel-N is modelled as HCN. Grounds: a variety of investigations found that fuel-N devolatilizes mainly via NH3 and HCN [10]. Van der Lans et al. summarize that under conditions of high temperature and high λ, HCN is the dominant species released from fuel-N.
- Coal combustion is examined with a gas-phase modelling approach and methane combustion. Grounds: heterogeneous reactions are difficult to model and are still not fully understood [18]. But according to Warnatz et al. [2], the volatiles are mainly CH4, H2, CO and HCN, which react fast. Furthermore, char burn-out takes place via oxidation by gas-phase CO2 and O2 to CO, which also combusts similarly to methane.
- Devolatilization effects are modelled as a fuel input profile. Grounds: even though the volatiles and char might combust fast, devolatilization takes place over a longer course and is thus modelled as a continuous injection of fuel.
- Mixing of fuel and oxidizer is represented via a mixing profile of the air. Grounds: the kiln aerodynamics are difficult to describe precisely, because an exact description would require thorough measurements inside the kiln.
Chalmers University of Technology

5. Modelling NOx formation
The modelling work described in the following part is used to investigate the
measured NOx levels and to explain differences in the combustion phenomena with
the help of the detailed combustion mechanism.
5.1. Gas-phase chemistry analysis of experiments
The experimental results showed a similar behaviour of oil and gas during their
combustion in the ECF, especially when it comes to the NOx formation. Coal and
biomass showed similar outlet levels, but differed in their speed of combustion. The
question raised before was how the NOx levels of gas and oil can be so much higher
than the lower emissions of the solid fuels. This result is especially contradictory
when taking into account the complete lack of fuel-N in natural gas and the very low
nitrogen content of the oil compared to the high amount in coal.
A) Modelled NOx formation with the measured temperatures as methane combustion
The measured temperature profiles were tested first in the modelling. The
simulations with all the measured profiles resulted in NOx outlet concentrations
below 10 ppm for all fuels.
From these modelling results it can be derived that the measured temperatures were
too low for the corresponding NOx levels, because the simulated and measured results
deviate by roughly a factor of one hundred. It is also possible that other effects not
covered by the methane combustion model, such as fuel-NO formation, take place for
the oil, coal and biomass. In general, however, the modelling with the measured
temperatures does not reproduce the NOx formation found in the kiln.
It has to be mentioned that the simplistic mixing profile used could not reproduce
the species measured for coal flames: the profile implies a lack of oxidizer, and hence
of mixing, in the beginning, whereas the experimental results for the coals indicate a
constant presence of oxygen. The mixing and temperatures used could also not
resemble the combustion of the biomass co-firing flames, as these showed measurable
carbon monoxide concentrations throughout the reactor, which do not occur with the
rapid methane combustion modelling approach.
B) Adiabatic combustion model of oil and natural gas

As the measured temperature profiles do not offer a simple explanation of the events
leading to the exhaust composition, the NOx formation needs to be examined in a
more detailed way. To begin analysing the combustion speed, an adiabatic calculation
of the combustion situation is carried out. By feeding the mixing behaviour and the
inlet conditions of the gases into a simulation, a first impression is gathered of the
heating of the fuel and the ignition, as well as of the slope of the temperature profile
described in Figure 5-1. Adiabatic combustion assumes that no heat losses occur
during the process, which is certainly not the case in reality; the heat losses are
covered later.

The temperature peak here is at 2350°C and occurs 22 cm after the burner inlet.
Afterwards the temperature drops sharply, mostly due to the mixing with more and
more combustion air at a temperature of 1100°C.
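The sharp temperature drop after the peak can be illustrated by a simple adiabatic mixing balance: hot combustion products at the 2350°C peak are diluted with combustion air at 1100°C. A sketch assuming, for simplicity, equal and constant specific heats for both streams; the mass flows are hypothetical:

```python
def mixing_temperature(t_hot_c, m_hot, t_air_c, m_air):
    """Mass-weighted mixing temperature in degrees C, assuming equal constant cp
    for both streams and no reaction or heat loss during mixing."""
    return (m_hot * t_hot_c + m_air * t_air_c) / (m_hot + m_air)

# Diluting the 2350 C peak gases with an equal mass of 1100 C combustion air:
print(mixing_temperature(2350.0, 1.0, 1100.0, 1.0))  # 1725.0
```

With realistic cp differences and ongoing heat losses the numbers shift, but the qualitative effect, i.e. a rapid decline towards the air temperature as more air mixes in, remains the same.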
C) Heat losses
The heat losses were first calculated via an energy balance over the whole furnace and
later also between the different measurement locations inside the ECF. To facilitate
these calculations, temperatures of 1400°C at port 2 and 1300°C at port 4 were used
for oil and natural gas. These temperatures are well within the range measured in this
main combustion zone.

The balance over the whole furnace accounted for heat losses of 0.30 kJ/(cm·s). The
more detailed results for the calculations between the ports are shown in Tables 5-1
and 5-2. One can see that also from this point of view oil and gas combustion are very
similar, so only the oil calculation was used.
Table 5-1: Heat losses from the furnace for oil combustion.

Heat loss            Amount [kW]   Distance [cm]   Heat loss per unit length [kW/cm]
ΔH(in, port 2)       91.1          70              1.38
ΔH(port 2, port 4)   93.6          170             0.93
ΔH(port 4, out)      185.0         ~1200           0.18
Table 5-2: Heat losses from the furnace for natural gas combustion.

Heat loss            Amount [kW]   Distance [cm]   Heat loss per unit length [kW/cm]
ΔH(in, port 2)       96.6          70              1.4
ΔH(port 2, port 4)   93.1          170             0.9
ΔH(port 4, out)      184.0         ~1200           0.18
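The per-unit-length values in the tables follow from dividing each enthalpy difference by the length of the corresponding furnace segment, reading the distance column as cumulative positions from the inlet (70 cm, 170 cm, ~1200 cm). A sketch reproducing the natural gas column from Table 5-2:

```python
def heat_loss_per_length(losses_kw, positions_cm):
    """Heat loss per unit length [kW/cm] for each furnace segment.
    losses_kw[i] is the enthalpy loss over the segment ending at positions_cm[i];
    the first segment starts at the inlet (0 cm)."""
    bounds = [0.0] + list(positions_cm)
    return [q / (x1 - x0) for q, x0, x1 in zip(losses_kw, bounds, bounds[1:])]

# Natural gas values from Table 5-2 (positions are cumulative distances):
per_length = heat_loss_per_length([96.6, 93.1, 184.0], [70.0, 170.0, 1200.0])
print([round(q, 2) for q in per_length])  # [1.38, 0.93, 0.18]
```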
Both the constant and the varying heat losses were fed into the modelling to convert
the adiabatic combustion into a non-adiabatic one and to see what effect this has on
the slope of the temperature profile.

Two different shapes of the heat loss distribution were applied in Figure 5-3: a
constant heat loss in the left figure, and heat losses linked to the temperature data
from the measurements in the right figure. The constant heat loss did not change the
peak temperature significantly compared to the calculation without any heat losses.
Hence, this first approach may not cover the radiative heat losses from the flame to
the wall at the burner inlet, which explains the very high peak temperature before the
flame is cooled down to the same levels as in the second case. In the second plot, on
the contrary, the high heat loss from the beginning may delay the ignition too much;
the temperature peak thereby drops to 1950°C.
Figure 5-3: Temperature profiles with different heat loss distributions. A) constant heat losses between
inlet and outlet, B) heat losses linked to the temperature measurements.
NOx formation during combustion
The two temperature profiles obtained with the heat losses were then examined for
their resulting NOx levels and compared to the measurement data for natural gas and
oil shown in Figure 5-5. The plotted experimental data was averaged over the main
combustion zone to match the modelling approach with the mixing profile. The
distance at 400 cm describes the outlet conditions.

The lower model, i.e. the low temperature peak, leads to low NOx formation and an
exhaust of 260 ppm. The higher model with the 2350°C peak seems to capture the
formation pattern quite accurately, but at far too high levels; the temperature there
reaches the same peak as in the adiabatic model. The main NOx formation takes place
before the 70 cm measurement point, presumably in the temperature peak, whose
position was found during the adiabatic analysis. The low NOx formation for the
1950°C flame gives rise to the assumption that the actual flame temperature was
higher than this value, but lower than 2350°C.
D) Fitting the model to the experimental data

With the information from the adiabatic calculations, a simplified temperature profile
inside the furnace was developed and fitted to the measured NOx levels at ports 2
and 4. Different combinations of peak position and peak temperature can cause the
measured NOx concentrations.

Figure 5-4 shows the possible peak temperatures as a function of peak location that
reproduce the measured NOx levels. The plot shows that to create 1200 ppm at
port 2, 800 ppm at port 4 and an outlet of around 650 ppm, these temperatures have to
prevail up to a certain distance from the burner. The NOx formation of the fitted
temperature profile is shown together with the other modelling results in Figure 5-5.
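Such a fit can be sketched as a one-dimensional root search: for a given peak location, the peak temperature is adjusted until the modelled NOx matches a measured level. The NOx model below is a purely illustrative exponential stand-in for the simulation output, not the detailed mechanism, and its constants are hypothetical:

```python
import math

def fit_peak_temperature(target_ppm, nox_model, t_lo=1500.0, t_hi=2400.0, tol=0.5):
    """Bisection on the peak temperature [C] until nox_model(T) meets target_ppm.
    Assumes nox_model is monotonically increasing in T over [t_lo, t_hi]."""
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if nox_model(t_mid) < target_ppm:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

# Illustrative stand-in for the simulated NOx(T) response (hypothetical constants):
def toy_nox(t_c):
    return 1.27e10 * math.exp(-38370.0 / (t_c + 273.15))

t_fit = fit_peak_temperature(1200.0, toy_nox)
print(round(t_fit))  # a peak in the >2000 C range, consistent with the text
```

Repeating this search for several fixed peak locations traces out a curve of admissible (location, peak temperature) pairs such as the one in Figure 5-4.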
Figure 5-4: Results of fitting the peak temperature and peak location to the measured NOx levels.

Figure 5-5: Comparison of NOx generation between the simulation results and the measurements. A)-D)
indicate the different temperature profiles used in the modelling in the section above.
NOx chemistry in the ECF
Figure 5-6 presents the NOx chemistry inside the furnace with the fitted temperature
profile described above, with a peak temperature of 2095°C at 25 cm from the burner
inlet. At the onset of combustion, around 5-10 cm from the burner inlet, the
hydrocarbon radicals present are consumed extremely fast. The formation of NOx
through that route therefore has a negligible impact on the total NOx formation.
Simulations with lower temperature peaks gave a NOx formation of 5 ppm for
1600°C, which can be seen as the share of prompt-NO formed during the combustion.
The thermal-NO formation peaks shortly after the temperature peak and is the major
source of NOx. The short delay of the formation peak relative to the temperature peak
could be explained by radicals first being generated in the high-temperature region,
which then react further to NOx. In this region the amount of oxygen is also low,
because not enough oxidizer has mixed into the combustion zone yet. In Figure 5-6,
no NOx is formed after 50 cm anymore; the temperature is around 1800°C at this
distance, and the activity of the thermal-NO mechanism declines.
Figure 5-6: Thermal and non-thermal NO formation rates for a 2095°C peak at 25 cm (the scale of
the distance axis is reduced).
5.2. Summary of the gas-phase chemistry modelling

The modelling results show clearly that the measured temperatures do not comply
with the measured concentrations: simulations with the experimental temperatures
produced much lower NOx concentrations than measured. With temperature profiles
estimated from the overall heat balance, the combustion experiments with oil and gas
were well represented by the simulations with methane. As expected, the combustion
of biomass follows a different combustion process than the one seen with methane.

Observations of the adiabatic combustion and the combustion chemistry showed a
very fast combustion of oil and gas within the first 15-40 cm after the burner inlet,
where no measurements were taken. The modelling results show that temperature
levels over 2050°C are necessary to produce the measured NOx exhaust. Thermal
formation of NOx totally dominates the NOx formation for the combustion of oil and
gas.

During the course of the modelling work it was found that the temperature, the
mixing pattern and the location of the temperature peak have a major impact on the
formation of nitrogen oxides; they are therefore the subject of a sensitivity analysis.
5.3. Sensitivity analysis for gas-phase chemistry
Sensitivity of the NOx formation for different temperature peaks

The peak temperature and the peak location are of decisive importance for the NOx
outlet concentrations. In accordance with the earlier modelling results, the peak most
probably occurs at some distance beyond 20 cm from the burner inlet. The next
modelling analysis is therefore dedicated to the effects of the temperature levels for
four different peak locations: 20 cm, 25 cm, 30 cm and 35 cm.

In Figure 5-7, the results of the temperature sensitivity are plotted for the four peak
locations. The peak location has some effect on the exhaust NOx concentration,
especially the hotter the peak temperature gets in the range above 2000°C. But what
can be seen more clearly is the decisive influence of the temperature on the NOx
formation. Considerable amounts of NOx can only be generated if the combustion
gases stay above 1900°C for a sufficiently long time; above this temperature the
formation accelerates exponentially.
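This exponential temperature dependence follows directly from the rate-limiting step of the thermal (Zeldovich) mechanism, O + N2 -> NO + N, whose rate constant has an activation temperature of roughly 38000 K. The sketch below uses commonly cited literature values for this rate constant; treat the exact numbers as indicative rather than as the values used in the simulations:

```python
import math

def k_zeldovich(t_kelvin):
    """Rate constant of O + N2 -> NO + N [cm^3/(mol*s)], the rate-limiting step
    of thermal-NO formation (commonly cited literature parameters)."""
    return 1.8e14 * math.exp(-38370.0 / t_kelvin)

# Rate increase for a 100 K rise around a 2000 C (2273 K) peak:
ratio = k_zeldovich(2373.15) / k_zeldovich(2273.15)
print(round(ratio, 2))  # roughly 2: +100 C about doubles thermal-NO formation
```

The large activation temperature is what makes the peak temperature, rather than the mixing or the peak position, the dominant parameter in Figure 5-7.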
Figure 5-7: NOx exhaust concentration for different temperature peak locations in relation to the
temperature in the peak.
Sensitivity of the NOx formation related to the mixing behaviour

A sensitivity analysis of the mixing behaviour between the inlet and the first
measurement position (70 cm) was carried out, since this region turned out to be
crucial for the NOx formation inside the furnace. With the new mixing behaviour at
35 cm from the burner, new adiabatic calculations of the furnace were conducted. The
adiabatic calculations did not change the peak temperatures; thus, the NOx formation
was basically the same for all new mixing behaviours at 2350°C. In all the adiabatic
cases around 2600-2800 ppm of NOx were formed. Only the position of the
temperature peak, and with it the thermal-NO peak, changed: from 13 cm after the
burner inlet for 150% mixing of air to 43 cm for 12.5% mixing.
The approach with the adiabatic temperatures did not give any hints of how a
different mixing pattern influences the NOx outlet conditions, but it showed that the
main combustion zone moves closer to or further away from the burner inlet. To
examine this effect, two cases were tested. The first involves a fixed temperature
profile with a peak of 2100°C, where only the mixing is changed. The second test case
also has a peak temperature of 2100°C, but the position of the peak is arranged
according to the results of the adiabatic calculations: for faster mixing (150% of the
reference) the combustion moves to 13 cm from the inlet, and for slower mixing the
peak moves to 43 cm.
The results of the two approaches are shown in Figure 5-8. Assuming a fixed peak
position and a fixed temperature, a different mixing could provide an enormous
potential to mitigate NOx according to the simulation results: in the fixed-temperature
case, the potential amounts to 90% lower NOx levels when reducing the mixing by
87.5%. The latter case includes the effects of slower mixing on the temperature peak.
This is assumed to resemble reality more closely, but then also shows a smaller NOx
reduction, peaking at around 50%.
The result for the fixed peak location shows that the amount of oxidizer, and with it
the oxygen present in the peak, is a key factor for thermal-NO generation: the less
oxygen present at this point, the lower the NOx formation. In reality it can be assumed
that slower mixing will also slow the reaction, so that the peak occurs later, where
again more oxygen is present. For the more realistic varied peak, the NOx mitigation
potential is therefore lower. For the modelling itself this means that exact mixing
information is not that important for the stability of the results, as long as the peak
position is arranged to fit the chemical reactions inside the furnace.

For the sake of completeness it has to be mentioned that the temperature was kept at
2100°C for the different mixing behaviours. Slower mixing will possibly also lead to
a lower peak temperature, so that the mitigation effect could be bigger than the results
shown here.
Figure 5-8: NOx generated for different mixing patterns at 2100°C peak temperature. For the first
graph, the peak position was varied according to the adiabatic calculation results. The second graph
shows the influence of mixing with a fixed temperature peak.
5.4. Impact of fuel nitrogen on NOx formation
During the combustion of coal and the biomass co-firing it was discovered that the
combustion is not restricted by the mixing rate of oxidizer and fuel; rather, the
burn-up of the particles themselves was the process limiting the reaction velocity.
The steady build-up of NOx could not be explained with the methane combustion
model.
Sensitivity analysis of the impact of fuel nitrogen

Table 5-3 presents the differences in the simulations when 0.87 mol-% and 1.5 mol-%
of HCN are added to the methane combustion model as a devolatilization source of
fuel-N. Generally, the difference is around 30 ppm. It seems that at higher
temperatures, larger amounts of HCN follow the route towards NO. From the
simulations with both 0.87 mol-% and 1.5 mol-% it is found that there is no fixed
percentage of HCN converted to NOx, because the NOx yields differ only slightly.
Table 5-3: Effects of adding fuel-N in the form of HCN compared to a case without fuel-N. Methane
combustion with the temperature peak at 25 cm from the burner inlet; NOx measured at 17.2% O2. The
NOx levels of the pilot and full scale units fired with and without fuel-N are indicated as ECF and KK2
respectively.

Temperature   NOx [ppm], 0 mol-% HCN    NOx [ppm], 0.87 mol-% HCN   NOx [ppm], 1.5 mol-% HCN
1500°C        3                         31                          37
1600°C        4                         32                          36
1700°C        10                        37                          42
1800°C        29 (KK2 oil)              59 (KK2 coal)               64
1900°C        93                        125                         130
2000°C        273                       315 (ECF coal, ECF bio)     323
2100°C        740 (ECF oil, ECF gas)    784                         790
2200°C        1544                      1610                        1623
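The normalization to 17.2% O2 in Table 5-3 follows the standard dilution correction to a reference oxygen level, where 20.9% is the O2 content of dry air. A minimal sketch; the example values are hypothetical, not taken from the measurements:

```python
def correct_to_reference_o2(c_measured_ppm, o2_measured, o2_reference=17.2):
    """Convert a measured dry-gas concentration to a reference O2 level.

    Standard dilution correction: c_ref = c_meas * (20.9 - o2_ref) / (20.9 - o2_meas),
    with 20.9 being the O2 vol-% of dry air."""
    return c_measured_ppm * (20.9 - o2_reference) / (20.9 - o2_measured)

# Hypothetical example: 500 ppm NOx measured at 15% O2, reported at 17.2% O2.
print(round(correct_to_reference_o2(500.0, 15.0), 1))  # 313.6
```

This correction makes concentrations comparable between operating points with different excess air, which matters here because the air-to-fuel ratio varies along the kiln.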
Even in these simulations with fuel-N, the major part of the NOx formed in the ECF
still has to be generated via the thermal-NO route, because only 30-65 ppm of the
280 ppm for coal 1 could be connected to the fuel-N contribution. The conversion rate
of fuel-N to NOx ranges from 5.8% at 1500°C to 13.5% at 2100°C.

For the full scale KK2, it can be assumed from the difference between oil and coal
combustion there that around 70 ppm of the nitrogen oxides are linked to the
conversion of fuel-N to NOx. Assuming that the combustion of oil causes slightly
higher peak temperatures than coal there, even 90-100 ppm of NOx via the fuel-NO
route are probable for the coal. Table 5-3 shows that higher temperatures promote the
formation of NOx from fuel-bound nitrogen. Taking this into account, it seems more
realistic that more than 100 ppm of the 280 ppm in the ECF stems from the fuel-NO
route.
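The fuel-N contribution can be read off Table 5-3 as the difference between the columns with and without HCN. A small sketch extracting those differences, with the values copied from the table:

```python
# NOx [ppm] from Table 5-3: temperature [C] -> (0 mol-% HCN, 0.87 mol-% HCN)
table = {
    1500: (3, 31), 1600: (4, 32), 1700: (10, 37), 1800: (29, 59),
    1900: (93, 125), 2000: (273, 315), 2100: (740, 784), 2200: (1544, 1610),
}

# Fuel-N contribution = NOx with HCN minus the purely thermal baseline.
fuel_n_ppm = {t: with_hcn - base for t, (base, with_hcn) in table.items()}
print(fuel_n_ppm[1500], fuel_n_ppm[2100])  # 28 44
```

The contribution stays in the roughly 27-66 ppm range over the whole temperature span, while the thermal baseline grows by more than two orders of magnitude, which is why the thermal route dominates at ECF conditions.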
Reaction path analysis of Fuel-N
The comparison of the fuel-N modelling results with the full scale unit was
contradictory, so the chemistry of the modelling with HCN was examined in more
depth in Figure 5-9. One can see there that the HCN is consumed fast while moving
towards the peak location and is fully converted before 20 cm. Thereby only around
7% of the HCN is converted to NOx, whereas conversion rates for fuel-bound
nitrogen to NOx are usually found above 20% [9]. Together with the comparison with
the full scale unit, this indicates that modelling with HCN present from the beginning
converts a bigger share of it towards N2 than can be expected in reality.

The chemical modelling of the solid particle combustion as a methane flame also
deviates too much from the measurements of the species CO and CO2 inside the
furnace. From the measurements, the fuel-N can be assumed to be released over a
longer period of time, during the particle burn-up, and not entirely from the beginning
as shown in Figure 5-9. Furthermore, with the whole fuel modelled as inserted from
the beginning, the combustion leads to a period where no oxygen is present, which
was not measured for the coal and only for a small part of the co-firing experiments.
Figure 5-9: Modelled species for solid particle combustion as a methane flame with 0.87 mol-% HCN
added. Simplified mixing and temperature profiles as shown before, with a 1900°C peak temperature at
25 cm from the burner.
Gas-phase modelling approach to cover fuel-N release

From the chemical analysis it becomes quite clear that the approach used successfully
for oil and gas has some serious drawbacks for the coal and co-firing combustion.
For the solid fuels, a new modelling environment has to be created to cover the
effects seen during the experiments, which were:

- NOx was measured already at 20 cm distance for both burners, which implies
  the early presence of oxygen in the combustion zone. When all fuel is
  modelled from the beginning, an oxygen-lean situation is created that was not
  present in reality.
- CO was measured at all ports, which indicates an ongoing fuel burn-up,
  whereas the methane modelling proposes an instantaneous burn-up.
- The constant NOx build-up for the coal burner indicates high temperatures
  over a longer period than before and/or an ongoing release of fuel-N.
- A build-up of CO2 throughout the furnace.
The described phenomena could not be modelled with the assumption that all fuel is
inserted at the beginning, because this would lead to a complete absence of oxygen in
the first part of the furnace. In this oxygen-lean situation the formation of NOx is
strongly limited, which contradicts the measurement results.

With new mixing profiles and a continuous fuel injection, the O2 and CO2
concentrations measured in the experiments are fairly well represented by the model.
Figures 5-10 and 5-11 show the chemical species of the modelling with the changed
modelling environment. The first case, for the coal, is now able to represent the effect
of continuous combustion and a NOx formation that increases towards port 2. CO2 is
formed throughout the furnace, but because of the large amounts of air mixed into the
combustion zone, the levels are constant.
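The two injection patterns, a linear release for the coal case and a release that is stronger in the beginning for the co-firing case, can be sketched as normalized release profiles over the furnace length. The shapes below are illustrative; the real profiles were tuned to the measured species:

```python
def linear_release(n_segments):
    """Even fuel release over n_segments equal furnace segments (coal case)."""
    return [1.0 / n_segments] * n_segments

def front_loaded_release(n_segments, front_weight=3.0):
    """Release weighted towards the burner (co-firing case): the first segment
    receives front_weight times the share of each remaining segment."""
    unit = 1.0 / (front_weight + n_segments - 1)
    return [front_weight * unit] + [unit] * (n_segments - 1)

for profile in (linear_release(4), front_loaded_release(4)):
    assert abs(sum(profile) - 1.0) < 1e-9  # each profile injects all the fuel
print(front_loaded_release(4))  # first segment gets half the fuel
```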
Figure 5-10: Modelling result for a coal temperature profile at 1700°C, continuous fuel injection and a
simplified oxidizer mixing; 0.87 mol-% HCN added as fuel-N.
The co-firing experiment in Figure 5-11 was modelled with a fuel injection that is
more intense in the beginning, but continuous over a long distance in the reactor.
With this, the very high CO2 levels in the beginning can be reproduced, and the early
NOx build-up effect is also obtained. Both approaches lead to a conversion rate of
fuel-N to nitrogen oxides of over 20%, or a contribution of more than 100 ppm. This
is because, this time, not all the HCN reacts in a high-temperature and oxygen-lean
environment, as it did close to the burner inlet in the previous approach.
Figure 5-11: Modelling result for a co-firing temperature profile at 1800°C, continuous fuel injection
with a stronger injection in the beginning and a simplified oxidizer mixing; 0.87 mol-% HCN added as
fuel-N.
5.5. Sensitivity analysis of the fuel-NO contribution

Having shown that the new modelling approaches cover the chemistry inside the kiln
more accurately, a sensitivity analysis is carried out to see the influence of each single
parameter on the model. Within the sensitivity analysis, the added fuel-N, the
temperature levels, the fuel injection and the oxidizer mixing patterns are varied to see
which modelling results best compare to the measurements.
Fuel-N contribution to NOx formation

In Figure 5-12, a linear fuel injection into the furnace was used to cover the effect of
an ongoing release of volatiles and burn-up of fuel. For the temperature profile used,
the NOx originating from the 0.87 mol-% HCN, which was used to model the fuel-N
compound, was around 130 ppm. For 1.5 mol-%, around 230 ppm more fuel-NO was
formed compared to the case without any fuel-N contribution. This corresponds to a
fuel-N conversion rate of 27% for both HCN concentrations. In this modelling
approach, the fuel-NOx accounts for around 50% of the total NOx created.

Modelling the fuel-N without the influence of the thermal-NO route (below 1500°C)
gave an outlet concentration of 140 ppm. This shows that the fuel-N contribution is
quite independent of the temperature at the general conditions inside the ECF.
Comparing the simplistic approach, with all the fuel modelled from the beginning, to
the ramped injection of the fuel, the position inside the furnace and the combustion
conditions there determine how much of the fuel-N later forms NOx or N2.
Figure 5-12: Influence of fuel-N on total NOx formation. Fuel-N modelled as an HCN compound in
the fuel.
Thermal-NO influence on total NOx levels

In the gas-phase modelling, the thermal-NOx formation rose exponentially with the
temperature when crossing the border of around 1800°C. The same conclusion can be
drawn from the modelling of the solid fuels: a rise of 100°C in temperature doubles
the NOx formation, as can be seen in Figure 5-13. The thermal-NO formation remains
the key criterion for lower overall NOx emissions.
Figure 5-13: Influence of different temperature levels on total NOx formation.
The influence of the fuel injection pattern on the furnace

It was described that the combustion during the co-firing was more intense close to
the burner than the combustion with the reference burner. To examine this effect, the
fuel release was modelled with two different release behaviours. Figure 5-14 shows
that an early release of the fuel lowers the overall NOx emissions. This has already
been seen before, when the complete fuel was inserted at the beginning of the
modelling.
The high amount of fuel reduces the oxygen present during the time when HCN is
converted; therefore, the earlier the fuel is released in the furnace, the more the route
towards N2 instead of NOx is promoted.
Figure 5-14: Influence of different fuel release profiles on the total NOx formation.
Influence of the mixing behaviour

Varying the speed at which the oxygen is mixed into the main combustion zone
showed no big influence on the total NOx levels during the combustion at 1800°C,
unlike what was detected for the gas-phase modelling at 2050°C. In Figure 5-15 it
seems more as if a certain amount of NOx is generated that is merely diluted at
different speeds by the differing mixing behaviours. It has to be noted that all the
mixing patterns still ensure continuously oxygen-rich conditions, because the fuel is
inserted gradually. For the modelling this means that the amount of NOx formed is
quite independent of the mixing behaviour, and that the size of the modelled
combustion zone is set by the mixing pattern.
Figure 5-15: Influence of the oxidizer mixing on the total NOx formation.
5.6. Summary of the modelling work on fuel nitrogen
Adding HCN, a typical fuel-N devolatilization component, to the methane increased
the NOx formation by around 30 ppm. The HCN to N2 conversion was in this case
surprisingly fast: at a distance of 20 cm, more than 90% of the HCN had been
converted to N2, and less than 10% of the HCN was later found as NOx.

A gradual injection of fuel into the main combustion zone, simulating the release of
volatile nitrogen, gave more reasonable results with respect to the in-furnace NOx, O2
and CO2 concentrations. Because solid fuels are better described by a continuous
combustion than by an instantaneous one, like gas and oil, this injection pattern was
used thereafter.
With the new injection pattern of fuel, it was found that the timing of the release of
the HCN compound is of high significance for the NOx emissions: an early release
promotes the conversion of HCN to N2. The reason for this could be both the lower
concentration of oxygen and the higher temperatures early in the reactor.

The mixing of oxygen has in this approach only a minor influence on the NOx
exhaust, because there is an abundance of oxygen almost throughout the furnace. But
as before, the temperature is the key factor in avoiding NOx emissions. The detailed
chemical reaction mechanism also indicates that the conversion of fuel-N to NOx is
promoted at higher temperatures.
In summary, the modelling results show that between 10% (for fast combustion) and
30% (for the ramped combustion) of the fuel-bound nitrogen is later found as NOx.
This corresponds to a fuel-NOx contribution of 50-150 ppm. In the ECF a total of
around 250 ppm was found, so the corresponding share is 20-60%. In the KK2, total
NOx emissions of 100-125 ppm are found; hence, the fuel-bound nitrogen can
contribute 50-100% of the total NOx emissions there. Assuming that especially the
first modelling approach, with the whole HCN modelled from the beginning,
underestimates the importance of fuel-N, the fuel-NOx contribution is considered to
be higher than 20% and 50%, respectively, for the two plants.
In Table 5-4, the results for the furnaces are summarized. The results show clearly
that the combustion situation in the ECF differs from that in the KK2 unit: the
combustion is several hundred degrees hotter in the pilot scale ECF, leading to a
different NOx formation behaviour than in the full scale kiln.
Table 5-4: Comparison of the different furnaces according to their NOx formation.

Parameter               ECF coal/BM     ECF gas/oil     KK2 coal       KK2 oil
Peak temperature        1700-1850°C     2050-2180°C     <1700°C        ~1850°C
Total NOx               ~250 ppm        ~680 ppm        100-125 ppm    50-115 ppm
Fuel-NOx of total NOx   30-60%          0%              70-100%        0%
6. Conclusions
In this work, the formation of nitrogen oxides in a rotary kiln test facility was
examined. For this purpose, the data of an experimental campaign was evaluated and
the nitrogen chemistry inside the furnace was modelled.

The experimental evaluation revealed problems with the position of the main
combustion zone and with the measured temperatures. Table 5-4 shows that coal and
biomass generally led to lower NOx formation than oil and gas inside the ECF. The
full scale plant KK2, in contrast, exhausts considerably lower emissions for both oil
and coal, and unlike in the ECF, oil there causes even lower NOx formation than coal.
Because the real temperatures in the kiln remained unclear after the measurement
campaign, the contribution of thermal-NO to the total concentration can only be
examined accurately for oil and gas, where it is by far the major contributor. The
gas-phase modelling for these two fuels revealed that peak temperatures of over
2000°C had to be present inside the experimental furnace to cause the measured NOx
levels, whereas only around 1500°C was measured.

The sensitivity analysis of the gas-phase chemistry showed that the NOx formation in
the kiln is highly dependent on the temperature in the furnace. The mixing of fuel and
oxidizer, as well as the position of the temperature peak, have a minor influence on
the formation.
For solid fuel combustion, the share of the NOx formation that can be traced to fuel-N
was investigated. A sensitivity analysis showed that in the ECF the fuel-NOx
accounts for roughly half of the NOx emissions, while considerable amounts are also
formed via the thermal-NOx route. This is in contrast to the full scale KK2 plant,
where NOx from the fuel-bound nitrogen prevails. The conversion of fuel-N to NOx
depends strongly on the time at which the nitrogen is released.
7. Proposals for future campaigns
The main aim of this work was to develop proposals to improve the upcoming
measurement campaigns and their research outcome.
7.1. Improving the measurement quality

The main problems seen within the campaign were the temperature measurements,
which were found to be too low to match the other measurement data, and the
position of the flame. The first thing to improve is therefore the arrangement of the
temperature probe, especially its gas inlet velocity, so that it can measure reliably in
this high-temperature environment. With correct temperature measurements, more
accurate estimates of the thermal-NO formation can be made.
The shifted position of the flame also poses a possible source of error for the correct
modelling of the NOx formation. The oil, gas and coal reference burners have to be
investigated to see whether they create the shifted flame. A possible examination
could consist of mounting the burners at different angles compared to the former
installation to see if the burners cause the defect. It is also possible that the air inlets
create the flame shift; investigating a reduced or increased flow through the air hoods
could therefore shed light on whether this effect creates the fluctuations.
The missing measurements for oil and gas combustion at port 1 are a major drawback,
not only for the modelling of the two fuels but also for drawing conclusions for the
overall gas-phase modelling of the ECF. It was shown that the combustion inside the
ECF is very fast close to the burner; hence, the measurements at port 2 could only
capture the outcome of the combustion. For the later examination of heterogeneous
reaction paths, however, detailed knowledge of the gas-phase reactions at this point is
indispensable.
7.2. Comparing NOx formation in full scale and pilot scale
The KK2 unit produces exhaust levels of around 110 ppm during the combustion of
coal and to some extent lower emissions for oil. In the ECF the effect was the opposite:
oil and gas released 2.5 times higher emissions of NOx than coal. Generally,
the ECF produced distinctly higher levels for all fuels, ranging from 215 ppm to
680 ppm. The measurement campaigns during the ULNOX project pointed at a
limited comparability of the two furnaces, and this work also comes to the conclusion
that the combustion situation inside both differs sharply.
The difference between coal and oil in the KK2 is explained by the contribution of
fuel-N from the coal. NOx formed during the combustion of oil can mainly be linked
to thermal-NO formation. The somewhat higher peak temperatures occurring for oil
are overcompensated by the fuel-NO.
In the ECF instead, the main contribution is triggered by the high temperatures, which
must exceed the temperatures in the KK2 by at least 100°C to generate the measured
NOx exhaust levels. A more precise specification of the difference is prevented not only
by the lack of correct measurement data for the ECF, but also by the missing detailed
examination of the flame and combustion situation inside the KK2.
To get information from experiments at the ECF that can also be applied at full scale,
the shape of the flame and its peak temperature are key factors. The flow situation is
shown to be of minor importance. Therefore extensive temperature measurements are
necessary inside the full scale kiln. In addition, correct temperature measurements inside
the ECF could facilitate the examination of different fuels with their fuel-N release
and speed of combustion.
7.3. Future modelling of the NOx formation
As soon as the temperature measurements are accurate, they can be included as input
factors into the modelling work and then give precise information on the thermal-NO
formation. A modelling approach should investigate to what extent the gas-phase
chemistry is independent of the presence of fuel-N in the furnace. With information on
the interaction of thermally and fuel generated NOx, a first precise evaluation of
thermal-NO for solid fuels can be derived and afterwards the fuel-NO contribution
estimated accurately. Also a more detailed analysis of the heterogeneous NOx
formation mechanisms can be carried out to check how the particles interact with the
gases. Knowing where and how much fuel-NO is formed will help to develop models
that can predict the outcome of several mitigation strategies. The major drawback so
far is the lack of knowledge of how the fuel-N path is influenced by the high amount
of oxidizer present and the very high temperatures.
When thinking about the KK2, it could be beneficial to have a simplistic gas-phase
model of the chemical reactions inside it as well, to have a reference for comparing the
results. For this purpose the oil combustion could be examined and directly compared
to the same situation in the ECF. This information could then help to make the ECF
more comparable to the full scale unit.
ANTON KULLH
JOSEFINE ÄLMEGRAN
Abstract
The current low platinum price has put high pressure on the industry and forced
companies to introduce cost cutting efforts as well as productivity increasing actions.
Increasing the productivity can be done either by increasing the output or decreasing
the amount of consumed resources. This project has focused on the latter. There are
several productivity increasing methods, such as Total Productive Maintenance (TPM)
and Lean, to utilise. In the mining industry these methods have not been used to the
same extent as in, for instance, the automobile industry to improve productivity.
Existing research in mining mostly deals with the technical aspects of the process,
such as optimising single units.

This project has three distinct phases and will use the incorporated tools of TPM and
Lean to, firstly, define a calculation model of equipment performance metrics for a
single stream comminution process. Secondly, a tool to perform real time calculations
of defined metrics will be developed. Thirdly, a method for using the tool output in
the organisation in a value creating way, with primary focus on finding root-causes to
productivity limiting issues, will be designed.

The project is a collaboration between Chalmers University of Technology,
Gothenburg, Sweden, and the University of Cape Town, South Africa. The project
sponsor is Anglo American Platinum and the plant where the project has been
conducted is Mogalakwena North Concentrator (MNC), Limpopo, South Africa.
MNC is ranked as the biggest single-stream platinum concentrator in South Africa
and one of the largest facilities of its type in the world (Mining Weekly, 2008).

The Master's thesis writers have developed a method for calculating Overall
Equipment Effectiveness (OEE) in a comminution process. The method incorporates
a way to calculate quality, which is a parameter that has previously not been defined
for a comminution process.

A method called Pain analysis has been developed by the Master's thesis writers to
display duration and frequency of the reasons that cause the stops in the process. This
new way of displaying stop data has been appreciated by its users and has received
positive response from the organisation.

The developed Overall Productivity Tool (OPT) is at this stage a fully functional
software used by MNC in daily work. The methods and day-to-day tools developed
in this Master's thesis project will be incorporated in new software developed by
Anglo American Platinum. The software is to be implemented throughout the
organisation.

Answers to the research questions are provided at the end of the report as well as
recommendations for the operations at Mogalakwena North Concentrator.

Keywords: Overall Equipment Effectiveness (OEE), productivity, efficiency, quality,
availability, performance, overall utilization, process pain, platinum, concentrator,
Overall Productivity Tool (OPT)
ACKNOWLEDGEMENTS
A number of people have been highly valuable to us in this Master's thesis project.
We would like to mention them here and send them our deepest gratitude for assisting
us throughout the project.

Great thanks to our examiner Prof. Magnus Evertsson (Chalmers University of
Technology) as well as to our supervisors Dr. Erik Hulthén (Chalmers University of
Technology) and Dr. Aubrey Mainza (University of Cape Town), who together
initiated this project. Without their early vision of creating a Master's thesis project
focused on increasing the productivity at Mogalakwena North Concentrator (MNC),
this project would not have been born.

We would also like to thank Neville Plint and Gary Humphries at Anglo American
Platinum, who always have been just an email away and provided guidance and
feedback throughout the project.

Thanks to Senior Concentrator Manager Barry Davis for his positive attitude to this
project from day one, for authorising us to get access to the plant as well as his office,
and for providing us feedback throughout the project. We give many thanks also to
Plant Manager Ellie Moshoane and Technical Manager Dane Gavin, who have acted
as supervisors on site and supported our work and provided reflections and feedback
every day. We would also like to thank the metallurgists on site, Albert Blom, Sithi
Mazibuko, Felix Mokoele, Herman Kemp and Howard Saffy, who always have been
patient with our questions and given their input to our work.

We also want to send our gratitude to the brilliant PhD students at Chalmers Rock
Processing Systems, Johannes Quist and Gauti Asbjörnsson, for their invaluable input
and encouragement in all sorts of matters throughout the entire project.

We extend a huge thanks to Barbara Andersen at the University of Cape Town, who
arranged everything we could possibly need during our entire stay in South Africa.
Without her help we would have been forced to put more effort into booking flights
than typing code.

Finally, we would like to thank all employees at Mogalakwena North Concentrator
for supporting our work, patiently allowing us to ask questions and providing useful
answers.

Last but not least, we would like to thank our families and friends for all
encouragement and support during this exciting journey from the 5th floor of the
Mechanical Engineering building at Chalmers to one of the world's largest platinum
concentrators.
CHAPTER 1 - Introduction
1.1 PROJECT INTRODUCTION

The project is a collaboration between Chalmers University of Technology,
Gothenburg, Sweden, and the University of Cape Town, South Africa. The sponsor of
the project is Anglo American Platinum and the plant where the project has been
conducted is Mogalakwena North Concentrator (MNC), Limpopo, South Africa.
MNC is ranked as the largest single-stream platinum concentrator in South Africa and
one of the largest facilities of its type in the world (Mining Weekly, 2008).

The examiner of this thesis is Prof. Magnus Evertsson (Chalmers University of
Technology). Dr. Erik Hulthén (Chalmers University of Technology) and Dr. Aubrey
Mainza (University of Cape Town) have acted as supervisors. The writers of this
Master's thesis are B.Sc. Anton Kullh (Chalmers University of Technology) and
B.Sc. Josefine Älmegran (Chalmers University of Technology).

This project was initiated by the supervisors who wanted to investigate how to
increase productivity in a comminution process. The project scope has thereafter
evolved throughout the project as the writers have gained more knowledge in the area
of research. It is important to have a valid measure of productivity in order to be able
to increase it and to be able to judge if your efforts are contributing to productivity
improvements. The final scope is set as three distinct phases and can be viewed under
1.5 Project Scope.

1.2 COMPANY INTRODUCTION

Anglo American Platinum Limited is a South African company which holds about
40% of the world's newly mined platinum, making them the world leading primary
producer of platinum. The equivalent of refined platinum produced by their own
mines amounted to about 44 tons in 2011.

To operate more effectively and efficiently Anglo Platinum recently accomplished a
thorough reconstruction and they now operate nine individual mines around South
Africa. One of them is Mogalakwena Mine, which is situated 30 kilometres northwest
of Mokopane in the Limpopo province and operates under a mining right covering a
total area of 137 square kilometres (Anglo American, 2012). Mogalakwena Mine
provides ore to Mogalakwena South Concentrator (MSC) and Mogalakwena North
Concentrator (MNC). MNC is ranked as the largest single-stream platinum
concentrator in South Africa and one of the largest facilities of its type in the world
and is the plant where this Master's thesis project was conducted (Mining Weekly,
2008).

1.3 BACKGROUND

South Africa accounts for nearly 80% of the global platinum production, which
makes the platinum price highly influenced by the economy of the country.

During the last five years there has been a large decrease in the platinum spot price.
In March 2008 the price peaked at 2273 USD/oz t (Kitco, 2012), which can be
compared to the spot price at the start of this project (mid-August 2012); 1485
USD/oz t (Kitco, 2012).

The price drop has several explanations. Firstly, the price spiked in 2008 due to a
prospected supply shortage. Secondly, the sector has experienced wage inflation in
excess of the general inflation. Thirdly, the strengthening of the rand has decreased
the gap between dollar-denominated sales and rand-based costs. As a result of this,
some high cost mines have had trouble running
profitable operations during recent years. (Mail & Guardian, 2011)

This significant drop in price has put high pressure on the industry and forced
companies to introduce cost cutting efforts as well as productivity increasing actions.
This project will deal with the aspects of productivity.

An increase in productivity can gain several stakeholders and be profitable not only
for the company itself, but also for the nearby communities, the region and the
country.

1.4 EARLIER EFFORTS IN THIS AREA

In the mining industry TPM and Lean have not been used to the same extent to
improve productivity as in, for instance, the automobile industry. There is some
research literature on productivity increasing efforts in comminution processes but it
mostly deals with the technical aspects of the process, such as optimising a single unit,
i.e. a ball mill. This approach can result in an unintended sub-optimisation instead of
an optimisation of the entire process chain. The research has a gap concerning the
usage of the above mentioned methodologies to improve the total productivity of the
comminution process. Therefore, this thesis aims to help fill the gap and explain how
to work with productivity increasing efforts throughout the entire process, instead of
only in single units.

1.5 PROJECT SCOPE

To increase productivity, the process needs to be completely comprehended and
controlled. It is highly important to have a valid and accurate method of calculating
equipment performance metrics, so that it can be monitored. Further, it will allow for
the results of performed productivity increasing actions to be analysed and evaluated.
This is in agreement with the author of the book TPM Vägen till ständiga
förbättringar, Örjan Ljungberg:

"What you do not measure, you cannot control, and what you cannot control, you
cannot improve." (Ljungberg, 2000, p.37)

The methods TPM and Lean and their incorporated tools will form a basis for this
Master's thesis project, which is divided into three distinct phases. Firstly, a
calculation model of equipment performance metrics for a single stream comminution
process will be defined. This calculation model will be aligned with the Anglo
American Equipment Performance Metrics Standard and fully functional for a
single-stream comminution process. Secondly, a tool to perform real-time calculations
of the defined metrics will be developed. To achieve this, user friendly software
which automatically computes the defined equipment performance metrics for the
equipment included in the project will be developed. Thirdly, a method will be
designed for using the tool output in the organisation in a value creating way. The
primary focus will be on designing a method for finding root-causes to productivity
limiting issues.
Figure 1 – The three phases of this Master's thesis project:
1. Define a calculation model for OEE & other equipment performance metrics in a single stream process
2. Develop a tool that calculates OEE & other equipment performance metrics in real time
3. Design a method describing how to use the tool output in the organisation with primary focus on finding root causes
1.6 RESEARCH QUESTIONS

The following research questions have been formulated for this Master's thesis. The
research questions cover the areas of Overall Equipment Effectiveness (OEE),
performance indicators, productivity and SHE (Safety, Hygiene, Environment).

1. How can a method be developed to define and rank process units critical to
   productivity in a comminution process?
2. How should OEE numbers be calculated in a single-stream comminution process?
3. Which factors in the process chain are more critical to productivity – according to
   the OEE philosophy?
4. How can OEE be used as a performance measure of equipment and process
   performance?
5. How can OEE help to improve SHE (Safety, Hygiene, Environment)?
6. How can measuring OEE help to improve productivity?

1.7 DELIMITATIONS

To fulfil the purpose of this Master's thesis, the project concentrated on the initial
part of the comminution circuit of a minerals processing plant, which at MNC
includes the primary gyratory crushing, the secondary crushing, HPGR-crushing,
classifiers, feeders, and conveyors.

The research was limited to this area for two reasons: it is a good idea to start the
implementation in a small scale, and it would have been too time consuming for the
Master's thesis to include a larger section of the plant.

The decision to limit the project to a sub-set of the plant is supported by Idhammar
(2010), who argues that implementing OEE in a part of the plant will facilitate an
implementation throughout the plant at a later stage. Idhammar also states that an
early pilot can eliminate issues and provide useful training and experience for staff
members.

The developed tool will be fully functional, however not completely integrated into
the Scada and PI-system at the plant.
CHAPTER 2 – Theoretical Framework
2.1 PRODUCTIVITY

A general definition is that productivity is the amount of output per unit of consumed
resources or total costs incurred. Thus, productivity is defined as how efficiently the
resources are being utilised in the production of goods or services. Increasing the
productivity can hence be done by increasing the output, in terms of volume and
quality, or decreasing the amount of consumed resources with a constant or increased
output. (Prokopenko, 1992)

Productivity = Output / Input    Equation 1

The essence of productivity improvement is working more intelligently, not working
harder. There are several methods to help increase productivity. Two of the most
well-known are Total Productive Maintenance (TPM) and Lean. These methods, and
some of their incorporated tools, will be described in more detail in the following
sections.

2.2 TOTAL PRODUCTIVE MAINTENANCE (TPM)

Total Productive Maintenance (TPM) was first coined in Japan in the beginning of
the nineteen seventies as a way of increasing the availability of machines and
equipment by better utilising the maintenance resources. Simply put, it is about
keeping machines and equipment in good condition without interfering with daily
production. (Ljungberg, 2000)

The methodology was created in order to support the Japanese effort to implement
Lean in most of their industries. The fundamental idea of TPM is to involve the
operators in the maintenance and make sure that they conduct most of the day-to-day
activities instead of having the maintenance team do everything (Nakajima, 1989).
Ahuja (2011) recognises that for a long time companies have seen maintenance as a
static support activity instead of as a key component for revenue generation, which he
claims it is. TPM demands collaboration between all functions in an organisation,
even though the most extensive is found between production and maintenance, where
the greatest synergies can be created. TPM is aimed at changing people's mind-set
towards maintenance rather than providing a perfect tool that will solve all problems
(Ahuja, 2011). According to Smith and Hawkins (2004) the primary focus of TPM
should be to reduce and eliminate the effectiveness losses, since this is where the
biggest gains can be achieved.

2.3 OVERALL EQUIPMENT EFFECTIVENESS (OEE)

There are many ways of measuring how well-functioning a certain unit is. One of the
most common methods in the industry is to use availability as a metric. Other
common methods are MWT (Mean Waiting Time), MTTR (Mean Time To Repair)
and MTBF (Mean Time Between Failure). However, these parameters, and many
other common parameters, do not give a comprehensive view of units and equipment
if displayed alone. Included in TPM, there is a metric called OEE (Overall Equipment
Effectiveness), which gives a more inclusive view of the value added by the unit,
compared to other metrics.

It is important to identify the factors that limit the process from receiving higher
OEE = Availability x Performance x Quality    Equation 2

Availability = Gross Operating Time / Planned Production Time
             = (Planned Production Time - Unplanned Downtime) / Planned Production Time    Equation 3

Performance = Net Operating Time / Gross Operating Time
            = (Ideal Cycle Time x Total Pieces) / Operating Time    Equation 4

Quality = Valuable Operating Time / Net Operating Time
        = Good Pieces / Total Pieces    Equation 5
effectiveness and to understand how the
process is performing. OEE is a method
that can help to do this by giving a better
understanding of how well a process is
performing and by identifying what the
limiting factors are. (Hansen, 2001)
An OEE number of 100% corresponds to a
unit which is performing at its maximum
capacity – always running, always at the
optimal speed and producing perfect
quality.
It is essential to note that OEE is more than just one number; it is, in total, four
numbers, which are all individually useful. The OEE measurement combines the
availability of the machine, the performance rate and the quality rate in one equation.
(M. Maran et al., 2012)
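As an illustrative sketch (not part of the thesis), the general OEE definition in Equations 2-5 can be evaluated from raw time and piece counts; the function and variable names below are assumptions made here, not from any standard:

```python
def oee(planned_time, unplanned_downtime, speed_losses, good_pieces, total_pieces):
    """Compute OEE from the general definition (Equations 2-5).

    All times share one unit (e.g. hours); piece counts are integers.
    """
    gross_operating_time = planned_time - unplanned_downtime   # Eq. 3 numerator
    availability = gross_operating_time / planned_time         # Eq. 3
    net_operating_time = gross_operating_time - speed_losses   # Eq. 4 numerator
    performance = net_operating_time / gross_operating_time    # Eq. 4
    quality = good_pieces / total_pieces                       # Eq. 5
    return availability * performance * quality                # Eq. 2

# Illustrative example: 8 h planned, 0.8 h breakdowns, 0.36 h speed losses,
# 99 of 100 pieces good: availability 0.90, performance 0.95, quality 0.99
print(round(oee(8.0, 0.8, 0.36, 99, 100), 3))  # 0.846
```

Note that the three factors multiply, so a unit that looks acceptable on availability alone can still have a poor OEE once speed and quality losses are included.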
2.3.1 GENERAL OEE DEFINITION

There is no established standard for how to calculate OEE for a single stream
comminution process. A version frequently used in the manufacturing industry,
derived from (Nakajima, 1989), is presented in Equation 2, and explained in more
detail in Equations 3, 4, 5 and Figure 2.

Availability is the ratio between Gross Operating Time and Planned Production Time,
where Gross Operating Time is Planned Production Time minus Unplanned
Downtime. For definitions of parameters, see Figure 2.

Performance is the ratio between Net Operating Time and Gross Operating Time,
where Net Operating Time is Gross Operating Time minus Speed Losses, or the ratio
between the actual speed and the nominal, budgeted, or target cycle time.

Quality is the ratio between Valuable Operating Time and Net Operating Time, where
Valuable Operating Time is Net Operating Time minus Quality Losses, or the ratio
between Good Pieces and Total Pieces. The OEE calculation parameters are described
in Figure 2.
Figure: Freely after Method and a system for improving the operability of a production plant
Figure 2 – Definitions of the general OEE parameters
The Performance metric (see Equation 7) is stated as "The portion of the OEE Metric
which represents the production rate at which the operation runs as a percentage of its
targeted rate". It is calculated as the ratio between Actual Production Rate and Target
Production Rate, where Actual Production Rate is the ratio between Actual
Production Achieved and Primary Production (P200), whereas the Target Production
Rate is defined as an input. (Anglo American Equipment Performance Metrics, 2012)

The Quality is stated as "The portion of the OEE Metric which represents the Quality
achieved at an operation as a percentage of its targeted Quality" and is calculated as
the ratio between Actual Quality and Target Quality (see Equation 8). Both the
numerator and the denominator are stated as inputs and no calculation method for
them is provided in the Anglo American Equipment Performance Metrics. (Anglo
American Equipment Performance Metrics, 2012)

2.3.3 OEE BENCHMARK

According to M. Lesshammar (1999) most equipment's OEE ranges from 40-60 %,
whereas the world-class level is said to be 85 %. Smith and Hawkins (2004) have
defined the world-class level at 85 % and it is composed of an Availability of 90 %,
Performance of 95 % and Quality of 99 %, which creates Equation 9.

According to Hansen (2001) very few companies calculate OEE or use it to maintain
and set new priorities. He has defined different levels of OEE for companies to aim
for, which can be described as follows:

‐ < 65 % Unacceptable. Money is constantly lost. Take action!
‐ 65-75 % OK, only if improving trends can be shown over a quarterly basis.
‐ 75-85 % Pretty good. But keep working towards the world-class level.

According to Hansen (2001) a batch type process should have a world-class OEE
of >85 %, for continuous discrete processes the OEE value should be >90 % and for
continuous on stream processes the OEE value should be 95 % or better.

2.3.4 OEE ECONOMICS

It is often hard to measure the financial benefits of proposed improvement projects
and it is easy to overlook important projects and instead prioritise average projects.
Bottlenecks are what prevent a process
Overall Utilisation = Primary Production (P200) / Total Time (T000)    Equation 6

Performance = Actual Production Rate / Target Production Rate
            = (Actual Production Achieved / Primary Production) / Target Production Rate    Equation 7

Quality = Actual Quality / Target Quality    Equation 8

90 % Equipment Availability x 95 % Performance Efficiency
x 99 % Rate of Quality = 84.6 % OEE    Equation 9
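As a hedged sketch of how the metrics in Equations 6-8 could be computed, with all function names and example numbers being illustrative assumptions rather than prescriptions from the Anglo American standard:

```python
def overall_utilisation(primary_production_time, total_time):
    """Equation 6: Primary Production (P200) as a share of Total Time (T000)."""
    return primary_production_time / total_time

def performance(actual_production, primary_production_time, target_rate):
    """Equation 7: actual production rate as a fraction of the target rate."""
    actual_rate = actual_production / primary_production_time  # e.g. tonnes per hour
    return actual_rate / target_rate

def quality(actual_quality, target_quality):
    """Equation 8: achieved quality as a fraction of the targeted quality."""
    return actual_quality / target_quality

# Illustrative example: 5000 t produced during 10 h of primary production
# out of 12 h total time, against a hypothetical 550 t/h target rate
print(round(overall_utilisation(10, 12), 3))  # 0.833
print(round(performance(5000, 10, 550), 3))   # 0.909
```

The split mirrors the standard's structure: Performance compares rates, while Quality is a pure ratio of two externally supplied inputs.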
throughput and limits a plant from becoming effective; therefore, bottlenecks should
be the first place where OEE is applied. In order to prioritise the OEE improvement
projects relative to the average ones, it is important to be able to show the financial
gains. (Hansen, 2001)

Hansen (2001) has shown that there is a link between OEE and critical financial
ratios and that a company that understands and applies OEE improvement projects
will harvest dividends year after year, since OEE improvement projects work to
eliminate the root causes of problems.

According to Hansen (2001) a 10 % increase of OEE from 60 % to 66 % will give:

‐ 21 % increase of Return on Assets (ROA)
‐ 10 % increase of capacity
‐ 21 % improvement of the operating income

He also states that starting on a low OEE, rather than on a high OEE, makes it easier
to find opportunities to improve.

Ahlmann (2002) discusses the financial implications of an increased OEE from 60 %
to 80 % in Swedish industry and argues that it shows a 20 % economic improvement.

2.4 CONTINUOUS IMPROVEMENT – KAIZEN (改善)

The term kaizen, in Japanese, literally means change (kai) for the better (zen). Kaizen
is defined by Oxford Dictionaries as: "a Japanese business philosophy of continuous
improvement of working practices, personal efficiency, etc."

Continuous improvement is the process of making incremental improvements, no
matter how small, to achieve the lean goal of eliminating all waste that does not add
any value but only adds cost. Kaizen teaches employees skills to work more
effectively in smaller groups, solving problems, documenting and improving
processes, collecting and analysing data, and also to self-manage within the peer
group. (Liker and Convis, 2012)

The concepts of Kaizen started in the early days of Toyota and included the now
famous concepts of just-in-time (JIT), process flow and quality improvements.
Kaizen can be divided into six main steps which became the basis for the Toyota
Kaizen course developed by the company in the 1970's. (Kato and Smalley, 2011)

1. Discover Improvement Potential
2. Analyse the Current Methods
3. Generate Original Ideas
4. Develop an Implementation Plan
5. Implement the Plan
6. Evaluate the New Method

It has been understood that, in reality, continuous improvements are impossible since
some parts of the process sometimes need to be operated in the same way as the day
before. Everything cannot be changed for the better every day. Continuous
improvement is a vision, a dream, which no company can totally master. (Liker and
Franz, 2011)
2.4.1 THE FIVE WHYS – 5 WHYS

One root-cause finding technique included in the kaizen methodology is the 5 WHYs.
It implies asking why a problem exists five times, going to a deeper level with each
why until the root cause of the problem is found. The user of the technique should
take countermeasures at the deepest feasible level of cause and at the level that will
prevent reoccurrence of the problem. (Liker and Convis, 2012)

To visualize the root causes, which may be multiple, an Ishikawa diagram (also
known as a fishbone diagram) can be used in order to create a clear picture of the
current situation and to map out the possible root causes (see Figure 4). (Perrin, 2008)

Picture: Real World Project Management
Figure 4 - Ishikawa diagram

2.5 PERFORMANCE MEASUREMENTS

Using performance measures is a procedure aimed at collecting and reporting
information regarding the performance of an operation or individual parts thereof.
This procedure can help the organisation to define and measure the goals it is aiming
to achieve.

In the industry, performance measures are most often denoted as KPIs (Key
Performance Indicators). Widely used KPI metrics are, for instance, cycle time, Mean
Time Between Failure (MTBF) and utilisation. (Taylor Fitz-Gibbon, 1990)

Halachmi (2005) elaborates on the logic of reasons in support of introducing
performance measurement as a promising way to improve performance. This
strengthens the motives for measuring the performance of the operations.
"If you do not measure results, you cannot tell success from failure…

If you cannot recognize failure, you will repeat old mistakes and keep wasting
resources.

If you cannot relate results to consumed resources, you do not know what is the real
cost..." (Halachmi, 2005, p.504)

2.5.1 MEAN TIME BETWEEN FAILURES (MTBF)

Mean Time Between Failures (MTBF) is a measure of asset reliability. It is the
average time between one failure and another failure for repairable assets (see
Equation 10). An increasing MTBF indicates improved asset reliability. MTBF is best
when used on asset or component level and should be performed on critical assets and
trended over time. Low MTBF numbers should be approached with analysis (i.e.,
root-cause failure analysis (RCFA) or failure mode and effect analysis (FMEA)) in
order to identify how the asset reliability can be improved. (Gulati, 2009)

2.5.2 MEAN TIME TO REPAIR (MTTR)

Mean Time To Repair (MTTR) is a measure of the average time required to restore an
asset back to working condition (see Equation 11). In the context of maintenance,
MTTR is comprised of two parts; the first is the identification of the problem and the
required repairs; the second is the actual repair of the equipment. One factor that will
influence MTTR is the severity of the breakdown; another factor is the quality of the
maintenance itself. A high MTTR should be approached with good troubleshooting
methods, to quickly identify the root cause, and the maintenance actions should be
reviewed regularly to identify improvement opportunities. (Mahadevan, 2009)

2.6 VALUE STREAM MAPPING (VSM)

Value Stream Mapping (VSM) is a lean manufacturing technique used to analyse the
flow of materials and information in a system.

Value Stream Mapping involves all process steps, both value added and non-value
added ones. In that way Value Stream Mapping can be used as a visual tool to help
identify the hidden waste and sources of waste. Preferably, a current state map should
be drawn to document how things actually proceed in the process. Thereafter, a future
state map should be developed to shape a lean process which has eliminated root
causes of waste.

Rich et al. (2006) defined the seven Value Stream Mapping tools as:

‐ Process Activity Mapping
‐ Supply Chain Responsiveness Matrix
‐ Product Variety Funnel
‐ Quality Filter Mapping
‐ Forrester Effect Mapping
‐ Decision Point Analysis
‐ Overall Structure Maps
MTBF = Uptime / Number of Stops    Equation 10

MTTR = Equipment Downtime / Number of Stops    Equation 11
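The two measures in Equations 10 and 11 are straightforward to compute from a stop log; the sketch below uses illustrative numbers chosen here, not data from the plant:

```python
def mtbf(uptime, number_of_stops):
    """Equation 10: Mean Time Between Failures = uptime / number of stops."""
    return uptime / number_of_stops

def mttr(equipment_downtime, number_of_stops):
    """Equation 11: Mean Time To Repair = downtime / number of stops."""
    return equipment_downtime / number_of_stops

# Illustrative example: over a 720 h month an asset logged 6 stops
# with a total of 18 h of downtime
print(mtbf(720 - 18, 6))  # 117.0 (hours of uptime per failure, on average)
print(mttr(18, 6))        # 3.0 (hours per repair, on average)
```

Both are averages over the same stop count, which is why they are best trended over time on critical assets rather than read as single snapshots.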
2.7 RACI MATRIX

In a large organisation where responsibilities are divided between several parties and
decisions impact many core functions, it is important that responsibilities are clear
and involve different parties across the firm, especially in the decision-making
process. The RACI Matrix is a method to manage decision allocation processes (see
Table 1). RACI is an acronym for Responsibility, Accountability, Consulting and
Information and stands for the different roles in the decision process. (Dressler, 2004)

Dressler (2004) defines the building blocks of the RACI Matrix as follows:

‐ Responsibility (R) – The role responsible for decisions that fall under their area of
  responsibility within the organisation. This is an active and important role in the
  decision making process.

‐ Accountability (A) – This role is the person in charge of the individual taking on
  the "Responsibility" role and carries the accountability for the decision made.

‐ Consulting (C) – This role is not accountable or responsible for the consequences
  of the decision made but shall be consulted in the decision making process.

‐ Information (I) – This group includes other persons in the organisation that will
  be impacted by the decision and its outcome, and who shall be kept in the
  information loop.

According to Dressler (2004), the RACI Matrix is used by many effective
organisations to clarify ambiguous decision areas and solve decision conflicts upfront.
This gives people a clear understanding of their roles in regards to contributing to an
efficient decision making process.
Table 1 – An example of how a RACI Matrix can be created. R=Responsible, A=Accountable, C=Consulted, I=Informed.

             Person A   Person B   Person C   Person D   Person E
Decision A      A          C          I          R
Decision B      R          A          I          C
Decision C      C          R          A          I
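The role lookup that a RACI Matrix supports can be sketched as a small mapping, here in Python. The assignment of letters to specific persons follows the values in Table 1 read left to right (Person E left unassigned), and the function name is illustrative, not from the thesis.

```python
# A minimal RACI Matrix sketch based on Table 1.
# Which person holds which letter is illustrative, not authoritative.
RACI = {
    "Decision A": {"Person A": "A", "Person B": "C", "Person C": "I", "Person D": "R"},
    "Decision B": {"Person A": "R", "Person B": "A", "Person C": "I", "Person D": "C"},
    "Decision C": {"Person A": "C", "Person B": "R", "Person C": "A", "Person D": "I"},
}

def holders(decision, role):
    """Return everyone holding a given role (R, A, C or I) for a decision."""
    return [person for person, r in RACI[decision].items() if r == role]

print(holders("Decision A", "R"))  # Person D is Responsible for Decision A
print(holders("Decision B", "C"))  # Person D must be consulted on Decision B
```

A table like this makes the "solve decision conflicts upfront" idea concrete: a query per decision and role either returns the agreed name or exposes a gap in the matrix.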
2.8 PLATINUM

Platinum was discovered in 1735 in South America by Ulloa and can be found occurring naturally, accompanied by small quantities of iridium, osmium, palladium, ruthenium and rhodium, all of which belong to the same group of metals, the Platinum Group Metals (PGM) (The PGM Database, 2012). Platinum is one of the rarest elements in the Earth's crust and has an average abundance of approximately 5 μg/kg. Other precious metals like gold, copper and nickel denote concentration in ores in percentages, but platinum denotes this in parts per million. Based on a typical conversion rate of 25%, 14 tons of ore are required to produce 10 grams of platinum. (Probert, 2012)

Figure 5 – Platinum in the periodic system (element box: atomic number 78, symbol Pt, atomic weight 195.08)

Platinum, iridium and osmium are the densest known metals. Platinum is 11% denser than gold and about twice the weight of the same volume of silver or lead. Platinum is soft, ductile and resistant to oxidation and high temperature corrosion. It has widespread catalytic uses. (Platinum Today, 2012)

In 2009, approximately 45% of the world's platinum was used in automotive catalytic converters, which reduce noxious emissions from vehicles. Jewellery accounted for 39% of demand and industrial uses accounted for the rest. (Anglo Platinum, 2011) Examples of its industrial uses are high-temperature electric furnaces, coating missile nose cones, jet engine fuel nozzles and gas-turbine blades. These components must perform reliably for long periods of time at high temperatures under oxidising conditions. Platinum is also used as a catalyst in cracking petroleum products. Currently there is a high interest in the use of platinum as a catalyst in fuel cells and in antipollution devices for automobiles. (The PGM Database, 2012)

The price of platinum has varied widely; in the 1890's it was cheap enough to be used to adulterate gold. But in 1920, platinum was nearly eight times as valuable as gold. The spot price on 2012-08-15 was approximately 1395 USD/oz t (1 oz t = 31.103 g), which can be compared to the gold price for the same day: 1594 USD/oz t (Kitco, 2012).

A comparison between platinum and its more well-known periodic table neighbour gold can be viewed in Table 2.

Table 2 – A comparison between platinum and gold. Freely from The PGM Database (2012)

                                         Platinum    Gold
Chemical symbol                          Pt          Au
Atomic number                            78          79
Atomic weight                            195.084     196.967
Density (g/cm3)                          21.45       19.30
Melting point (°C)                       1769        1064
Vickers hardness (MPa)                   549         216
Electrical resistivity (nohm-cm, 20°C)   105         22.14
Tensile strength (MPa)                   125-240     120
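The ore-to-metal arithmetic quoted from Probert (2012) above can be checked with a short calculation. The in-situ grade below (~2.86 g/t) is not stated in the text; it is back-derived from the quoted figures (10 g of platinum from 14 t of ore at a 25% conversion rate), so treat it as an illustrative assumption rather than a sourced value.

```python
# Back-derived, illustrative grade implied by the text's figures:
# 10 g of platinum from 14 t of ore at a 25% conversion rate.
grade_g_per_t = 10 / (14 * 0.25)  # ~2.86 g of platinum per tonne of ore

def ore_required_tonnes(platinum_g, grade_g_per_t, conversion_rate):
    """Tonnes of ore needed to recover a given mass of platinum."""
    return platinum_g / (grade_g_per_t * conversion_rate)

print(round(ore_required_tonnes(10, grade_g_per_t, 0.25), 1))  # ~14.0 tonnes
```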
2.9 EXTRACTION OF PLATINUM-GROUP METALS (PGMs)

To get pure platinum a long process has to be followed. The extraction of platinum-group metals is described by Crundwell et al. (2011) in the following five steps:

‐ Step one is to mine ore with a high concentration of platinum-group metals while leaving rock lean in platinum-group metals behind.

‐ Step two is to comminute the mined ore into powder and isolate the platinum-group elements in the ore by creating a flotation concentrate consisting of nickel-copper-iron sulfides that has a high content of platinum-group elements.

‐ Step three is to smelt and convert this concentrate to a nickel-copper sulphide matte that is richer than the concentrate in platinum-group metals.

‐ Step four is to separate the platinum-group elements in the converter matte, either through magnetic concentration or by leaching, producing a very rich platinum-group metal concentrate containing about 60% platinum-group elements.

‐ The last step is to refine this concentrate to individual platinum-group metals with purities in excess of 99.9%.

In general the concentrating and smelting/converting are done in or near the mining region, while the refining is done in the region or in distant refineries. (Crundwell et al., 2011)
16 |
Chalmers University of Technology | CHAPTER 3 – Methodology
3.1 RESEARCH STRATEGY

The research has been conducted in collaboration with the stakeholders at Mogalakwena North Concentrator (MNC), in order to jointly solve the problem. This is, according to Bryman and Bell (2011), a typical case where action research should be chosen as a research strategy, since answering the research questions demands an iterative process. Action research also allows the researchers to keep an open mind toward the problem at hand and to go back and forth between theory and practice in order to compare the results from data collection with theory, and to generate a thorough analysis which can be revised.

The assumption in action research is that the natural surroundings, in which the problem occurs, are the best place to study the problem. The data collection associated with action research includes both qualitative and quantitative methods. Qualitative research is research in which words are more emphasised than quantities during data collection, whereas quantitative research puts more emphasis on quantified data.

GROUNDED THEORY

A proper methodology when conducting action research is to use Grounded theory, which is defined as theory that is "derived from data, systematically gathered and analysed through the research process" (Strauss & Corbin, 1998, p. 12). This implies that data collection, analysis and eventual theory are closely related in an iterative process. Strauss and Corbin (1998) claim that grounded theories, since they are drawn from data, are likely to offer insight, enhance understanding and act as a meaningful guide.

LONG-TERM SUCCESS

Action research is also an appropriate strategy when looking at the long-term success of the project, long after the completion of the thesis. In order for the new activities to be deeply rooted within the organisation it is very beneficial that key power groups within the organisation have been a part of the improvement process from the beginning (Nadler & Tushman, 1997). This strategy also makes it easier for the researchers to truly understand the system since it demands a lot of interaction between the different stakeholders (Bryman & Bell, 2011).

3.2 RESEARCH APPROACH

The chosen approach for this thesis is an abductive one. When using an abductive approach, theory, empirical data and analysis are developed simultaneously in an iterative process (Bryman & Bell, 2011). This research approach is suitable when using action research since it involves the mind-set of involving theory and empirical information at the same time in order to develop an analysis that will answer the research questions. As the understanding of the situation increases, the need for new theory will most likely occur, and this suits the description of an abductive approach very well.

3.3 THEORY

Before travelling down to South Africa the pre-study was initiated in order for the researchers to gain the basic knowledge needed to understand the problem and become familiar with the available process data and the available theories in the area. Since Anglo American Platinum already used
OEE as a measure, the study of OEE and therefore also TPM became a natural part of the studies. TPM also led the path to studies of other maintenance fields, such as Lean maintenance, which is closely linked to improving productivity. The studies of OEE were broadened by studying the economic factors associated with improving OEE numbers, which also provided arguments for working with OEE improvements.

After core studies regarding the process in general and TPM methodologies, methods for identifying bottlenecks, such as Value Stream Mapping, were studied more closely since they offered a very clear path to evaluate the production chain and direct the improvements to the right sections.

The remaining part of the literature study was conducted during the empirical study on site in cases where the empirical findings resulted in new theoretical aspects to study. This is also in line with the chosen abductive approach (Bryman & Bell, 2011).

3.3.1 DATA COLLECTION METHODS

The data needed for this thesis has mainly been collected through interviews, observations and by using secondary data. Most of the quantitative data was collected through gathering data from the PI database, whereas the qualitative data was collected through semi-structured interviews, which is also the usual first step of engagement in the action research approach (Scheinberg, 2009).

OBSERVATIONS

In order for the researchers to acquire their own understanding of the situation, observations were conducted on site. This helped with the understanding of the specific steps in the production process as well as the daily work and methods used. The observations also acted as indications of what theoretical fields were interesting for further study and in that way led to a more focused literature study.

INTERVIEWS

A major part of the qualitative data was collected through numerous interviews with persons involved in the production process as well as management. The character of the interview was dependent on the position of the interviewee and the type of information sought (qualitative or quantitative). The basic approach was an unstructured interview in order for the interviewee to further elaborate on the questions asked. In an unstructured interview, the interviewers do not follow a strict structure of questions, but instead might have only one or a few questions to answer. The interview therefore resembles a conversation. (Bryman & Bell, 2011)

SECONDARY DATA

The major part of the data is secondary in the sense that it stems from information from metallurgists and specialists, and that no long-term observations beside the machines have been conducted. Still, the data is in most cases measured over a long period of time, which in a sense increases its accuracy. This gathering of data also limited the cost of the project, though the information might be difficult to understand and interpret, and there is always a risk that some important information may be left out of the material handed to the researchers (Bryman & Bell, 2011). The process data used for calculation can also be seen as secondary data. The process data comes from the PI database, where hundreds of thousands of different measurement points are logged.
3.4 RELIABILITY

The reliability concerns the results of the project and whether they are repeatable or not (Bryman & Bell, 2011). Achieving high reliability when using action research is a difficult task since the purpose is to change people's mind-set and the environment in which they act. By taking field notes throughout the entire project and also by keeping a diary, the researchers' intention has been to write down all important aspects of the thesis. The research is based on a combination of quantitative and qualitative data compared with existing methodologies within the area of increasing productivity. This approach ensures that the results are best practice from a theoretical point of view applied in the specific context.

3.5 VALIDITY

The validity of the project deals with the issue of whether the right aspects were studied in order to answer the research questions. Bryman and Bell (2011) propose to measure four different aspects in order to determine the validity of the thesis: construct validity, internal validity, external validity and ecological validity.

3.5.1 CONSTRUCT VALIDITY

The construct validity is regarded as high since data triangulation was used in cases where previous measurements were compared with data collection on site. Since the researchers have spent extensive time on site, the possibility of measuring critical aspects on several occasions was good.

3.5.2 INTERNAL VALIDITY

The internal validity deals with the issue of causality. The internal validity is highly relevant to this thesis since one research question aims to explore how certain changes affect the productivity of the process. The cause and effect relations are closely linked to the internal validity and these have been tested through triangulation and pattern matching. The validity has also been ensured by using well-known methods and tools such as TPM, OEE and Lean.

3.5.3 EXTERNAL VALIDITY

At first, the external validity can be regarded as rather low since the study aims at improving the specific site in question. However, one of the research questions (number 1, see section 1.6) deals with how to develop a method, applicable at a general plant, to define and rank process units, which gives the thesis an increased external validity. To create an externally valid thesis, proven methods have been used and general equations and definitions have been presented. This will help others to interpret the content of the thesis into other contexts and therefore increase the external validity. The aim for the researchers has been to develop an aggregated method which is generic and can be successfully implemented at similar plants.

3.5.4 ECOLOGICAL VALIDITY

The ecological validity concerns whether the findings are applicable to everyday life. In this case, the findings are highly applicable to day-to-day operations in the process. Since the data has been collected from the daily operations and improvements have been made in the actual production equipment, even though on a small scale at first, the ecological validity has to be considered as high. The pitfall might be that of the Hawthorne studies, which implies that people perform better
20 |
Chalmers University of Technology | CHAPTER 4 - Data
objects in a path predetermined by the design of the conveyor. The conveyor can be horizontal, inclined or vertical in its design. At MNC, all bulk transports are performed by conveyors and the total length of all conveyors is approximately 9000 metres.

CLASSIFIER

A classifier is a unit which classifies the material physically by separating it based on its particle size. At a concentrator this can be performed by, for instance, a grizzly, screen or cyclone, all of which have the same purpose but perform the separation of material in different ways.

FEEDER

A feeder is a unit that puts material in motion. Its purpose is to regulate the amount of material that, for example, is fed into a crusher or from a storage silo onto a conveyor.

4.3.4 AREA AFFILIATIONS OF UNITS

The following section will present the name and type of the units included in the project based on their area affiliation. The presentation order of the following tables is equal to the material flow order in the process.
AREA 102

Table 4 – The name and number of units in process area 102

TYPE OF UNIT       NO. OF UNITS   NAME
Comminution unit   1              102-CR-001 Primary crusher
Feeder             2              102-FE-001 & 102-FE-002
Conveyor           2              102-CV-001 & 102-CV-002
Classifier         0              -

AREA 401

Table 5 – The name and number of units in process area 401

TYPE OF UNIT       NO. OF UNITS   NAME
Comminution unit   0              -
Feeder             6              401-FE-001 – 401-FE-006
Conveyor           1              401-CV-001
Classifier         1              401-GY-001 Grizzly

AREA 405

Table 6 – The name and number of units in process area 405

TYPE OF UNIT       NO. OF UNITS   NAME
Comminution unit   3              405-CR-001 – 405-CR-003 Secondary Crushers 1, 2, 3
Feeder             5              405-FE-001 – 405-FE-005
Conveyor           6              405-CV-001 – 405-CV-006
Classifier         2              405-SC-001 & 405-SC-002 Secondary Screen 1 & 2
27 |
Chalmers University of Technology | CHAPTER 5 - Results
CHAPTER 5 – INTRODUCTION TO RESULTS

Figure 7 – Project phases:
1. Define a calculation model for OEE & other equipment performance metrics in a single stream process.
2. Develop a tool that calculates OEE & other equipment performance metrics in real time.
3. Design a method describing how to use the tool output in the organisation with primary focus on finding root causes.

The results are presented according to the three distinct project phases (see Figure 7). The first phase was to define a calculation model for OEE and other equipment performance metrics. The second phase was to develop a tool (OPT) which uses the calculation model to perform real time calculations of OEE and other equipment performance metrics. The third phase was to design a methodology to use in the organisation to find root causes to productivity limiting issues. Moreover, this chapter includes a presentation of how to calculate OEE in a General Single Stream Process and results from the new procedure for Crusher and Mill Stops Reporting that was introduced at MNC by the two Master's thesis writers.

5.1 CALCULATION MODEL

In the following section, the results of the first phase (see Figure 8) of this project will be presented. This part contains the calculation model for OEE and other equipment performance metrics in a 24/7 single stream comminution process.

Figure 8 – Project phase 1 – Define a calculation model

The calculation model covers five metrics: OEE, Availability, Utilised Uptime, Mean Time Between Failures (MTBF) & Mean Time To Repair (MTTR), and Pain (see Figure 9). The final calculations for these metrics will be presented in the following sections. The calculations are based on the Anglo American Equipment Performance Metrics definitions and have taken inspiration from a customised calculation model developed by the Master's thesis writers (see section 5.4) to suit the process of MNC, since the Anglo American definition is not comprehensive enough to be used in this process. However, since it is a company standard, the aim has been to use it as extensively as possible and it has been a wish from the company not to deviate from the standard whenever possible.
Figure 9 – Equipment metrics included in the calculation model: OEE (Overall Utilisation, Performance, Quality), Availability, Utilised Uptime, MTBF & MTTR, and Process Pain
5.1.1 FINAL OEE CALCULATION

The final OEE equations used in all calculations will be presented in this section. The metric consists of three parts – Overall Utilisation, Performance and Quality (see Equation 12). The three components of OEE can also be used as individual metrics. The components of the OEE equation are presented in the following sections.

OEE = Overall Utilisation × Performance × Quality    (Equation 12)

OVERALL UTILISATION

To determine the utilised time of a unit, the ratio between Primary production time and Total time is used (see Equation 13). Primary production time is defined as "Time equipment is utilised for production" and Total time is defined as "The total possible hours available". These definitions are taken from the Anglo American Equipment Performance Metrics. For explanations of the equation components, see Figure 10.

Overall Utilisation = Primary Production / Total time    (Equation 13)

To clarify, Overall Utilisation is the metric corresponding to the general OEE calculation's measure known as Availability, displaying the equipment usage, however not calculated in the same way or with the same result.

PERFORMANCE

Equipment Performance is calculated according to the Anglo American Equipment Performance Metrics as the ratio between Actual Production Rate and Target Production Rate (see Equation 14). The Performance essentially indicates how efficiently the unit has been working, i.e. to what degree the unit has been doing things in the correct way.

Performance = Actual Production Rate / Target Production Rate
            = (Actual Production Achieved / Primary Production) / Target Production Rate    (Equation 14)
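The chain from Equations 12–14 can be sketched as a small calculator. The variable names mirror the Anglo American terms used above, but the figures in the example are invented for illustration.

```python
def oee(primary_production, total_time, actual_production_achieved,
        target_production_rate, quality):
    """OEE per Equations 12-14: Overall Utilisation x Performance x Quality."""
    overall_utilisation = primary_production / total_time          # Equation 13
    actual_rate = actual_production_achieved / primary_production  # e.g. tonnes per hour
    performance = actual_rate / target_production_rate             # Equation 14
    return overall_utilisation * performance * quality             # Equation 12

# Illustrative week for one unit: 140 h of primary production out of 168 h,
# 1200 t achieved against a 10 t/h target rate, and 95% quality.
print(round(oee(140, 168, 1200, 10, 0.95), 3))  # 0.679
```

Note that Performance compares rates, not totals, so a unit that ran rarely but at full rate can still score 100% Performance; the lost time shows up in Overall Utilisation instead.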
Figure 10 – Time definitions by Anglo American Equipment Performance Metrics (freely after the Anglo American Equipment Performance Metrics Time Model)
The Actual Production Rate is the ratio between Actual Production Achieved and Primary Production, which both can be calculated with data drawn from the PI database. The Target Production Rate, however, is an input measure which has to be defined for every single unit.

QUALITY

The method to calculate the product Quality was developed by the Master's thesis writers. The Quality looks at the particle size and shows to what extent the particle size is below the targeted size. The target and actual particle size concern the P80 value, which is a commonly used value in the comminution industry. P80 is defined as the size where 80 percent of the material passes a certain upper size limit. The equation compares the Actual Particle Size at a certain point in the process to the Target Particle Size for that point (see Equation 15).

The Quality is defined as the mean deviation above Target Size as a percentage of the Target Size (see Figure 11). This implies that all particles below target size result in zero deviation, hence a 100% Quality. To achieve the metric Quality suitable for the OEE calculation, and not the deviation, the ratio is subtracted from 1. For instance, if all particles are below Target Size, the Quality will be 100%. If some particles are above Target Size, the equation will compute their individual deviation from the Target Size. Together with the particle sizes below the Target Size, which all are regarded as having no

Quality = 1 − (Mean deviation from Target Size / Target Size)
        = 1 − ((1/n) · Σ_{i=1..n} Deviation from target size_i) / Target Size    (Equation 15)
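Equation 15 can be sketched as follows, assuming (as the text implies) that particles at or below Target Size contribute zero deviation. The sample sizes are invented for illustration.

```python
def quality(particle_sizes, target_size):
    """Quality per Equation 15: 1 minus the mean deviation above Target Size,
    expressed as a fraction of Target Size. Sizes at or below target deviate by zero."""
    deviations = [max(0.0, size - target_size) for size in particle_sizes]
    mean_deviation = sum(deviations) / len(deviations)
    return 1.0 - mean_deviation / target_size

print(quality([70, 80, 75], 80))        # all at or below target -> 1.0 (100%)
print(quality([90, 80, 70, 100], 80))   # some oversize -> 0.90625
```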
Availability = Uptime / Total Time    (Equation 16)

Utilised Uptime = Overall Utilisation / Availability    (Equation 17)

MTBF = Uptime / Number of stops    (Equation 18)

MTTR = Equipment Downtime Time / Number of stops    (Equation 19)
5.1.3 UTILISED UPTIME

The Master's thesis writers have developed a new metric to display the ratio between Overall Utilisation and Availability, called Utilised Uptime (see Equation 17). This is the third metric in the calculation model. The Utilised Uptime shows the percentage of the available time that is used for Primary Production. This metric is possible to compute for the units where Overall Utilisation can be calculated.

5.1.4 MTBF & MTTR

The fourth metric in the calculation model is the combination of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR) (see Equations 18 and 19). These two metrics are calculated to give an indication of asset reliability and the quality of maintenance work. For explanations of the equation components, see Figure 10.

5.1.5 PAIN ANALYSIS

The fifth metric in the calculation model is Pain. In order to help focus the efforts in the daily work at the plant, this analysis tool, which includes a new way of displaying failures, has been developed by the Master's thesis writers. The analysis provides a metric called Pain and looks at how much pain a certain stop has caused a unit. Instead of the user being forced to look at and compare both the frequency of a stop and its downtime in order to find the most painful problem, Pain can be used to give an aggregated view.

The Pain is calculated as the product of the frequency of the stop and the total downtime caused by the stop (see Equation 20). This gives a total view of the pain the problem has caused. The unit of Pain is time (minutes) but can be regarded as of minor importance since it does not display an actual downtime but the sum of all downtimes multiplied by the number of stoppages. Therefore, Pain is used as a unit-less metric and displayed unit-less in the tool.
Pain = Frequency of error × Total downtime caused by error
     = n_failure,type_x × t_failure,type_x    (Equation 20)

where n_failure,type_x is the number of stops of failure type x and the total downtime is
t_failure,type_x = t_failure,type_x,1 + t_failure,type_x,2 + … + t_failure,type_x,n
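Equation 20 can be sketched as below; the stop log is invented for illustration, and sorting by Pain surfaces the most painful failure type, giving the aggregated view the section describes.

```python
def pain(stop_durations_min):
    """Pain per Equation 20: number of stops times their total downtime."""
    return len(stop_durations_min) * sum(stop_durations_min)

# Illustrative stop log per failure type (downtimes in minutes).
stops = {
    "belt slip":     [5, 5, 5, 5, 5, 5, 5, 5],  # frequent but short
    "motor failure": [240],                     # rare but long
}
ranked = sorted(stops, key=lambda k: pain(stops[k]), reverse=True)
print({k: pain(stops[k]) for k in ranked})  # belt slip: 8*40 = 320 > motor failure: 240
```

The example shows why the metric is useful: eight five-minute slips cost less raw downtime than one 240-minute breakdown, yet score higher Pain because their frequency multiplies their total.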