Figure 4.4: (Left) The resulting control signals for the two screen feeders (Feeder 1 and Feeder 2 CTRL-signals, Control signal [%] over time [s]). (Right) The level in the screen bin (levels 405 and 406, Level [%] over time [s]).
control the level to 50 % for both sides of the bin. In Figure 4.5 the inflow rate and
the circulating load are plotted. The circulating load stabilizes after about 2 hours
of the simulation, and the inflow tries to regulate the level in the HPGR bin. The
settling time of that control loop is very long, which adds to the earlier reasoning
regarding the level in the HPGR bin. The circuit inflow was saturated between 280
and 1900 [tph].
One important requirement when using the HPGR is that the crusher is choke fed.
In Figure 4.6 the weight of the chute for the 8-hour simulation is shown; the
interlock function stops the feeders if the weight increases above 10 tons and changes
the HPGR speed to 25 % if it drops below 2 tons. During the simulation, the level
oscillates in the beginning and stabilizes to the set-point of 6 [tons] after about
1.5 hours.
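A minimal sketch of this interlock logic is given below; the function and signal names are hypothetical, not the tags used in the plant control system.

function [feedersRun, hpgrSpeedPct] = chuteInterlock(chuteWeight, hpgrSpeedCmd)
% Sketch of the chute-weight interlock described above (names hypothetical):
% stop the feeders above 10 tons, force 25 % HPGR speed below 2 tons,
% otherwise pass the controller's speed command through unchanged.
feedersRun   = true;
hpgrSpeedPct = hpgrSpeedCmd;   % normal operation: follow the controller
if chuteWeight > 10            % chute over-filled: stop the feeders
    feedersRun = false;
elseif chuteWeight < 2         % choke feed at risk: slow the HPGR to 25 %
    hpgrSpeedPct = 25;
end
end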
The performance of the control layer was not compared to logged control signals
from the actual plant for any validation purposes. A comparison of this type would
be fruitful in assessing how well the replicated setup corresponds to the actual one.
The version of the control system simulated in this work shows that the plant model
can be controlled into its steady-state level for the given inputs. The response time
of some of the controllers is very slow, and no attempt has been made to reduce
it. Reducing it could, however, be beneficial for the real plant, making it more
flexible in start and stop situations. The actual plant currently has a start-up
time of 45 to 60 minutes; the simulated HPGR controller reaches its set-point after
roughly 1.5 hours. This indicates that the replicated control system is slower than
the actual plant.
4.3 MPC controller performance
The MPC controller was used to investigate a potential upside of the production
at section 406 at Mogalakwena North by the introduction of a new controller. The
plots are the result of a simulation using the current circuit configuration adapted
for use with an MPC controller. This includes bypassing the coupling between the
inflow feeders and the level in the HPGR bin and introducing a new PID controller
that controls the screen feeders. Since there is no weightometer available around
the screen feeders, a model was used to supply the PID controller with a process
value.
The HPGR was limited to 3100 [tph] as that is currently the maximum load the
product conveyor can take. The HPGR bin feeders and the screen bin feeder were
also allowed to throttle up to 3100 [tph]. Figures 4.7-4.11 show the controlled
variables, the output and the computation time for the controller for an 8-hour simulation.
The simulations were carried out on a Dell Latitude e5430 with an i7-3540M
processor and 16 GB of RAM running MATLAB Simulink 2017a [19].
The simulation results show that the MPC controller can control the circuit over
a longer period. The simulated case includes a maximization criterion which pushes
the controls to their maximum and would risk instability in a real-world situation.
This should be kept in mind, especially since the size of the prediction horizon for
this specific circuit has not been thoroughly investigated in this work. Also, since
the HPGR and the screen feeder have the same output rate, the only way to reduce
the level in the screen bin when both controls are at maximum is to cut back on the HPGR.
The rise time when using the MPC to feed set-points is fairly long in this case;
additionally, since the MPC was in control from the start of the simulation, the
control actions were limited so as not to risk instability. In Figure 4.7 the output
oscillates in the beginning due to the controller supplying inconsistent set-points
and failing to find optimal solutions to the optimization problem. The output settles
to 1725 [tph] after a little over 2 hours of the simulation.
The bins were started at 50 % and 60 % filled respectively, and the screen bin is
the first to reach its set-point. This can be seen in Figure 4.9. The start-up of the
HPGR is slow; this is an effect of the care needed to increase the speed of the
HPGR without risking losing the level in the feed chute to the crusher. This causes
the MPC controller to overestimate the amount of material needed to supply the
feed bin, and the level grows to just under 90 % before the HPGR catches
up. This can be resolved by tuning of the controller, especially the slew rate. The
rise of the set-point and the output of the HPGR are plotted in Figure 4.8.
The inflow during the simulation is shown in Figure 4.10 and is the variable used
to make sure the level in the HPGR bin is kept close to its set-point. It over-predicts
the rate at the beginning of the simulation, for the reason mentioned above; after
that it settles and reaches a level of 1700 [tph].
The feeder PI-controller that uses the set-point is fast and the difference between
Figure 4.12: The weight of the chute [tons] during an 8 hour simulation with the MPC controller.
4.4 Evaluation of circuit changes
During site visits and discussions at the corporate office in Johannesburg, interest
was shown in investigating the effect of changing the screen deck apertures; recently
the apertures were changed to increase the throughput of the HPGR circuit. The
effect of different screen apertures on the circuit throughput has been simulated
in this work. The HPGR screens currently have an aperture of 10 by 10 mm when unworn.
4.4.1 Using the current control setup
Using the current control setup of the plant, except for the APC-based set-point
selector on the screen bin, Figure 4.13 can be generated. The HPGR throughput set-point
was set at 2400 [tph] with the rest of the circuit in its original configuration. The
screen apertures used for the simulation were 6, 8, 10 and 12 mm. The simulations
were started from standstill, and the results in Figure 4.13 on the left side
are therefore slightly lower since they include the startup sequence. Each simulation ran for
4 hours. Increasing the screen aperture increases the production rate, as expected.
The increase from 10 to 12 mm is larger than expected and should be subject to
investigation.
Figure 4.13: Simulation results from 4 different screen apertures for both a startup
sequence and a steady-state sequence. The panels show the mass flow [tph] over screen
aperture [mm] for weightometers WIT010B, WIT416 and WIT433.
4.4.2 Using the MPC
A similar exercise as with the PID setup was done with the developed MPC controller.
The results are very similar but show a larger throughput. The
large step between 10 and 12 [mm] is visible in these simulations as well. The MPC
simulations ran for longer than the PID setup: each simulation lasted 8 hours, and the
first 2 hours, covering the start-up sequence, were omitted when
calculating the steady-state values in the right plot of Figure 4.14.
4.4.3 Comparing the two controllers
In Figure 4.15 the two controllers are compared regarding the tonnage placed on the
circuit product belt, CV007. In general, the MPC controller can output more than
the PID-controlled circuit. The MPC had more capacity in terms of the crusher
and the feeders; moreover, having the set-point calculated by the controller once
every ten seconds is preferable to having an operator oversee it. Being surer that
the circuit is stable, in essence, allows for selecting a higher throughput as the
operating point. The results in Figure 4.15 show that there is a potential upside in
using a model predictive controller to supervise the set-points on circuit 406. The
main difference from the current control setup is that all three components affecting
the level in the HPGR bin are monitored by the controller, especially the circulating
load, which previously acted as a disturbance on the level in the bin.
5 Conclusion
This chapter presents the conclusions from the work within the master's thesis and
discusses some of the interesting aspects and future areas of work. The research
questions are also answered.
In this work, a time dynamic simulation model of section 406 at Mogalakwena North
Concentrator has been developed, tuned and validated. The model has also been used
to test the performance of the current control setup and a new, more advanced
controller. The new controller was compared to the current circuit configuration,
and after that against a baseline operating point of the current circuit, both as-is
and with changes to the circuit in the form of different screen apertures. The results
show a possible upside in circuit performance if an MPC controller is added to today's
control solution. The use of time dynamic models has proven useful and promising
for the development of controllers, testing circuit configurations, tuning controllers
and evaluating their performance over time.
5.1 The research questions
The following section answers the research questions stated in section 1.3. The
questions are answered one by one and restated before each answer.
5.1.1 Research question 1
• What type of model characteristics are required for time dynamic simulations?
The models used in time dynamic simulations can be of steady-state type; however,
if the equipment or process shows signs of a time-evolving response, and the previous
state of operation matters for future predictions, then time-varying components
should be included in the model. This holds true especially for the HPGR crusher
with its hydraulic system. The models need to be fast and must avoid iterative
calculations that could run into convergence problems or destabilize the model.
Time dynamic simulation models of the type built in this work easily become
complex systems; it is therefore a big advantage to keep the models simple, rather
introducing an extra tuning factor than an extra integral.
5.1.2 Research question 2
• How can large variation and uncertainties in incoming feed and machine wear
be handled in order to increase robustness of control system performance?
The performance of a control system, especially an MPC controller, depends on how
well it is tuned. There are many methods for tuning regular PID controllers. An MPC
controller, on the other hand, requires knowledge about the response of the controller
to a change in a certain parameter; there are rules of thumb for how to adjust,
for example, the R and Q matrices for a smoother or more aggressive response. The
key to successful tuning lies in having good knowledge of both the controller and
the process. It will in most cases require some trial and error, and it is
therefore very useful to have a simulation model to test the controller with.
The effect of variations can be handled, depending on the type of variation, by
estimation of disturbances or on-line estimation of parameters, such as the split ratio
at the screen. An on-line particle size analyzer could, for example, signal to the
control system whether more or less material is going to be circulated in the
near future. Machine wear can be handled in a similar manner, by estimating
critical parameters and logging maintenance. Increased understanding of the process,
and thereby knowledge of the effect of wear on the process, could potentially increase
the possibility of choosing the right parameters to tune. One advantage of having
access to a calibrated process model is the possibility to test at what stage the
controller becomes unable to control the process adequately; this was not tested in this
work but is certainly interesting for future work.
5.1.3 Research question 3
• How can model predictive control be applied to a crushing circuit simulation?
A controller for a flow-based system can be established by known techniques and
tested in a simulation environment, such as Simulink. With the software FORCES
Pro this can be achieved without having to write custom-made code for the
application.
Supervisory controllers are usually executed less frequently than a regular PID
controller, and it is therefore necessary to be able to handle different sampling times
within the system. This can be done in MATLAB using a structure that is triggered,
as sketched below. The controller sampling time needs to be an even multiple of the
global sampling time.
The first stage of applying this type of controller to a crushing circuit is to be able
to handle the flow. In future versions, the controller should be able to track and
simulate the bin content and have more adaptive behavior with regard to the quality
of the crushing. Using parametric state-space models where the coefficients can
be changed online means that there is an opportunity to adapt the behavior of the
controller if, for example, the output product size is decreasing or increasing because
of changes in the ore.
5.2 Discussion
The primary task of calibrating the dynamic model to correspond well to the
process data was very time-intensive, and the framework used for the calibration
process was presented in Chapter 3. For future modeling exercises that include process
calibration, this list will be utilized along with new ideas on how to complete
the process more time-efficiently. These kinds of models are likely to be in high
demand in the future, and a clear framework and set of reference points would be
very helpful.
The model calibration has been done on a data set of 36 hours and one set of
particle size data. The dynamic calibration is believed to be up to standards. The
prediction of particle size has been calibrated on one data set, which carries the risk
of only checking circuit correspondence at one point. Future work should address
this issue and obtain data from the circuit under different operating conditions to
strengthen the calibration result.
Model predictive control is a control method that has spread through the process
industry since the beginning of the 1970s, and there is no indication that it should not
continue to grow; the application areas within minerals processing are endless, and the
use of more effective and optimal systems is beneficial for the environment, the
companies and the workers in the long run. Platforms for development and testing
of new controllers without impacting the real process allow for a better chance
of deploying a successful new strategy. In terms of developing control schemes and
completing initial tuning of controllers, the time dynamic model has proven very useful.
The non-modeled advanced controller that selects the set-point for the screen bin
feeders may not be needed when using an MPC; the target value for the bin level can
be set low and the potential of overfilling the bin avoided. Overfilling can happen in
the simulation without consequence; on the real plant, however, the responsible control
engineer will have to decide on how much redundancy to consider in the system.
The MPC had problems finding optimal solutions at the beginning of the simulation;
this is thought to be due to the observer setup. The observer needs to run for
many iterations before it is fully initialized. Starting only the observer and letting
the circuit be brought up to an idling operating point, before letting the MPC start
to supply set-points, should solve the feasibility problem. In an actual real-world
implementation of an advanced controller, a security layer is used to handle these
kinds of problems. This layer also acts as a backup if the MPC is unable to find
a solution within the time constraint, in this case 10 seconds.
During the commissioning of the Mogalakwena North Concentrator the basic control
layer was initially designed, and in the case of the HPGR circuit, the control loops
around the chute feeding arrangement have been updated once. Tuning of the current
control setup could potentially increase the benefit of the circuit and improve
its performance. These controllers work well in their current configuration; however,
a speed-up of, for example, the start-up sequence could probably be achieved. This
would lower the response time during start-up, and the response to a roller speed
set-point change could be improved in a way that potentially benefits the circuit.
The results from comparing different screen decks look promising; the potential
benefit in the downstream milling circuit from a decrease in particle size should be
investigated. The results for the 6, 8 and 10 mm screen decks look fairly good, while
the 12 mm deck shows a very large step in throughput; this could potentially be
an effect of only using one data set for calibration of the PSD prediction. Another
possible explanation might be that the fresh feed has been too sharp, causing
the crusher product to become bimodal. This should also be investigated to make sure
that the results used for potential future decision making are as accurate as possible.
5.3 Future work
This work has spanned many areas, and there are therefore also many extensions
and ideas to work with in the future. They are described in this section in a similar
structure, split into modeling and control.
The method used to predict the particle size in the current model should be updated
and compared to other methods used in the research community, and it should be
investigated whether there is a way to adapt the particle size prediction test and
methodology by Evertsson [15] to describe the HPGR as accurately as the cone crusher.
The hydraulic system of the HPGR is an interesting component that should be subject
to future modeling, including a pressure model that feeds back into the model
as a pressure component. The Mogalakwena North HPGR has an active dampening
system. Modeling and understanding the hydraulic system may eventually lead
to an opportunity to integrate the particle size model with the operation of the
machine's hydraulic system, to achieve better comminution and ultimately increase
machine utilization.
An advanced controller should contain state-of-the-art models for the application
and support updating the model depending on the movement of the process; the tool
used for the controller development allows the model parameters to be updated for
each stage of the optimization and each prediction stage. A controller could use the
model to linearize the process at each time instance, possibly also using the future
predictions of the controller to linearize around the predicted operating point. Such
an approach would create a very adaptive control scheme that, given a calibrated
model, would be very responsive to changes in the process. Additional work should
be directed towards research on including particle size, in other words product
quality, in the control system and exploring the opportunities that arise from being
able to control and keep track of process quality in greater detail.
The relationship between a successful controller and the model calibration is an
interesting one. A correct understanding of it could potentially save time, by setting
the effort spent on model calibration in parity with the results expected from the
controller and the controller tuning.
A Pressure response experiments
A.1 Presentation of the pressure response results for the Mogalakwena ore
The pressure response experiments consisted of six tests, T1 to T6, where two different
top sizes were compressed. For each top size, the width of the distribution was varied
in three steps: a mono-size, a steep and a wide distribution. The distributions are
shown in Figure A.2. Each distribution was prepared by hand by mixing different
amounts of the sieve size classes to obtain the size distributions. The sample was
mixed and then placed in the piston and die. The piston and die have a diameter of
100 mm, and the target bed height was 68 mm. Each sample was compressed to
150 [MPa], and the response was recorded. Table A.1 shows the parameters calculated
from the fitting to the exponential function, along with data regarding particle size
and width of the distributions for each test. The resulting experiment output data
and the fitted models are graphed in Figure A.1.
Table A.1: Calibration of the force response model from the material

Test  a           b         c          d        x50     σn
T1    1.845       6.4230    0.0004     23.828   39.100  0
T2    9.930e-06   34.1187   1.8321     8.1587   28.00   0.295
T3    1.2767      10.6092   0.00087    28.8752  17.700  0.6625
T4    1.7058      6.5781    0.0003869  25.2013  26.800  0
T5    0.000510    26.73268  1.4148     8.29135  20.90   0.4444
T6    0.00014481  34.2038   1.4792     11.1912  14.300  0.5594
B MPC setup
The bounds on the states and controls, expressed in [tph], [tons] or [tph/10 s], are:

x_{1-64,min} = 0 ≤ x_{1-64} ≤ 5000 = x_{1-64,max}   [tph]
x_{65-67,min} = 0 ≤ x_{65-67} ≤ 200 = x_{65-67,max}   [tons]
u_{1,min} = 280 ≤ u_1 ≤ 2780 = u_{1,max}   [tph]
u_{2,min} = 920 ≤ u_2 ≤ 3100 = u_{2,max}   [tph]
u_{3,min} = 0 ≤ u_3 ≤ 3100 = u_{3,max}   [tph]        (B.2)
Δu_{1,min} = -30 ≤ Δu_1 ≤ 30 = Δu_{1,max}   [tph/10 s]
Δu_{2,min} = -15 ≤ Δu_2 ≤ 15 = Δu_{2,max}   [tph/10 s]
Δu_{3,min} = -30 ≤ Δu_3 ≤ 30 = Δu_{3,max}   [tph/10 s]
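As a minimal sketch, the bounds in Equation (B.2) can be collected into vectors for a solver interface; the variable names below are illustrative assumptions.

% Bound vectors for the decision variables, following Equation (B.2).
x_lb  = [zeros(64,1);     zeros(3,1)];   % x_1-64 [tph], x_65-67 [tons]
x_ub  = [5000*ones(64,1); 200*ones(3,1)];
u_lb  = [280; 920; 0];                   % u_1, u_2, u_3 lower bounds [tph]
u_ub  = [2780; 3100; 3100];              % u_1, u_2, u_3 upper bounds [tph]
du_lb = [-30; -15; -30];                 % rate-of-change limits [tph/10 s]
du_ub = [ 30;  15;  30];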
The controller tuning was shifted slightly depending on the simulation case. The
original philosophy was to penalize changes in control action and make the inflow
set-point cheap, and the HPGR and the screen bin more expensive, to change. The
weighting on keeping the bin set-points was kept at one. No penalization of high
control signals was implemented since, in a maximization problem, we would like the
control signals to be as high as possible without violating the constraints.
The matrix H in the control objective in Equation 3.15 consists of the entries R and Q:

H = \begin{bmatrix} R & 0 & 0 \\ 0 & Q & 0 \\ 0 & 0 & 0 \end{bmatrix}   (B.3)
R penalizes the change in the control signals and Q the deviation of the states from
the set-points. In this case, Q only has non-zero entries at positions (67,67) and
(68,68), which are the screen and HPGR bin level states.
The first entry in R was 100 times smaller than the two remaining diagonal entries,
implying that a change of the inflow is penalized less than a change of the screen
feeders or the HPGR set-point.
The linear term f in the objective is a vector with negative entries at positions
(64,1), (66,1) and (67,1). For the set-points to be correct, the two set-points
for the bins are multiplied by 2.
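A minimal sketch of assembling this cost in MATLAB is given below; the non-zero Q indices follow the text above, while the dimensions and weight values are illustrative assumptions.

% Assemble H = blkdiag(R, Q, 0) as in Equation (B.3).
R = diag([1, 100, 100]);   % first entry 100x smaller: inflow moves are cheap
Q = zeros(68);
Q(67,67) = 1;              % screen bin level deviation
Q(68,68) = 1;              % HPGR bin level deviation
H = blkdiag(R, Q, 0);      % trailing zero block as in Equation (B.3)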
Electrification of the heat treatment process for iron ore pelletization at LKAB
ERIK LINDÉN
EMIL THUREBORN
Department of Space, Earth and Environment
Division of Energy Technology
Chalmers University of Technology
Abstract
LKAB is a Swedish state-owned mining company that extracts and refines iron ore for the
global steel industry. Their main product is iron ore pellets, which account for around
83 % of LKAB's iron ore products. The straight-grate process is used for heat treatment
of the pellets and is the most energy-intensive part of the refining process. The heat is
supplied by fossil fuel burners with considerable emissions of greenhouse gases and
pollutants such as nitrogen oxides, which should be reduced to comply with emission
targets set by LKAB as well as the Swedish government.
This project investigates electrical heating alternatives in the form of plasma torches and
microwaves to replace the fossil fuel burners, providing potential for a CO2-neutral
production process. The work focuses on how process conditions are affected when switching
to an electric heat source. Process performance is evaluated through product quality, energy
efficiency, and emissions of CO2 and NOx. Furthermore, new potential process optimization
measures enabled by the implementation of electric heating are discussed. A previously
established process model of the straight-grate process was used to establish a reference
case and several new cases where modifications were made to simulate the implementation
of electric heating. NOx emissions from plasma torches were studied by reaction
modelling in Chemkin, and results were evaluated against data from previous practical
experiments.
Results showed that electric heating through microwaves may supply energy at low
temperature to the drying process, which would allow for a more compact drying zone and
replacement of up to 1 MW of fossil fuels. However, the total power demand of the
process increased by 15 - 20 %. To supply heat at the high temperature required in the
firing zones, plasma torches have the potential to replace the entire fossil fuel demand
and achieve a CO2-neutral process. The implementation of plasma torches only had a
slight effect on pellet quality and energy efficiency. Simulations regarding NOx emissions
from plasma torches, when mixing hot plasma gases with air, show that the formation of
NOx may be higher than from a fossil fuel burner. The most important factor for
determining the NOx emissions is the gas residence time at high temperatures. Reburning
using small amounts of natural gas was the most efficient NOx reduction strategy, with
NOx reduction of up to 65 %. Future work should be directed towards financial analyses
of the implementation of electric heating and experimental tests to prepare for practical
implementation.
Keywords: Electrification, heat treatment, iron ore, straight-grate, pelletization,
microwaves, plasma torches, nitrogen oxides, LKAB.
1 Introduction
1.1 Background
Some of the major challenges for modern society are global warming and the vast emissions
of greenhouse gases and pollutants such as nitrogen oxides (NOx). At the 2015 Paris
climate conference, 195 nations agreed on guidelines for managing climate change. The
nations decided to take action to keep the global temperature rise below 2 °C and pursue
efforts to limit it to 1.5 °C. [1] In order to achieve this goal, Sweden has a vision of having
zero net emissions of greenhouse gases to the atmosphere in the year 2050 as well as
having a sustainable and resource-efficient energy supply [2]. International regulation is
also an important tool in order to reduce NOx emissions. According to the 2016 EU air
pollution directive, NOx emissions should be reduced by 42 % until 2020 and by 63 % until
2030 compared to the emission levels of 2015 [3].
LKAB (Luossavaara-Kiirunavaara Aktiebolag) is a Swedish state-owned mining company
that mines and refines iron ore for the global steel market. The company was founded in
1890 and currently has over 4000 employees. LKAB mines around 80 % of all iron ore
in the EU, making it the largest iron ore producer in Europe. In 2017, LKAB produced
over 27.2 million tons of iron ore products, and iron ore pellets accounted for around 83 %
of LKAB's iron ore deliveries. [4] As one of the largest industries in Sweden, operating
several pellet plants, LKAB contributes significantly to greenhouse gas emissions. Due
to this, LKAB is examining different methods to reduce their climate impact. LKAB's
current goal is to reduce the emissions of carbon dioxide by 12 % per ton finished product
until the year 2021, while simultaneously reducing the energy intensity by 17 % compared
to 2015 levels [5]. HYBRIT is an initiative by the companies LKAB, SSAB and Vattenfall
with the goal of producing the world's first fossil-free steel. The project aims to reduce
CO2 emissions throughout the entire steel production chain, where iron ore pelletization
is an important part. [6]
LKAB’s mines are located in the ore fields in the north of Sweden, mainly in Kiruna,
Malmberget and Svappavaara. The ore deposits are located several hundred meters below
ground and the iron ore is extracted in large underground mines. [7] The two main types
of mineral found in the deposits are magnetite (Fe3O4) and hematite (Fe2O3). The iron
ore is then processed in processing plants, where the ore is sorted and concentrated in
several steps before being pelletized. The pelletization process involves forming the iron
ore into small round balls called green pellets and subjecting these to a heat treatment
process that oxidizes and sinters the pellets. The finished pellets are then transported by
rail to the ports and shipped to customers all around the world for steel production. [8]
The focus of this project is the heat treatment of the iron ore pellets which is the most
energy intensive part of the pelletization process. The two most common processes for
heat treatment are called the grate-kiln and straight-grate processes. The grate-kiln
process uses a travelling grate for drying, preheating and cooling and a rotating kiln
for sintering, while the straight-grate uses a travelling grate for the entire process. The
grate-kiln process is more suitable for pellets with a high magnetite content and achieves a
more homogeneous temperature profile which leads to higher pellet quality. The straight-
grate process can handle ores with higher hematite content and also achieves a lower fuel
consumption than the grate-kiln process. [9]
Currently, fossil fuel burners are used in order to heat up the iron ore to the required
temperature. A considerable amount of heat is also supplied from the iron ore itself
through an exothermic oxidation reaction. In order to reduce CO2 emissions and lower
the running costs, LKAB has shown interest in investigating the effects of replacing the
current burners with electric heating technologies such as plasma torches and microwaves.
This leads to certain changes in the process conditions which might affect the quality of the
finished pellets. Another potential problem is that plasma torches operate at extremely
high temperatures, which leads to the formation of NOx.
This project is a part of a collaboration between LKAB and the Division of Energy
Technology at Chalmers University of Technology. During the collaboration, a simulation
model has been developed and experimental tests have been performed in order to verify
the model. So far most of the research has been towards improving the energy efficiency
of the processes. This project builds on previous work and adapts it for operation with
electric heating. Furthermore, the NOx emissions from plasma torches are studied through
reaction modelling in Chemkin.
1.2 Aim
The aim of this project is to make an assessment of the possibilities for utilizing electricity
to supply the required energy for the heat treatment in the iron ore pelletization process
at LKAB. More specifically the work will focus on:
• Supplying the high-temperature heat requirement using plasma torches in the
straight-grate process
• Finding possible application areas where microwaves could provide low-temperature
heat input to the straight-grate process
• Evaluating how the implementation of electric heating affects process conditions,
energy efficiency, emissions (CO2 and NOx) and quality of the finished pellets
• Finding new potential for process optimization as a result of the implementation of
electric heating
1.3 Limitations
The heat treatment in the straight-grate process is dominated by convective heat transfer
between heated gases and the iron ore pellets while the grate-kiln process receives an
important contribution to the heat treatment via radiation from a large open flame. The
suggested electric heating alternatives do not provide radiation properties similar to fossil
fuel flames. Therefore, this report will focus on the straight-grate process since it is most
suited for the first assessment of electric heating implementation. The other limitations
of this project include:
• The only electric heating technologies that will be studied are plasma torches and
microwaves
• The theoretical potential of replacing fossil fuel burners with electric heating will
be investigated but the practical aspects of implementation will not be studied
• No cost evaluations or in-depth economic analysis of the suggested changes to the
straight-grate process will be performed
2 Theory
2.1 Iron ore processing
Iron ore deposits are found with varying concentrations of the two main minerals mag-
netite and hematite. In the LKAB mines, magnetite is the dominating mineral with
content in the range of 80-100 % [10]. When extracting iron ore from the mines, holes are
drilled in the ore body which are then filled with explosives. A blast is initiated which
separates smaller pieces of ore from the main ore body. The ore contains waste rock and
impurities which must be removed to produce high quality iron ore pellets. The first step
in the refining process is to pass the ore through crushers to reduce the size of the ore
pieces to around 10 cm in diameter. [7]
The crushed ore is then transported to a sorting process where the ore is separated from
the waste rock. The iron ore can be separated from waste rock using magnetic separators
due to the magnetic properties of magnetite ore. The ore passes through the sorting
process several times. Between every passing, the ore is crushed into even smaller pieces.
The iron ore enters the sorting process with an iron content of around 45 % and leaves
at around 62 %. [8] The separated ore needs further removal of impurities to provide a
high quality end product. Therefore, the ore enters the concentration process where it is
ground into fine particles. The finely ground iron ore is then mixed with water to form
a slurry. The iron content is increased to around 68 % during this process. Different
additives are then introduced to the slurry depending on the product specification and
pellet type that is to be produced. The final step in the concentration process is to reduce
the water content of the slurry by filtering. The slurry is then ready for pelletizing.
A clay mineral additive, bentonite, is added to the slurry at the pelletizing station. Ben-
tonite is a binder which allows the slurry to form round balls called green pellets. These
are approximately 10 mm in diameter and are formed by rolling the slurry in large rotating
drums. The pellets are then heat treated to reduce the moisture content and oxidize the
magnetite to hematite. The reason for this is to produce a high quality product as well as
to increase the strength of the pellets so that they can withstand the stresses imposed on
them during transportation from LKAB’s facilities to the customers. The heat treatment
consists of three main steps executed in the following order: drying/pre-heating, sintering
and cooling. The two most common processes for the heat treatment of iron ore pellets
are the straight-grate process and the grate-kiln process, described in Section 2.2.1 and
2.2.2 respectively.
The finished pellets are mainly used for steel production in blast furnaces or through
direct-reduction. These processes typically require iron ore in a form that allows formation
of a bed through which gas can flow with low resistance and this is the reason for shaping
the iron ore into spherical pellets. LKAB is not directly involved in the steel production
but a brief description of these processes is given here in order to explain how the pellets
are used.
A blast furnace is a vertical shaft furnace that produces liquid metals by the reaction of
a mixture of metallic ore, coke and limestone introduced from the top with a flow of hot
air introduced from the bottom. Blast furnaces produce crude iron from iron ore pellets
by a reduction process under elevated temperature where the iron oxides are reduced by
carbon monoxide into its elemental form. [11] Crude iron contains a high proportion of
carbon, typically around 4 %, and can be further processed into steel by reducing the
carbon content and adding different alloys that give the steel its unique properties [12].
The direct-reduction process uses a similar arrangement as the blast furnace, where iron
ores are fed from the top and a reducing gas is fed from the bottom. The difference is
that the reduction process occurs in solid phase at temperatures below the melting point
(800 - 1200 °C). The reducing gas is usually syngas which is a mixture between hydrogen
gas and carbon monoxide that can be produced from natural gas. The direct-reduction
process is very energy efficient and requires significantly less fuel than a traditional blast
furnace. The end product from the process is direct-reduced iron (DRI), so called sponge
iron. This is often directly processed to steel in an electric arc furnace (EAF) to take
advantage of the heat from the reduction process. [13]
2.2 Pelletization heat treatment processes
This section describes the heat treatment, which is a major part of the pelletization process.
The straight-grate and the grate-kiln processes are both used by LKAB to provide heat
treatment to the iron ore pellets. The working principles of these processes and the
chemical reactions taking place in the pellets are explained in this section.
2.2.1 Straight-grate process
The straight-grate process consists of a continuously moving grate with a pellet bed resting
on the grate, with no mixing occurring in the pellet bed. The straight-grate system
performs the drying, firing and cooling in the same machine. A schematic illustration of
the straight-grate process based on LKAB’s MK3 unit is presented in Figure 2.1. The
straight-grate consists of several different process zones called updraft drying (UDD),
downdraft drying (DDD), preheating (PH), firing (F), after-firing (AF) and cooling zones
(C1 and C2). As can be observed, the hot process gas is recycled to increase the energy
efficiency of the process, while the exhaust gases leave the process through a stack.
Figure 2.1: Schematic illustration of the straight-grate process.
The green pellets are introduced on top of a layer of already treated pellets called the
hearth layer, which has the purpose of protecting the grate from thermal stresses when it
is travelling through the firing zones. In the first two zones (UDD and DDD), the pellets
are dried by a cross flow of recirculated process gas. In the UDD zone the gas from the
C2 zone flows though the bed from below and in the DDD zone the gas from the AF zone
flows from above. The reason for switching from updraft to downdraft drying is to achieve
a more homogeneous evaporation rate and reduce the risk for recondensation of water in
the bottom layer. [14] Moderate gas temperatures are used in the drying zones since too
high temperatures will cause the pellets to dry too fast, eventually causing cracking of
the pellets due to the build up of pressure generated by steam inside the pellets [10].
The PH, F and AF zones are called the firing zones and the main purpose of these zones
is to increase the temperature for the sintering process. Sintering is a diffusion-driven
process where the particles in the pellets partly fuse together, increasing their mechanical
strength. Oxidation of magnetite into hematite occur simultaneously with the sintering
process. The oxidation is an exothermal reaction which means that it releases heat.
Approximately 60% of the thermal energy needed for pellet production comes from the
oxidation of magnetite, which reduces the need for addition of external fuel [10]. In the PH
zone, most of the water content has been evaporated and the pellet temperature begins
to increase. The majority of the oxidation takes place in the F and AF zones under high
temperatures.
All of the firing zones are supplied by recirculated gas from the C1 zone which needs to be
heated even further to 1000 - 1300 °C. In the current MK3 unit, 4 natural gas burners and
12 oil burners are used for the additional energy supply. These are located in downcomer
pipes on the sides of the pellet bed. The hot gas then flows in a downdraft direction
through the bed. The exhaust gas from the firing zones has a reduced oxygen content
since oxygen is consumed during the combustion and oxidation reactions.
The hot pellets coming from the firing zones then need to be cooled down to below 100 °C
to facilitate product transport. This is achieved in the cooling zones by blowing ambient
air in an updraft direction through the bed. The gas flow is heated in the cooling zone
and distributed throughout the process as a heat carrier. The magnetite may not be fully
oxidized when entering the cooling zone meaning that some oxidation may take place in
the beginning of the cooling zone as well. [14]
2.2.2 Grate-kiln process
The grate-kiln process is a widely used heat treatment process alongside the straight-
grate process. The grate-kiln process consists of three distinct sections, the drying and
preheating section, the rotating kiln and the cooling section, as can be seen in Figure 2.2.
Figure 2.2: Schematic illustration of the grate-kiln process.
The green pellets first enter the drying section which is a travelling grate where the pellets
aredriedandpreheatedusingrecirculatedcoolingairandcombustiongases, similartothe
straight-grate process. The pellets then enter the rotating kiln where they are sintered
at high temperature, up to 1300 °C. The rotational movement of the kiln mixes the
pellets and thereby provides an even temperature distribution in the pellet bed. This
leads to more homogeneous sintering and results in a higher quality product compared to
the straight-grate process. Coal is the primary fuel used for heat generation. The heat
is transferred to the pellets via one large flame inside the rotating kiln, burning in the
opposite direction of the pellet flow. The radiation from the flame provides an important
contribution to the heat transfer in the kiln. The pellets then leave the rotary kiln and
enter the cooling section. The cooling unit is a circular container rotating along a vertical
axis. Figure 2.2 shows a simplified sketch of the cooling unit where it looks like a travelling
grate, which is not the case in reality. The pellets travel one lap in the rotating cooler
while being cooled with ambient air flowing through the bed. The pellets are then ready
for storage and transportation to customers.
The grate-kiln process is best suited for iron ore with high magnetite content since
hematite ore produces more fines when mixed in the rotating kiln and gives larger ra-
diation losses [15]. The complexity of implementing plasma torches in the grate-kiln
process lies in replacing the large flame and the radiation heat transfer contribution that
it provides since a plasma flame does not provide the same radiation properties. It is
therefore likely that large process modifications must be made to implement the plasma
torches and maintain sufficiently good pellet quality. In the straight-grate process the ma-
jority of the heat transfer to the pellets occur via convection, which means that changing
the energy source has a larger potential for success with smaller process modifications.
Since the implementation of plasma torches is more complex in the grate-kiln process
compared to the straight-grate process, the straight-grate process is better suited for a
first assessment of the implementation of electric heating. This is why the grate-kiln
process is not the focus of this report. If implementation of electric heating is successful
in the straight grate process, further studies will most likely focus on converting the
grate-kiln process to electric heating as well.
2.2.3 Heat treatment chemistry
There are four main reactions that are of importance for the iron ore pelletization pro-
cess; evaporation of water, oxidation of magnetite, calcination of calcium carbonate and
decomposition of dolomite. In the beginning of the process, liquid water is trapped inside
the pellets and the moisture content is around 9 wt%. The water evaporates when the
temperatureisincreasedthroughthedryingandpreheatingzones. Assumingatmospheric
pressure, the enthalpy of evaporation is 2256 kJ/kg. [15]
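As a rough worked example, assuming the roughly 9 wt% moisture content mentioned above and neglecting the sensible heat needed to reach the boiling point, the evaporation energy per tonne of green pellets is

E_evap ≈ 0.09 · 1000 kg · 2256 kJ/kg ≈ 203 MJ ≈ 56 kWh.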
Oxidation of magnetite is the most important reaction in the heat treatment of iron ore
pellets. In this reaction, magnetite in the pellets reacts with oxygen from the air to form
hematite, see Reaction 2.1. The oxidation reaction is exothermic and the enthalpy of
reaction is -119 kJ/mol. The reaction starts at a temperature of 200 - 300 °C but does
not reach maximum conversion efficiency until around 1100 °C [16].
4 Fe3O4(s) + O2(g) → 6 Fe2O3(s)    (2.1)
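For a sense of scale, assuming the stated reaction enthalpy of -119 kJ/mol is given per mole of magnetite (molar mass approximately 231.5 g/mol), the heat released per kilogram of oxidized magnetite is roughly

119 kJ/mol / 0.2315 kg/mol ≈ 514 kJ/kg ≈ 0.14 kWh/kg.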
The reaction will undergo two stages before the final stable hematite structure is formed.
In the first stage, the reaction is governed by kinetics and a growing hematite shell will
form around a magnetite core. In the second stage, the reaction is governed by mass
transferandoxygenwillbegintodiffuseinwardtocompletethereaction. Ifatemperature
of 1200 °C is exceeded the oxidation rate drops and dissociation of hematite back to
magnetite will occur. [17]
Calcium carbonate from limestone additives reacts into calcium oxide and carbon dioxide,
as described by Reaction 2.2. The calcination reaction is endothermic, and the enthalpy
of reaction is +182 kJ/mol. Due to the small amount of limestone in the pellets, the total
heat demand of the reaction is relatively small. The reaction starts at around 600 °C and
reaches full conversion efficiency at around 900 °C [18].
CaCO3(s) → CaO(s) + CO2(g)    (2.2)
Dolomite is another additive and will decompose into calcium oxide, magnesium oxide and
carbon dioxide as described by Reaction 2.3. This reaction is also endothermic, having
a reaction enthalpy of +296 kJ/mol. Similarly to calcium carbonate, the concentration
of dolomite is low resulting in a small heat demand. The reaction normally occurs at
temperatures between 600 - 760 °C [19].
CaMg(CO3)2(s) → CaO(s) + MgO(s) + 2 CO2(g)    (2.3)
2.3 Microwave heating
Microwaves consist of non-ionizing electromagnetic radiation within the frequency interval
of 300 MHz to 300 GHz. Electromagnetic radiation consists of an electric field and a
magnetic field which oscillate in a wave motion as they travel through space. Some
materials absorb the electromagnetic energy, converting it to thermal energy. This process
is commonly used to heat food in microwave ovens. The following sections describe the
process of heating materials using microwaves and evaluates possible applications for the
straight-grate pelletizing process.
2.3.1 Microwave interaction with matter
Materials behave in three principal ways when in contact with microwaves. The in-
coming microwaves are either reflected, absorbed or transmitted through the material
as illustrated in Figure 2.3. Materials can show one dominant mode of interaction or a
combination of the three, depending on the material properties. [20]
Figure 2.3: Illustration of microwaves interacting with matter.
Transmissive materials are invisible to microwave radiation and have no effect on the
propagating electromagnetic wave. Such materials are often used for supporting functions
in a microwave heating context, such as physically holding the object to be heated in place.
Materials which reflect microwaves are often used for guiding the microwaves in desired
direction and protecting the surroundings. Materials with good conductive properties,
such as metals, are often used for this purpose. Materials that absorb electromagnetic
energy are the only ones which are able to convert it into thermal energy [20].
It should be noted that a material can show different properties in an electromagnetic
field depending on its particle size. Metals for example, show reflective properties when
present in bulk but do not show the same behaviour in powder form. For example,
microwaves have successfully been used for sintering of metal powder [21]. Materials with
reflective properties can create electrical discharge in the form of arcing when subjected
to microwave radiation. Under certain conditions, electrons can become concentrated at
the surface and edges of the metal due to the penetration of electromagnetic waves. At a
certain point, the energy is discharged in the form of an electric arc. [22]
Magnetite, which is the main component of the green pellets produced at LKAB, shows
good properties for absorption of microwave radiation [20]. The green pellets consist of
about 10% water which also is a good microwave absorber. This provides good prereq-
uisites for heating these pellets using microwaves. Hematite is also a good absorber of
microwaves which means that the properties for heating using microwaves are still good
after drying and conversion to hematite. However, arcing has been observed at high
temperatures which could lead to equipment malfunction [20].
2.3.2 Heating mechanisms and properties
The mechanisms of heating by microwave radiation and conventional heating differ sig-
nificantly. In conventional heating the thermal energy is transferred to the surface of the
material and spreads inwards by the mechanisms of radiation, convection and conduction.
When using microwaves, heat is generated within the material, resulting in rapid heating.
This is illustrated in Figure 2.4.
Figure 2.4: Heating a homogeneous particle by (a) Convective/conductive heating, (b)
Microwave heating.
Since microwave heating is selective, non-homogeneous materials can heat unevenly. The
heat distribution in the volume depends on the dielectric and conductive properties of
the materials in the volume. In some cases certain areas can achieve significantly higher
temperatures than others. This is referred to as thermal runaway [20]. The thermal
expansion properties of different materials in a particle can cause micro cracks, especially
in a rapid heating process with large local variations such as thermal runaway [23].
The conversion from electromagnetic radiation into thermal energy is a process where the
dielectric properties of the material are of great importance. Dielectric materials containing
polarized molecules will respond to the oscillating electrical component of the
electromagnetic radiation. The water molecule is a typical example of a polarized molecule which
reacts to changes in the electric field surrounding it. The polarized molecules align them-
selves to the electric field, causing increased motion of the molecules. The increased
motion leads to increased kinetic energy of the molecules which is then converted into
thermal energy through friction and collisions with other molecules. The thermal energy
then spreads throughout the material by conduction, driven by thermal gradients. [20]
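The volumetric power dissipated by dielectric heating is commonly summarized by the standard textbook relation (not stated explicitly in this section)

P = 2π f ε_0 ε_r'' E_rms²   [W/m³],

where f is the microwave frequency, ε_0 the vacuum permittivity, ε_r'' the relative dielectric loss factor of the material and E_rms the local electric field strength.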
The dielectric heating is also connected to the ionic conduction properties of the material.
The ionic conduction mechanism is driven by mobile charge carriers, such as electrons
and ions, moving inside the material as a response to the oscillating electrical field. This
creates an electrical current through the material. The material's resistance to electric
conduction causes the material to heat. [24] The magnetic component of the electromagnetic
field contributes to heating by eddy current loss and hysteresis loss, which provides
a considerable contribution to the heating of ferrous materials [24].
2.3.3 Previous microwave applications for iron ore pelletization
Studies have been performed on the implementation of microwave technology in iron ore
pelletization. Two studies performed by Maycon Athayde et al. [23] [25] has investi-
gated effects of implementing microwave technology in the drying section of the iron ore
pelletization process.
In early 2018, there was a study published by Athayde et al. [23] investigating kinetic
parameters of the iron ore pellet drying process assisted by microwave technology. In this
study, green pellets were produced from hematite ore and sieved into three size categories
with average diameters of 10.75, 13.5 and 15.25 mm having a moisture content of 10 %.
Samples of 100 g were subjected to microwave radiation for 180 s with intervals of 30
s using a turnable-tray microwave oven normally used for heating food. The frequency
used was 2.45 GHz and the power levels used were 300, 600 and 1000 W. The tests were
conducted at temperatures lower than 500 °C.
Results showed that the dryout time was much faster when using microwaves instead
of convective heating. The drying activation energy was halved when using microwaves
compared to convective heating. When drying the pellets, an intense heating rate was
observed in the beginning of the drying phase. The heating rate slowed down as the
moisture content of the pellets decreased which indicates that the moisture content is
an important parameter for the ability of the pellets to absorb microwave energy. The
dryout time was slower in small pellets than in large pellets. This was likely due to the
large surface area to volume ratio which leads to large heat loss to the surroundings.
Effects on pellet quality were assessed by evaluating the crushing strength of the pellets
after drying with microwaves. Results showed strength levels varying between 0.6-2.5
kg/pellet which is considerably lower than conventional methods. Pellets gently dried
in a conventional oven at 105 °C showed a crushing strength of 5.2 kg/pellet. The loss
in strength was likely due to micro cracks found in the hematite and goethite, which is
another iron oxide found in these pellets. Cracks were observed in the mantle as well as
the core of the pellets. Differences in heating rate and linear expansion of the different
materials in the pellet was thought to cause these cracks.
Observations of the pellet structure using a scanning electron microscope (SEM) revealed
hematite reduction to magnetite. This reduction occurs at temperatures much higher than
the 500 °C that was supposed to be the upper temperature limit of the experiments. This
was considered as thermal runaway, difficult to detect with the temperature measuring
equipment used.
Later in 2018, another study was published by Athayde et al. [25], this time evaluating a
novel drying process assisted by microwave technology for iron ore pelletization. The effect
of combining microwaves and traditional convective heating was evaluated against the
traditional processing technique using only convective heating. The tests were performed
using a pot grate in lab scale simulating a travelling grate furnace. The pot grate simulates
the drying process of the straight-grate, initiating the process with an UDD section followed
by a DDD section. The UDD mode introduced an airflow with an inlet temperature of
290 °C during 175 seconds before switching to the DDD mode, which introduced an airflow
with an inlet temperature of 280 °C during 340 seconds. The microwaves were used only
during the UDD operation mode at a frequency of 915 MHz and a constant power supply
of 10 kW. Green pellets were produced using iron ore dominated by hematite mineral.
The majority of the pellets had a diameter between 9-16 mm. The moisture content was
adjusted to batches of 10 % and 10.3 %.
The results showed a rapid heating process when microwaves were used compared to purely
convective heating. Maximum heating rates of 1.5 °C/s were observed with an average
of 0.79 °C/s while purely convective heating showed an average heating rate of only 0.14
°C/s. The surface layers of the pellet bed showed considerably higher temperatures when
using microwaves. At 60 mm depth the temperature reached 115 °C compared to 71 °C
in the purely convective case. The lower layers showed no change in temperature for the
two cases. This is believed to be due to low penetration depth of the microwaves. All the
electromagnetic energy was consumed in the higher layers.
Results from moisture measurements at different levels of the bed at the end of the UDD
section showed that purely convective flow increased the moisture content in the upper
layers of the bed by 0.08-0.36 % compared to initial levels, due to recondensation of mois-
ture. The microwave assisted drying process decreased the moisture content in the higher
layers by 0.12-0.8 % compared to initial levels, which was a considerable improvement.
The lower layers showed similar results for both microwave assisted drying and purely
convective drying, probably due to the low penetration depth of the microwaves.
Thermal imaging of the top layer of pellets in the pot grate showed energy concentrations
in the center of the bed in the microwave assisted drying case. This was caused by the
oscillating behaviour of the microwaves, which created areas with higher power density
than others, resulting in local variations in temperature. The pellet samples were checked
for micro cracks, which were not found in any sample, implying that the heating rate was
sufficiently low to avoid this problem.
The study showed promising results for using microwaves to assist the convective drying
technique, resulting in drier pellets which are less inclined to cluster, deform and clog
the voids in the pellet bed. This allowed for a lower pressure drop through the bed and
provided the possibility of using higher temperature gas in the DDD stage, since the lower
moisture content makes micro cracks due to steam expansion inside the pellet less likely
to occur.
2.3.4 Potential for application of microwaves at LKAB
This section evaluates the potential for application of microwaves in the pelletizing process
at LKAB, using the theoretical knowledge about microwave behaviour in combination
with knowledge about the straight-grate process conditions.
The potential of using microwave energy in the straight-grate process lies in the areas
where heating is required, i.e. the drying zones or the sintering zones.
Microwave heating has shown rapid heating rates, low penetration depth and arcing at
high temperatures in earlier studies [23], [20]. These properties are not beneficial for the
sintering process, but could potentially be a good complement to the convective heating
in the drying zones.
Traditionally, the drying process begins with an updraft zone. The layers which are
subjected to the highest static load, i.e. the lower layers in the bed, are dried first to
increase the mechanical strength of the pellets, thereby reducing the occurrence of
deformation and clogging. Some of the moisture that evaporates in the lower layers of the
bed will recondense in the middle and higher layers as the gas cools down when passing
through the bed, reducing its ability to retain moisture. As a result, the moisture content
of the pellets in the middle and higher layers of the bed can increase to levels above the
initial moisture content. This is acceptable to some extent, but there is a limit where
over-wetting of the pellets results in problematic deformation and clogging. At a
certain point, the flow must switch to downdraft to avoid over-wetting of the middle and
higher layers. This effectively dries the higher layers, but the drying of the middle
and low-middle layers of the bed is slow.
Microwaves could be used in the updraft zone to eliminate the recondensation of moisture
in the higher layers by heating the bed from both sides, i.e. convective heating from
below and microwave heating from above. This provides potential for extending the
updraft section and shrinking the downdraft section, which would result in a
faster drying process for the low-middle layers and could thereby lead to a faster drying
process overall. A faster drying process with fewer problems related to deformation and
clogging of pellets is beneficial, since it could provide potential for an increased production
rate and a higher quality end product. The possibilities of modeling the microwave energy
contribution to the drying process using the straight-grate simulation model are further
investigated in this project.
2.4 Plasma torches
This section introduces the theory required to understand the operation of industrial
plasma torches. It includes a description of the basic principles of plasmas and plasma
generation, common types of plasma torches and how they can be applied to the straight-
grate process.
2.4.1 Definition of plasma
Plasma is one of the four fundamental states of matter (solid, liquid, gas and plasma).
A plasma can be described as a highly ionized form of gas. Plasmas generally consist
of ions, electrons and neutral species, see Figure 2.5. The positive and negative charges
compensate each other, making the overall plasma electrically neutral. [26] Plasmas can
be generated artificially by heating a gas or subjecting it to a strong electromagnetic field.
Positively charged ions are then created by separation of electrons from the atomic nuclei,
and the ionized gaseous substance becomes increasingly electrically conductive. The high
temperature in combination with high reactivity makes plasma an effective way to achieve
high heat transfer rates and chemical reactions. [27]
Figure 2.5: Simplified illustration of the four fundamental states of matter.
Plasmas can generally be classified in two categories: thermal and non-thermal plasmas.
Thermal plasmas are atmospheric pressure plasmas where the ions and electrons are in
local thermodynamic equilibrium, meaning that they have the same temperature. Non-
thermal plasmas, also called cold plasmas, are plasmas where the ions stay at low tem-
perature while the electron temperature is much higher. Plasmas can also be classified
according to the degree of ionization, which can be described by the electron density
(the number of electrons per unit volume). The degree of ionization depends on the sur-
rounding environment, mainly the temperature and pressure. The plasmas used in most
technical applications are thermal plasmas with a relatively low degree of ionization. [26]
2.4.2 Plasma generation
A plasma generating device is used to convert electrical energy to thermal energy. The
most common way to produce a thermal plasma is to heat a gas using an electric arc.
When an electrical arc is established in the presence of a gas, the gas will be partly ionized
and become electrically conductive [28]. This occurs through a collision process between
electrons from the electric arc and particles of the gas. The electrons are accelerated by
the electric field and acquires kinetic energy that will be transferred to thermal energy
through the collisions.
The electric arc is a self-sustaining discharge with a high current density and a voltage
drop of a few volts. To remain in steady state, the arc needs to be stabilized by
creating and maintaining suitable boundary conditions. A plasma torch is a device for
stabilizing the electric arc. Stabilization involves constricting the arc, cooling its outer
layers and defining the path of the arc. Free burning arcs are stabilized by natural
convection, while other arcs require external stabilizing mechanisms such as gas or liquid
flow stabilization. [27]
Gas flow stabilization is the simplest stabilization method and involves a flowing
external layer of gas surrounding the arc column. The flow can be either axial or vortex
depending on how the gas is injected. In order to get a stable arc, the gas flow rate and
the electric power of the plasma torch need to be balanced. The choice of working gas is
mainly based on gas enthalpy, reactivity and cost. The most common working gases used
in plasma torches are argon, nitrogen, helium, air and hydrogen [29]. If a high energy
content is desirable, diatomic substances such as nitrogen or hydrogen should be used due
to the dissociation reaction prior to ionization. If an inert gas atmosphere is required,
the preferred working gas is usually argon. Reactive gases such as hydrogen, oxygen and
nitrogen can be used to provide reducing or oxidizing effects to the plasma. [27]
The primary source of electricity used to generate the plasma can be direct current (DC),
alternating current (AC) or radio frequency (RF). DC is the most common since, compared
to AC, it usually offers less flicker and noise, more stable operation, better control, lower
electrode consumption and lower energy consumption. [29]
2.4.3 Types of plasma torches
A plasma torch produces low temperature plasma, normally between 5 000 and 30 000
K. Most plasma torches consist of two electrodes between which an electric arc burns, a
chamber restricting the gas flow and a section for introduction of the working gas. [30]
Industrial plasma torches normally operate with power levels from 20 kW up to around 8
MW [31].
There are many classifications of plasma torches depending on the configuration and
geometry. The terms transferred and non-transferred are used to describe the positions
of the electrodes. In a non-transferred plasma torch both of the electrodes are located
inside the torch and the arc is established between them. In a transferred plasma torch,
one of the electrodes is located outside the torch. This is usually the work piece or the
material that needs to be heated and the arc is then established between the torch and
the work piece. A thermal plasma torch is an example of a non-transferred plasma torch
while applications of transferred plasma torches include plasma arc welding and cutting.
[26]
The most widely used plasma torch type is the linear non-transferred DC plasma torch
with gas stabilization. A schematic illustration of this type of plasma torch is presented
in Figure 2.6. In the linear torch design both of the electrodes are situated along the
same line. It consists of the negatively charged internal electrode (cathode) and the
positively charged output electrode (anode). [30] The cathode is usually formed as a
rod while the anode is shaped in the form of a nozzle. The most common electrode
materials are tungsten and copper. Other materials can also be used such as graphite,
steel, molybdenum and silver. [27]
Figure 2.6: Schematic illustration of a linear plasma torch.
The electric arc is ignited between the internal and output electrodes. The closing section
of the arc moves along the channel through the effect of the working gas flow, increasing
the length of the arc. [30] A high-velocity, high-temperature flame called the plasma jet
is produced, which can be up to a few inches long. When the gas exits the torch, it
recombines into its neutral non-ionic state but maintains its superheated properties [32].
The electrodes are separated by an electric insulator through which the working gas is
supplied. From here on, the term "plasma torch" will be used to refer to this type of
plasma torch.
The total power supplied to the plasma torch by the energy source is given by the product
of arc current and arc voltage according to Equation 2.4. A part of the total energy is lost
due to heating of the electrodes, which are cooled by water in order to avoid overheating
and to reduce electrode consumption. The lost energy exits the torch with the cooling
water. The ratio of the heating power transferred to the plasma, Q_p, to the torch power
represents the thermal efficiency of the plasma generation process, see Equation 2.5.

P_t = U_a · I_a    (2.4)

η_th = Q_p / P_t    (2.5)
The thermal efficiency varies considerably between different plasma torch manufacturers
and technologies. Most industrial plasma torches operate with an efficiency of 50-70 %,
but an efficiency of over 90 % is possible [32].
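As a minimal numerical illustration of Equations 2.4 and 2.5, the MATLAB sketch below computes the torch power and thermal efficiency; the arc voltage, arc current and plasma heating power are assumed example values, not data for any specific torch.

% Illustrative torch power and thermal efficiency (Equations 2.4 and 2.5).
% The numbers below are assumed example values, not measured torch data.
U_a = 1000;              % arc voltage [V]
I_a = 2000;              % arc current [A]
Q_p = 1.7e6;             % heating power transferred to the plasma [W] (assumed)

P_t    = U_a * I_a;      % total torch power [W], Eq. 2.4
eta_th = Q_p / P_t;      % thermal efficiency [-], Eq. 2.5

fprintf('Torch power: %.1f MW, thermal efficiency: %.0f %%\n', ...
        P_t/1e6, 100*eta_th);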
2.4.4 Industrial plasma torches
Plasma torches have been used in industry since the 1950s and their use continues to
increase every year [31]. The use of plasma torches in industrial processes was originally
motivated by the need for a heating source that could reach temperatures over 2000 °C
and could be used inside a reactor [28].
Technological advancements have made plasma torches an ideal solution for many chemical,
mechanical and metallurgical processes. The plasma can be used both as a heat source and
as a reagent in various industrial processes. A good example of a plasma torch application
in industry is metal melting. Destruction of hazardous waste using plasma torches
has also become a large area of research.
The main advantages of plasma torches compared to other alternatives are the high tem-
perature of the plasma jet, the high energy density of the plasma and the possibility of
using different plasma gases depending on the desired application. Replacing fossil fuel
burners with plasma torches can also lead to lower operating costs and greenhouse gas
emissions. Other advantages include controlled process chemistry, small installation sizes
and rapid start-up and shut-down features. The use of electrical energy as a heating
source results in a decoupling of the heat source from the oxygen flow rate. This can be
useful in applications where a certain oxygen level is desired [33].
The operation of conventional oil burners produces large amounts of greenhouse gases due
to the combustion of fossil fuels. Increased concern for climate impact makes solutions
that can contribute to the reduction of greenhouse gas emissions more interesting. With
increasing oil prices, there has also been a growing interest in technologies that can help
replace expensive fuels with more economic alternatives such as electricity. This makes
plasma torches an interesting option, since they use electricity as the primary energy
source instead of fossil fuels.
The operating costs for a typical 2 MW fuel oil burner have been compared with those of
a 2 MW air plasma torch by Pyrogenesis [33]. As can be seen in Table 2.1 below, there
is a significant potential for reduction of operating costs. A cost reduction of around
32 % is achieved in this example. Cost reductions become even higher when considering
applications such as iron ore pelletization, cement kilns and metallic ore roasters, since
these types of plants can have a large number of burners.
Table 2.1: Annual operating costs for a 2 MW fuel oil burner and air plasma torch.
Costs Fuel oil burner Plasma torch
Fuel oil cost ($ 0.5/l) $ 923 000 $ 0
Electricity cost ($ 0.03/kWh) $ 9000 $ 600 000
Replacement parts cost $ 0 $ 38 000
Total cost $ 932 000 $ 638 000
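The totals and the relative saving in Table 2.1 can be verified with a few lines of MATLAB; only the cost figures from the table are used.

% Recompute the totals and relative cost reduction from Table 2.1.
oil_burner   = [923000, 9000,   0];      % fuel, electricity, parts [$/yr]
plasma_torch = [0,      600000, 38000];  % fuel, electricity, parts [$/yr]

total_oil    = sum(oil_burner);          % $932,000
total_plasma = sum(plasma_torch);        % $638,000
reduction    = 1 - total_plasma/total_oil;

fprintf('Cost reduction: %.0f %%\n', 100*reduction);  % about 32 %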
Replacing fossil fuels with electricity represents a great potential to reduce greenhouse gas
emissions. However, the degree of reduction depends on how the electricity is produced.
Electricity produced by renewable energy sources such as hydro, wind and solar power
has very low emissions, while electricity produced in coal or natural gas power plants
has much higher emissions. A fuel oil burner emits around 115 kg CO2-eq per GJ of heat,
while a plasma torch powered by electricity produced by hydropower only emits around
1 kg CO2-eq per GJ. Retrofitting a 2 MW plasma burner in place of a fuel oil burner leads
to a yearly reduction of over 7 000 tonnes of CO2-eq in this case. For a pelletizing plant
with a thermal input of around 40 MW, a reduction of over 140 000 tonnes would be possible.
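The stated yearly reduction can be reproduced from the emission factors above; continuous full-load operation over a year is assumed here for simplicity.

% Yearly CO2-eq reduction when a 2 MW oil burner is replaced by a plasma
% torch on hydropower. Continuous full-load operation is assumed.
P_heat = 2e6;                     % heat output [W]
t_year = 8760 * 3600;             % seconds per year
E_GJ   = P_heat * t_year / 1e9;   % yearly heat delivered [GJ] (~63,000 GJ)

ef_oil   = 115;                   % emission factor, oil burner [kg CO2-eq/GJ]
ef_hydro = 1;                     % emission factor, hydropower [kg CO2-eq/GJ]

dCO2 = E_GJ * (ef_oil - ef_hydro) / 1000;                   % [tonnes/yr]
fprintf('Reduction: %.0f tonnes CO2-eq per year\n', dCO2);  % > 7000 t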
2.4.5 Application of plasma torches for iron ore pelletization
The heat treatment process for iron ore pelletization in the straight-grate process requires
additional heating to reach the temperatures needed for the oxidation and sintering pro-
cesses. The recirculated air from the cooling zone is heated by several smaller burners
before reaching the firing zones (PH, F and AF). Direct exposure to the radiation from
the burner flames should be avoided since it can cause overheating of the top pellet layers.
Because of this, the burners are placed in separate enclosures which protect the bed from
most of the radiation. The burners are currently located in downcomer pipes that exit
on the sides of the tunnel-shaped firing chamber.
Under the assumption that the effect of radiation is negligible, implementing plasma
torches in place of fossil fuel burners should be relatively straightforward. The plasma
torches can be arranged in a similar way to the current oil burners. An example of a
plasma torch installation is shown in Figure 2.7. Recirculated air from the cooling zone is
mixed with extremely hot gas from the plasma torch before entering the firing chamber
where it oxidizes the pellets.
Figure 2.7: Vertical cross section of a firing chamber in the straight-grate process.
The main difference when introducing plasma torches is that there will be no combustion
reaction in the firing zones, which changes the process gas composition; the combustion
reaction consumes oxygen and produces water vapor and carbon dioxide. Secondly, there
is an additional working gas flow used to produce the plasma. This will slightly increase
the flow rate of the process gas and may also change its composition. The changed
composition will affect the properties of the process gas and its ability to oxidize the
pellets. The process gases are also used to dry the pellets in the DDD zone, and drying
with a wet gas is not very efficient. If the process air is heated by electricity instead of
combustion, it will contain less water vapor as it enters the drying zone, which can provide
a higher drying efficiency.
Finally, the temperature of the plasma jet will be significantly higher than the temperature
of the burner flame. Higher temperatures could potentially lead to undesired effects such
as production of thermal NOx. This problem will be discussed further in the following
chapter. Another potential problem is hot gas zones caused by a lack of mixing, which leads
to thermal stresses and uneven oxidation of the pellets. The mixing process is important
in order to achieve a homogeneous process gas but will not be studied in this project.
2.5 Nitrogen oxides
Nitrogen oxides are molecules that are composed of nitrogen and oxygen atoms. The
nitrogen oxides that contribute to atmospheric pollution are nitric oxide (NO), nitrogen
dioxide (NO2) and nitrous oxide (N2O). The term "NOx" is used to describe NO and
NO2, while N2O is generally not included.
Anthropogenic activities are the main cause of increased levels of NOx in the atmosphere.
Most of the NOx is formed in combustion processes, where nitrogen from the air or the
fuel reacts with oxygen and forms NOx. A majority of the NOx emissions in developed
countries originate from the transport sector and the industrial sector. [34] The emissions
of NOx are considered to be a global problem. NOx is considered to be toxic and can
cause lung diseases in humans. However, the main problems with NOx are the secondary
effects, including acid deposition and formation of tropospheric ozone. [35]
Acid deposition can be in the form of wet deposition (acid rain) or dry deposition (gas and
particles). When released to the atmosphere, NOx will react with water vapor to form
nitric acid (HNO3). This can be transported over long distances before being deposited
as acid rain, resulting in acidification of land and water. Acidification is harmful to
vegetation and wildlife and has caused severe environmental problems in many parts of
the world. [36]
Ozone (O3) can be formed through a reaction between an oxygen molecule and an oxygen
radical. The levels of oxygen radicals are increased by decomposition of NO2 by sunlight,
thereby increasing the formation of ozone. The ozone that is located close to the ground is
called tropospheric ozone. Ozone in the upper atmosphere is vital for protecting the earth
from harmful UV-radiation. However, tropospheric ozone is hazardous since it harms the
human respiratory system and causes damage to vegetation and crops. Ozone is also a
component of the so-called "smog" that is a common problem in large cities. [37]
2.5.1 Formation of thermal NO
The formation of NOx is dominated by NO at high temperatures, but the emitted NO
usually converts to NO2 at lower temperatures. Therefore, the focus of this work will be
the production and reduction of NO rather than NO2.
The formation of NO is a complex process and consists of many intermediate reactions,
but it is common to divide the reactions into three main mechanisms: thermal, fuel and
prompt NO. Thermal NO is formed through the reaction between nitrogen and oxygen
from the air, which normally occurs at very high temperatures. Fuel NO is formed by
oxidation of nitrogen from the fuel with oxygen from the air. Finally, prompt NO is
formed by the reaction between nitrogen from the air and hydrocarbon radicals from the
fuel.
Out of these three mechanisms, only thermal NO is relevant to plasma torches, since
the other two mechanisms require combustion of fuel. The formation of thermal NO can
be described by the Zeldovich reactions [38], Reactions 2.6-2.8.

N2 + O ↔ NO + N    (2.6)

N + O2 ↔ NO + O    (2.7)

N + OH ↔ NO + H    (2.8)

These reactions are only important at high temperatures, since N2 contains a triple bond
that requires a large amount of energy to break. The rate of formation of thermal NO
usually becomes important relative to the other NO reactions at temperatures above
1500 °C. The reaction rate is also determined by the concentrations of O2, N2 and NO.
[35] The formation of NO is limited by Reaction 2.6, where nitrogen molecules react
with oxygen radicals, producing NO and nitrogen radicals. The nitrogen radicals are
then consumed immediately by Reaction 2.7, producing more NO and oxygen radicals.
The activation energy of Reaction 2.6 is approximately 318 kJ/mol. The total amount
of thermal NO produced also depends on the gas residence time at high temperatures.
Strategies to reduce thermal NO will usually focus on reducing the O2 concentration and
removing temperature peaks.
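To illustrate how strongly the rate-limiting step, Reaction 2.6, depends on temperature, the sketch below evaluates an Arrhenius expression using the activation energy given above. The pre-exponential factor is a placeholder, so only the ratios between temperatures are meaningful.

% Temperature sensitivity of the rate-limiting Zeldovich step (Reaction 2.6),
% k = A*exp(-Ea/(R*T)). A is a placeholder; only rate ratios are meaningful.
Ea = 318e3;   % activation energy [J/mol] (from the text)
R  = 8.314;   % gas constant [J/(mol K)]
A  = 1;       % placeholder pre-exponential factor

T = [1500 1800 2100 2400] + 273.15;   % temperatures [K]
k = A * exp(-Ea ./ (R * T));

% Roughly a factor 1400 increase over this range, which explains why
% thermal NO only matters above about 1500 degC.
fprintf('Rate increase from 1500 to 2400 degC: %.0f times\n', k(end)/k(1));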
2.5.2 NOx emissions from iron ore pelletization
The iron ore industry is a substantial emitter of NOx in Sweden. Iron ore pelletization
plants normally have a heat input of around 40 MW and will probably have to comply
with emission limits for medium to large size combustion plants. However, very limited
research has been carried out on NOx mitigation measures for these types of plants, since
the combustion conditions differ significantly from conventional combustion systems.
It is important to maintain high oxygen levels in the process gas to ensure a high degree of
oxidation. Therefore, a large volumetric flow of air is required in the firing zones. Relating
the air flow to the fuel flow, an air-to-fuel ratio of 4-6 is obtained. This is significantly
higher than in conventional combustion, where the air-to-fuel ratio is approximately 1.
Implementation of flue gas cleaning systems such as selective catalytic reduction (SCR)
is considered to be less efficient and more costly, and the proportionality of the cost to the
environmental benefit is still being discussed in the industry. NOx mitigation is usually only
considered in cases where environmental regulations are otherwise not likely to be met.
There is therefore an incentive to develop cost-efficient measures to reduce NOx emissions
from this type of plant. [35]
2.5.3 NOx emissions from plasma torches
One of the major technical issues with the use of thermal plasma torches is the nitrogen
oxides that can be generated in the high temperature plasma, which may limit the plasma
torch in its various applications. An electric arc operating in an oxidizing gas atmosphere,
in combination with high plasma temperatures, leads to formation of NOx. The plasma
bulk temperature is normally 5000-6000 K, while the maximum temperature in the
plasma jet can be up to 10 000 K [39].
NOx formation in plasma torches has not been studied extensively, since most research
has focused on NOx emissions from conventional fuel combustion, where temperatures
usually are below 2000 °C. One study has been made on plasma torches applied to an
electric arc furnace [40]. These trials did not confirm any influence of the electric parame-
ters of the plasma torch on the formation of NOx. The influence was not detectable due to
other, more dominant parameters, including the composition of the furnace atmosphere.
Two common methods for industrial NOx reduction are flue gas recirculation (FGR) and
staged combustion. FGR involves extracting flue gases and mixing them with the combustion
air in order to lower its oxygen content as well as the combustion temperature.
Staged combustion is a reduction strategy that works by injecting additional
fuel in a secondary combustion zone. This creates a fuel-rich reburning zone where
NOx is destroyed through reactions with hydrocarbon radicals. These two reduction
strategies could probably also be applied to plasma torches with some modifications. For
example, a study by Uhm et al. [39] showed that NOx generated by a plasma torch can
be disintegrated in a fuel-burning atmosphere, with an exponential decrease in terms of
methane flow rate.
2.6 Process model
A process model has been developed during the previous Chalmers-LKAB collaboration
that can be used to simulate the straight-grate and grate-kiln processes. The model is
based around LKAB’s in-house bed simulation software BedSim. A Matlab framework
has been built around BedSim in order to perform additional calculations and make the
model more user friendly. The process model is a good tool for evaluating the effects of
modifying variables and changing the process configuration.
2.6.1 BedSim model
The BedSim model simulates the pellet bed throughout the process. The BedSim code
is confidential and cannot be accessed in this project. Because of this, BedSim will be
used as a "black-box" model, where a given input generates an output, but no changes
can be made to the calculations inside the model. BedSim has been validated against
small scale lab tests and has been proven to give accurate results regarding the gas and
bed properties.
The model performs calculations for chemical reactions as well as mass and heat transfer
between the process gas and the bed. The calculation approach used in BedSim is to
divide the bed into a number of vertical elements. These elements move through the
process in given time steps, where each time step represents a position along the grate.
The calculations are performed for every time step and the results are used as input for
the calculations in the next time step. The elements are also divided into horizontal layers
in order to account for changes in properties throughout the height of the bed. A number of
points in the hearth layer and the grate itself are also included. The calculation approach
in BedSim is presented in Figure 2.8.
The resolution in the time and height directions can be defined by the user depending on
the desired accuracy. However, the total time for the process cannot be specified directly;
instead, it is calculated based on input parameters such as production rate, zone
areas and bed height. Running and evaluating data from BedSim is quite complicated
and time consuming. It can be hard to keep track of different cases when varying input
parameters, and analyzing the results requires a lot of manual work.
Figure 2.8: Illustration of the calculation method used in the BedSim model.
2.6.2 Matlab framework
The Matlab framework has been developed in previous projects in order to make BedSim
more user friendly and simplify the analyzing work. Additional calculations are also
performed in the Matlab framework such as mass and energy balances and calculations of
energy consumption. The framework was originally developed for the grate-kiln process
but has since then been adapted for the straight-grate process.
The input data needed for BedSim is provided in an Excel sheet and can be classified into
general and zone-specific data. General data are constant parameters such as bed depth,
pellet radius and production rate. Zone-specific data are parameters such as gas flow
rate, area and temperature that changes between process zones. The output data from
BedSim is provided in a number of CSV files, where each row represents a time step and
each column represents a vertical position in the bed. The Matlab framework imports
this data and sorts it based on output variable, bed layer and process zone. The output
data is then visualized through a number of plots and certain important performance
indicators are presented to the user.
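As an illustration of this import-and-sort step, a minimal MATLAB sketch is given below. The CSV layout follows the description above, but the file name and the column orientation (whether column 1 is the bottom or top layer) are assumptions, since the framework itself is not public.

% Minimal sketch of importing one BedSim output variable.
% Rows are time steps (positions along the grate); columns are bed layers.
M = readmatrix('bed_temperature.csv');   % hypothetical output file name

nSteps   = size(M, 1);
botLayer = M(:, 1);      % assumed: first column = bottom layer
topLayer = M(:, end);    % assumed: last column = top layer

plot(1:nSteps, botLayer, 1:nSteps, topLayer);
xlabel('Time step (position along grate)');
ylabel('Temperature [degC]');
legend('Bottom layer', 'Top layer');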
Another important feature of this framework is the ability to connect gas streams between
process zones. When two zones are connected, iterations are performed in order to achieve
the same temperature in the outlet of the first zone as in the inlet of the second zone.
This significantly increases the simulation time due to the increased complexity of the
system.
Global mass and energy balances are carried out in order to make sure that the model
handles the data correctly. The balances compare the predefined input values with the
simulated output values and return the difference. The mass balance is solely based on
mass flows, while the energy balance is based on mass flows, temperatures and specific
heats of the different compounds. The energy generated or consumed by the chemical
reactions is calculated from the change of mass of the pellet bed and the heat of reaction.
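A stripped-down version of such a balance check is sketched below; the flows, temperatures and the single mean specific heat are placeholder values, and the reaction term described above is omitted for brevity.

% Simplified global mass and energy balance check (placeholder values).
m_in  = [120  80];   T_in  = [1000  300];   % inlet gas flows [kg/s], temps [degC]
m_out = [150  50];   T_out = [ 400  900];   % outlet gas flows [kg/s], temps [degC]
cp    = 1100;                               % assumed mean specific heat [J/(kg K)]

E_in  = sum(m_in  .* T_in)  * cp;   % sensible energy flow in [W]
E_out = sum(m_out .* T_out) * cp;   % sensible energy flow out [W]

fprintf('Relative mass imbalance:   %+.2f %%\n', 100*(sum(m_in)-sum(m_out))/sum(m_in));
fprintf('Relative energy imbalance: %+.2f %%\n', 100*(E_in-E_out)/E_in);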
3.1 Reference case
The reference case (REF) was based on information provided by LKAB for the process
in the MK3 straight-grate unit. The input data used in the reference case have been used
by LKAB in an earlier pot-furnace campaign but are not available in the public version of
this report.
In the real MK3 unit described in Section 2.2.1, a large recuperation hood is used to lead
the flow from the C1 zone to the firing zones (PH, F and AF). The problem with this
configuration is the difficulty of controlling the amount and direction of air into the firing
zones. In order to be able to make alterations to the process, more detailed sectioning
was required, where each cooling zone was connected to a specific firing zone.
The reference case therefore has 14 process zones instead of 8, where the F, AF and C2
zones were divided into two zones while the C1 zone was divided into five zones. The finer
sectioning makes it possible to match the temperature profiles between the cooling and
firing zones. An illustration of the connections between the process zones in the reference
case can be seen in Figure 3.2.
Figure 3.2: Schematic illustration of the connections between process zones in the reference case.
The model works by defining input data for the incoming flows and calculating output
data for the outgoing flows. The flow rates between connected zones were matched in the
input file. However, no process zones were directly connected in the simulation model,
since this significantly increased the complexity of the system as well as the simulation
time. Instead, the temperature differences between the flows from the C1 zone to the
firing zones were accounted for in the energy calculations. Here the energy required to
heat up the air to the desired temperature was calculated. This leaves the flows from C21
to UDD and AF to DDD. These were left unconnected but the results were monitored in
order to make sure that the temperature differences between outgoing and incoming flows
were not too large. The calculation approach used in the simulation model is visualized
in Figure 3.3.
Figure 3.3: Illustration of how the calculations are performed in the simulation model.
The red streams are left unconnected.
3.1.1 Performance indicators
The performance of the reference case served as a baseline for comparisons between
different process designs. The performance indicators selected in this study are presented
in Table 3.1.
Table 3.1: Selected performance indicators for the straight-grate process.

Indicator              Description
F_mag,mean [%]         Mean magnetite content at the end of the grate
F_mag,max [%]          Maximum magnetite content at the end of the grate
T_bed,mean [°C]        Mean temperature of the bed at the end of the grate
T_bed,max [°C]         Maximum temperature of the bed at the end of the grate
Q_fossil [MW]          Power requirement from fossil fuel sources
Q_el [MW]              Power requirement from electricity
E_tot [kWh/ton_p]      Total energy consumption per ton of pellets produced
The first parameters were the magnetite content, F_mag, and the bed temperature, T_bed,
at the end of the process. The mean and maximum values of these parameters were used
for assessing the pellet quality. The magnetite content was the main indicator of pellet
quality used in this study, since other parameters such as pellet strength and deformation
could not be simulated in the model. The magnetite content represents the share of the
pellets that has not been oxidized to hematite, meaning that a low magnetite content at
the end of the process is desirable. The temperature of the finished pellets is also important,
since too high a temperature can lead to problems with material handling equipment when
the pellets are prepared for transport. Higher temperatures also cause an increased amount
of dust from the pellets. It is generally positive to achieve a more homogeneous bed where
all of the pellets have a similar magnetite content and temperature; due to this, a lower
maximum temperature and magnetite content were also desirable.
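Given per-layer output at the end of the grate, the quality-related indicators in Table 3.1 reduce to simple statistics, as in the sketch below; the variable names and the example layer values are hypothetical.

% Quality indicators from the last time step (hypothetical per-layer values).
F_mag = [0.10 0.15 0.40 2.90];   % magnetite content per bed layer [%]
T_bed = [45 48 60 178];          % bed temperature per bed layer [degC]

fprintf('F_mag: mean %.3f %%, max %.3f %%\n', mean(F_mag), max(F_mag));
fprintf('T_bed: mean %.1f degC, max %.1f degC\n', mean(T_bed), max(T_bed));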
3.2 Microwaves
The methodology for evaluating the potential of implementing microwave heating in the
straight-grate process was to use the reference case, described in Section 3.1, in the
simulation model and modify it to fit the characteristics of applying microwave heating in the
UDD zone. A microwave case study was constructed focusing on recreating the trends in
the moisture and temperature profiles of the pellets observed in the study performed by
Athayde et al. [25], in an attempt to anchor the model in experimental results. The results
from the study by Athayde et al. are presented in Section 2.3.3. It is important to note
that some of the parameters that might be affected by microwave implementation, such
as thermal runaway, heating rate, crushing strength of the pellets and decreased pellet
deformation and clogging, cannot be measured with the simulation model and must
therefore be analyzed in practical tests.
The microwave heating was simulated via convective heat transfer in a downdraft flow
direction. In reality, the microwave heating and updraft drying would be in the same
zone, heating the bed from above and below simultaneously. In the simulation model
however, these two modes of heating had to be in separate zones due to their opposite
flow direction. Small microwave sections were placed evenly distributed across the length
of the UDD zone according to Figure 3.4, to approximate the behaviour of the two zones
being merged together. The simulation model could handle a maximum of 20 process
zones and since the reference case has 14 zones in total, there was room for three UDD
zones and four microwave zones. A larger number of microwave zones would provide more
accurate results, although the trends of using microwaves and UDD in combination were
still captured with this relatively small number of microwave zones.
Figure 3.4: (a) Original UDD (b) UDD and microwave zone configuration.
The configuration of the microwave zones that most accurately correlated with the moisture
and temperature profiles found in the study by Athayde et al. [25] was established
through a trial and error approach using the simulation model. The zone area, position, gas
flow and gas temperature in the microwave zones were changed until the model showed
results similar to the study.
The microwave zones were constructed to provide a very rapid change in temperature and
moisture content of the upper layers. The rapid change was necessary for the lower layers
of the bed to remain unaffected through the microwave zone to as large an extent as possible.
The performance of the microwave case study was then evaluated against the performance
indicators and process constraints described in Sections 3.1.1 and 3.1.2, respectively.
The amount of evaporated water in the UDD section increased when implementing
microwave heating compared to using only convective heating. It was therefore of interest
to investigate whether the gas flowing through the UDD zone could transport the extra
moisture away. The simulation model accounts for moisture saturation in the gas, but since
the UDD zones and microwave zones had to be separated when simulating the process, it
was necessary to calculate what moisture content the gas would have had when exiting
the UDD zone if the UDD zones and microwave zones were combined into one. This was
done by adding the moisture content of the gas leaving the microwave zones to that of the
gas leaving the UDD zones. Calculations were performed according to Equations A.4 and
A.5, and the results were manually compared to the maximum moisture carrying capacity
of dry air at different temperatures to evaluate whether the modeling results overestimate
the moisture carrying capacity in the UDD/Micro zone.
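The saturation comparison itself can be approximated with standard psychrometric relations, as sketched below. The Antoine coefficients for water are textbook values; the combined moisture load is a made-up example, since Equations A.4 and A.5 are given in the appendix and not reproduced here.

% Approximate moisture carrying capacity of air versus temperature.
T     = 40:10:90;                                    % gas temperature [degC]
p_sat = 10.^(8.07131 - 1730.63./(233.426 + T));      % Antoine eq., water [mmHg]
p_sat = p_sat * 133.322;                             % convert to [Pa]
P_tot = 101325;                                      % total pressure [Pa]

X_sat = 0.622 * p_sat ./ (P_tot - p_sat);  % saturation humidity [kg/kg dry air]
X_gas = 0.08;                              % example combined moisture load [kg/kg]

% 1 in the last column means the gas can carry the example moisture load.
disp(table(T.', X_sat.', X_sat.' >= X_gas, ...
     'VariableNames', {'T_C', 'X_sat', 'capacitySufficient'}));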
The power requirement for microwave generation was estimated using the study by
Athayde et al. [25] as a baseline. Since the modeling of the microwave heating in the
simulation model was constructed to mimic the results of the pellet moisture and
temperature gradients from the study by Athayde et al., it was considered reasonable to
extrapolate the power requirement used in these experiments and apply it to the real
process. The power intensity, q_potgrate, for the experiments performed by Athayde et al.
was calculated according to Equation A.1 and was then used to calculate the power
requirement for the LKAB process according to Equation A.2. Losses were included in the
calculations, but the magnitude of the losses in the microwave generation equipment and
to the surroundings was unknown; the estimated power requirement should therefore be
regarded as a rough estimate.
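Structurally, the scaling in Equations A.1 and A.2 amounts to an area-normalized power extrapolation, as sketched below. The pot-grate bed area and the overall efficiency are illustrative assumptions, not the (non-public) values used in the actual calculation, so the resulting figure is only indicative.

% Area-normalized scale-up of microwave power (cf. Equations A.1 and A.2).
P_exp = 10e3;    % microwave power in the pot-grate experiment [W] (from [25])
A_exp = 0.07;    % assumed pot-grate bed area [m2] (illustrative)
A_UDD = 21;      % UDD area of the microwave case study [m2]
eta   = 0.75;    % assumed overall efficiency incl. generation losses

q_potgrate = P_exp / A_exp;              % power intensity [W/m2]
Q_el       = q_potgrate * A_UDD / eta;   % electric power requirement [W]

fprintf('q = %.0f kW/m2, Q_el = %.1f MW\n', q_potgrate/1e3, Q_el/1e6);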
New process designs were developed with a trial and error approach, focusing on
improving the performance indicators through modifications enabled by microwave
implementation while complying with the process constraints. Some general guidelines for
modifications to the drying zones were found while experimenting with the simulation
model. It was possible to increase the energy input in the UDD zone without exceeding
the process constraints. The DDD zone showed much less room for increased energy input,
with the evaporation limit being the limiting constraint.
3.2.1 New process design A
The new process design A was focused on exploring the possible benefits of lengthening
the UDD section while avoiding recondensation in the higher layers of the bed by
implementing microwaves. It has a larger UDD/Micro zone and a smaller DDD zone compared
to the reference case. The UDD size was increased to 29 m2 from its previous 21 m2 and
the DDD size was decreased to 16.5 m2 from its previous size of 31.5 m2, resulting in a
7 m2 reduction in total grate area.
Due to the change in size of these zones, their respective gas flows were also modified, as
can be seen in Figure 3.5, where the lines marked in red indicate modifications compared
to the reference case. The reasoning behind the configuration in Figure 3.5 was to re-use
as much as possible of the energy in the available streams to increase the energy input
per square meter of grate, primarily in the UDD/Micro section but also slightly in the
DDD zone, until the evaporation rate constraint is reached. The temperature
in the DDD zone is the same as in the reference case, but the mass flow of gas per square
meter of grate is slightly increased. The gas flow from C22, which was previously sent to
the stack, is redirected to the UDD/Micro zone, which increases the mass flow through the
UDD/Micro zone heavily and reduces the temperature since it contains lower grade heat.
Figure 3.5: Illustration of connections between process zones. Gas flows which are
modified compared to the reference case are marked in red.
3.2.2 New process design B
The new process design B was an extension of design alternative A. It was possible to
reduce the magnetite content of the pellets by increasing the size of the first cooling zone,
C11. The size of C11 was increased by 7 m2 compared to design alternative A, making the
total area of all zones equal to that of the reference case. The increased size of C11 results
in a decrease in the gas flow per square meter of grate. Other than that, all input parameters
are the same as those of design alternative A.
3.3 Plasma torches
To study the effect of implementing plasma torches in place of fossil fuel burners in the
straight-grate process, a plasma torch case study was established. As mentioned in Section
2.4.5, the main differences when introducing plasma torches are the lack of a combustion
reaction and the addition of a working gas flow into the process gas. An overview of the
process streams considered for the oil burner and the plasma torch is presented in Figure
3.6.
To calculate the flow rate and composition of the process gas in the firing zones, certain
parametersfortheplasmatorcheshadtobespecified. Theseincludetorchpower, thermal
efficiency and working gas flow rate. The parameters used for the plasma case study are
based on a 2 MW industrial plasma torch from Westinghouse [32], presented in Table 3.3.
This type of torch was chosen since it has a similar power level to the oil burners currently
used in the MK3 unit.
Table 3.3: Parameters used for the plasma torch case study.

Parameter          Value    Description
P_t [MW]           2        Torch net electric power
U_a [V]            1000     Arc voltage
I_a [A]            2000     Arc current
η_th [%]           85       Thermal efficiency
V_gas [Nm3/h]      250      Working gas flow rate
When a simulation of the reference case had been run, the heating power required for
each firing zone could be calculated from the gas mass flow and the enthalpy difference
over the zone. The number of plasma torches and the corresponding working gas flow
could then be found. By specifying the composition of the plasma working gas and the
air from the cooling zone, the new composition of the process gas could be calculated.
Using the increased flow rates and the new process gas compositions as input data, a new
simulation could be run to simulate the operation of the process with plasma torches.
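The bookkeeping described here, from zone heating demand to number of torches and added working gas, can be sketched as follows; the zone heating demand is a placeholder value, while the torch parameters are those of Table 3.3.

% From zone heating demand to number of torches and working gas flow.
Q_zone = 5.2e6;   % heating power required by a firing zone [W] (placeholder)
P_t    = 2e6;     % torch net electric power [W] (Table 3.3)
eta_th = 0.85;    % thermal efficiency (Table 3.3)
V_gas  = 250;     % working gas flow per torch [Nm3/h] (Table 3.3)

n_torch = ceil(Q_zone / (P_t * eta_th));   % torches needed for this zone
V_total = n_torch * V_gas;                 % added working gas flow [Nm3/h]

fprintf('%d torches, %d Nm3/h working gas added\n', n_torch, V_total);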
An initial comparison was made between the reference case and a case with plasma torch
operation. Simulations were then performed with different working gases and flow rates
to analyze how these parameters affected the process. The BedSim model can only take
N2, O2, H2O and CO2 as input for the process gas. Therefore, it was not possible to test
common working gases such as argon and helium. The working gases that were used in
the simulations were air, N2, O2 and CO2. The base case had a working gas flow of 250
Nm3/h per plasma torch, but higher flow rates were also used.
3.4 Nitrogen oxide formation
The quantities of NOx produced by a plasma torch can be qualitatively estimated
by reaction modelling in Chemkin, based on established gas phase reaction kinetics. A
common type of reactor model used for combustion modelling is the one-dimensional plug
flow reactor (PFR). In the PFR, a temperature profile can be defined and the reactions
are calculated at different distances from the reactor inlet. This can be used to obtain
a detailed analysis of the chemistry at different positions in the reactor. Even though
no combustion occurs in a plasma torch, the same calculation approach should still be
applicable.
The idea was to start with a simplified model and to gradually make it more complex. This
gave an improved understanding of the underlying mechanisms and chemical reactions
that contribute to the formation of NOx. Since many parameters of the torches are
unknown, the study is of a qualitative rather than a quantitative nature, where the focus is
to find general trends rather than definite values of NOx emission levels. The results can
then be used as guidelines for future implementation of plasma torches when choosing
operating conditions and torch type for a certain application.
3.4.1 Reactor models
In order to study the production of NO in a plasma torch applied to the straight-grate
process, a PFR was used to represent the heating in the plasma gas channel and the
subsequent cooling of the plasma gas when it mixes with air. The two reactor models
that were used are illustrated in Figure 3.7. Since the exact dimensions, gas flows and
temperature profiles of the torches were not available, reasonable values were chosen based
on previous knowledge from the literature study. Due to this, most of the parameters were
subjected to a sensitivity analysis in order to study their effect on the results.
Figure 3.7: Reactor dimensions for the NoMix case (a) and for the MixTP and MixEE
cases (b).
The reactor in (a) is a simplified model with a constant cross-section and no mixing. This
model was used to study the amount of NO produced directly when the electric arc heats
the plasma gas in the no-mixing case (NoMix). The plasma zone is the part of the reactor
located inside the torch, where the plasma is heated up to the maximum temperature
and maintains this temperature until the start of the cooling zone. The temperature then
decreases linearly to the outlet temperature. The inlet temperature is 25 °C, the plasma
temperature is 5000 °C and the outlet temperature is 1000 °C. The outlet temperature
was chosen based on the temperature of the firing zones in the straight-grate process.
The reactor in (b) is a more complex model where mixing with air is considered. After
the plasma zone, the reactor diameter increases linearly from 5 to 20 cm, at the same
time as the mass flow increases linearly through successive mixing with air. This can be
interpreted as a reaction zone extending from the outlet of the plasma torch, where no
air is mixed with the plasma, towards the complete flow at the outlet. This model was
used to approximate the amount of NO produced when the air is heated up by the plasma
gas. Therefore, the working gas used in the plasma torch for these simulations was argon.
Argon is an inert gas meaning that no NO will be produced in the plasma zone. In reality,
the area of the outlet depends on the amount of mixing air, which affects the residence
time of the gas in the reactor. However, this dependency was not considered in order to
simplify the model.
35 |
Chalmers University of Technology | 3. Methodology
Two versions of the mixing case were simulated: the mixing case with a defined
temperature profile (MixTP) and the mixing case where the energy equation was solved to
calculate the temperature profile (MixEE). In the MixTP case, the temperature profile
was defined using the same temperature profile as in the NoMix case. In the MixEE case,
the inlet plasma and air temperatures were defined as 5000 °C and 1000 °C, respectively,
and the resulting temperature profile was calculated by the program. The reactor was
assumed to be adiabatic, meaning no heat flux out of the system. In both mixing cases,
the plasma gas flow was 10 g/s and the total air flow was 100 g/s. The total residence
time of the gas in the reactor was between 15-30 ms.
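As a sanity check on the MixEE setup, the fully mixed outlet temperature can be estimated from a constant-cp adiabatic energy balance; the specific heats below are rough assumed averages, so the result is only indicative.

% Adiabatic fully-mixed outlet temperature, constant-cp approximation.
m_p = 0.010;  T_p = 5000 + 273.15;  cp_p = 520;    % argon plasma [kg/s, K, J/(kg K)]
m_a = 0.100;  T_a = 1000 + 273.15;  cp_a = 1150;   % air [kg/s, K, J/(kg K)] (assumed cp)

T_mix = (m_p*cp_p*T_p + m_a*cp_a*T_a) / (m_p*cp_p + m_a*cp_a);
fprintf('Fully mixed temperature: %.0f K (%.0f degC)\n', T_mix, T_mix - 273.15);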
3.4.2 Validation and sensitivity analysis
In order to assess the reliability of the results, the simulation models were validated against
experimental results from tests performed by Cementa [41]. In the experiments, a 250
kW plasma generator from ScanArc was used and the gas analysis was made with a Testo
350. NOx and O2 were detected with an electrochemical cell and the CO2 was measured
with IR-technique. The amount of NOx was measured using CO2 as working gas with
different amounts of leakage air in the gas. Tests were made with 3, 6, 9 and 100 % air
in the working gas. The results of the experiments showed that the NOx concentration
was roughly proportional to the amount of air in the gas, and pure air as working gas gave
1856 ppm NOx.
The parameters chosen for variation in the sensitivity analysis were plasma gas temperature,
mixing zone length, air-to-plasma ratio and plasma gas flow rate. These parameters
were chosen since they were believed to have the largest influence on NO formation. For
each case, all of the other parameters were held constant and only the studied parameter
was varied. The upper and lower limits were determined by either reaching a very low
NO concentration or problems with the model reaching convergence.
Two methods for NO reduction were studied through simulations: reburning (staged
combustion) and reduction of the oxygen content of the mixing air through combustion. In
the reburning simulations, different amounts of methane (CH4) were added into the mixing
zone at a distance of 10 cm from the end of the plasma zone in order to destroy NO
through reactions with hydrocarbon radicals. In the oxygen reduction simulations, the
composition of the mixing air was changed assuming complete combustion of methane
before mixing with the plasma gas. These simulations were performed with a defined
temperature profile according to the MixEE case. However, in practice, the additional
combustion would cause a temperature increase, which might affect the NO reduction.
The methane flow rate was varied between 0-5 g/s for both reduction strategies. The
maximum amount of methane that could be combusted was around 5.8 g/s, based on the
amount of oxygen available.
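The 5.8 g/s figure follows directly from the stoichiometry of complete methane combustion (CH4 + 2 O2 → CO2 + 2 H2O) with the 100 g/s of mixing air; a quick check:

% Maximum methane that the mixing air can combust completely.
m_air = 100;            % mixing air flow [g/s]
w_O2  = 0.232;          % mass fraction of O2 in air
m_O2  = m_air * w_O2;   % available oxygen [g/s]

% CH4 + 2 O2 -> CO2 + 2 H2O: 2*32 g of O2 per 16 g of CH4
m_CH4_max = m_O2 * 16 / (2*32);
fprintf('Max methane flow: %.1f g/s\n', m_CH4_max);   % about 5.8 g/s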
4 Results and discussion
In this chapter, the results from the case studies are presented and analyzed. To begin
with, the performance of the reference case is presented, which serves as a baseline for
comparisons between process designs. In the following sections, the results of the plasma,
microwave and NOx simulations are presented.
4.1 Reference case
One of the most important parameters is the pellet temperature since it affects the rate of
oxidation and sintering. The pellet temperature for the reference case is shown in Figure
4.1. There is a major temperature difference between the top and bottom layers of the
bed. For example, in the beginning of the PH zone, the top layer has reached over 1000
°C while the temperature in the bottom layer is almost unchanged. The reason for this
is that the downdraft flow heats the upper layers first which also causes the oxidation to
start there first.
In order for all the magnetite to oxidize, the pellet temperature needs to stay within a
certain temperature interval for a sufficient amount of time. A higher temperature is
generally better, but above 1200 °C the reaction will slow down. This is the reason for
the bump in the line for the bottom layer in Figure 4.2. At the end of the AF2 zone, the
temperature becomes too high and the oxidation almost stops. However, when the bed
enters the cooling zone, the temperature is reduced and the oxidation starts again for some
time.
Figure 4.3 shows the gas temperature for the reference case. The inlet process gas
temperature at the beginning and end of a zone is defined in the input file. The gas will then
be cooled or heated by heat transfer with the bed. When the gas is supplied from above,
the temperature will not change before reaching the bed. However, if the gas is supplied
from below, it will be heated up by the grate and the hearth layer before reaching the
pellets.
Figure 4.6: Temperature in the top layer of the grate throughout the process for the
reference case.
4.2 Microwaves
This section presents the results from process simulations with microwaves, starting with
the microwave case study and later presenting results from new process designs.
4.2.1 Microwave case study
The microwave case study has the addition of the microwave energy input in the UDD
zone. Table 4.3 shows the measured values of pellet temperature and moisture content
presented in the study performed by Athayde et al. [25], and Table 4.4 shows the values of
pellet temperature and moisture content at the end of the UDD section for the reference
case and the microwave case study from the process model. Measurements by Athayde et
al. recorded a larger temperature difference between their reference case and microwave
assisted case. The process gas used in these experiments was warmer, which provides
less of a cooling effect, leading to lower thermal gradients and less driving force for heat
transfer between pellets and process gas, resulting in higher pellet temperatures. It is also
possible that the process model underestimates the temperature increase of the pellets
affected by microwaves. However, the trends are similar and the values are reasonable in
comparison, which reinforces the credibility of the results from the microwave case study.
Figures 4.9 and 4.10 show the temperature profile of the pellets in the drying zones of the
reference case and the microwave case study. The zone transitions between the microwave
zones and the UDD zones are more prominent in the temperature profile than in the moisture
content, as can be seen by comparing Figures 4.10 and 4.8. The pellet temperature drops
to almost the same level as in the reference case before spiking again when switching to
microwave mode. It is believed that the process model underestimates the temperature
profile seen as an average across the whole UDD/Micro zone, due to UDD being the
dominating operational mode.

Figure 4.9: Reference case temperature profile. Figure 4.10: Microwave case temperature profile.
Calculations of the moisture content of the gas exiting the UDD/Micro section were
performed according to Equations A.4 and A.5. Results showed that the gas is slightly
oversaturated when exiting, implying that the process model overestimates the amount of
moisture that can be transported from the UDD/Micro zone. However, the maximum
moisture carrying capacity of air increases exponentially with temperature, which implies
that a small change in temperature has a large impact on the moisture carrying capacity
of the gas. As previously mentioned, it is believed that the temperature profile of the
pellets is underestimated by the process model in the microwave case study. Assuming
that the gas is heated to a temperature close to that of the pellets when passing through
would indicate that the ability of the gas to transport the moisture away is sufficient.
Practical tests are necessary to further investigate this matter. However, it is a problem
that can be managed by increasing the temperature of the gas exiting the UDD/Micro
zone, increasing the mass flow of gas, or a combination of the two.
The electric power requirement, Q_el, for microwave generation was calculated according
to Equation A.2 and was found to be 5.36 MW for the microwave case study, which
has a UDD area of 21 m2. This is a considerable amount of power in comparison to
the power requirement of the whole process. Losses are included when calculating the
power requirement for the microwaves, while there are losses for the oil burners which are
not captured in the process model. It is known that LKAB uses 5 liters of oil per ton of
finished pellets in normal operation of the MK3 straight-grate unit, which is the basis for
the reference case [14].
This amounts to a power requirement of 24.34 MW, while the total power demand for
the reference case calculated by the process model only amounts to 15.06 MW, which is a
considerable difference. This should be kept in mind when assessing the power requirement
of the microwaves.
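For reference, the 24.34 MW figure is consistent with a simple back-calculation from the specific oil consumption; the oil heating value and the production rate below are assumptions chosen to be plausible for the MK3 unit, not published plant data.

% Back-calculation of fossil power from specific oil consumption.
oil_per_ton = 5;        % oil consumption [l per ton pellets] (from the text)
LHV_oil     = 35.8e6;   % assumed lower heating value of fuel oil [J/l]
prod_rate   = 490;      % assumed production rate [ton/h]

Q_fossil = oil_per_ton * LHV_oil * prod_rate / 3600;   % [W]
fprintf('Fossil power: %.2f MW\n', Q_fossil/1e6);      % about 24.4 MW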
The performance indicators of the microwave case study are presented in Table 4.5, next
to the performance indicators of the reference case. The power requirement is the most
noticeable difference between the two cases while the rest of the performance indicators
show only marginal changes. The power supplied from the microwaves, Q_el, has almost no
impact on reducing the fossil power demand, Q_fossil. This is likely because much of the heat
supplied by the microwaves is absorbed by the gas flow leaving the UDD zone. It is a
considerable energy loss source. The temperature of the gas flow leaving the UDD zone
would likely be around, or even below, 100 °C assuming that it reaches approximately the
same temperature as the pellets in the top layer of the bed. This is low-grade heat which
has limited application areas.
Table 4.5: Performance indicators for REF and microwave case study.

Indicator            REF     Micro
F_mag,mean [%]       0.218   0.206
F_mag,max [%]        2.857   2.756
T_bed,mean [°C]      49.98   50.62
T_bed,max [°C]       177.83  179.56
Q_fossil [MW]        15.06   15.00
Q_el [MW]            0       5.36
E_tot [kWh/ton_p]    33.46   45.24
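The specific energy figures in Table 4.5 (and in the later performance tables) are consistent with the total supplied power divided by the production rate. A minimal sketch, assuming the 450 ton_p/h reference production rate listed later in Table 4.12:

```python
def specific_energy(q_fossil_mw, q_el_mw, p_rate_tph):
    """Total specific energy demand E_tot [kWh/ton pellets]:
    E_tot = (Q_fossil + Q_el) / p_rate, with MW converted to kW."""
    return (q_fossil_mw + q_el_mw) * 1000.0 / p_rate_tph

print(specific_energy(15.06, 0.0, 450))   # REF:   ~33.47 kWh/ton
print(specific_energy(15.00, 5.36, 450))  # Micro: ~45.24 kWh/ton
```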
The values of the constraint parameters for the microwave case study lie well within the
limits, as can be seen in Table 4.6. The most noticeable change is the decrease in maximum
evaporation rate. This maximum occurs in the higher layers of the bed at the start of the
DDD section, which is the same as for the reference case. Less moisture content in the
pellets at this point seems to help reduce the maximum evaporation rate.
Table 4.6: Process constraints for the microwave case study.

Constraint          REF    Micro  Limit
Δp_bed [Pa]         7260   7265   8000
T_fan [°C]          348.1  349.6  400
T_grate [°C]        480.8  488.8  500
r_evap [kg/m³/s]    2.329  2.27   2.5
The results indicate that microwave implementation is ineffective without further design
changes of the process since the total power requirement increases substantially with
almost no reduction in fossil fuel demand while the remaining performance indicators are
virtually unaffected.
4.2.2 New process designs
The results for the new process designs A and B are presented side by side in this section
since there are many similarities in the configuration of the two cases, as described in
Section 3.2.1 and 3.2.2.
The drying sections of process designs A and B are the same, meaning that Figures 4.11
and 4.12 apply to both cases. Figure 4.11 shows the moisture content of the pellets
and Figure 4.12 shows the pellet temperature in the drying section of the process. When
comparing these to the reference case in Figures 4.7 and 4.9, it can be seen that the
increased energy input in the UDD/Micro section results in a quicker drying process and
a higher temperature in the lower layers of the bed.
Figure 4.11: New process design moisture content.
Figure 4.12: New process design pellet temperature.
The microwave energy input was kept constant even when changing the size of the UDD
section in the new process designs A and B. The new process designs have a UDD area
which is larger than in the microwave case study. This leads to a lower power intensity
from the microwaves in the UDD zone, making the moisture and temperature profiles
conservative compared to the microwave case study. This choice was made because it was
difficult to extrapolate the effect that the microwaves had on the moisture and temperature
profiles. Being on the conservative side makes it possible to assume that cracking of pellets
due to rapid moisture evaporation does not occur as a result of microwave heating, since it
was not observed in the study by Athayde et al. [25] where a higher power intensity was used.
Performance parameters for design alternatives A and B are presented in Table 4.7. The
total grate area, A_grate, differs between the designs and was therefore added as an
additional performance indicator. Design alternative A shows pellet outlet temperatures
and magnetite content similar to the reference case, with a lower total grate area and a
higher total power demand. The fossil power demand, Q_fossil, is decreased by 0.51 MW
compared to the reference case, which is small in comparison to the increased power
demand for electricity, Q_el.
Design alternative B shows the lowest magnetite content out of all design alternatives.
This is due to the added grate area in the first cooling zone, C11, which results in a slightly
slower cooling process and thereby a longer residence time in the temperature interval
where magnetite conversion occurs. The fossil power demand is decreased by 0.98 MW
compared to the reference case, which is again small in comparison to the increased power
demand from electricity. It is clear that microwaves can replace a small amount of the
high-temperature energy in the process by supplying heat in the low-temperature range,
which is otherwise covered by the high-temperature energy flows that must be added to
the process for magnetite conversion and sintering. Implementation of microwaves
decreases the energy efficiency of the process, although it does provide potential for CO₂
reduction depending on how the electricity is produced. A financial analysis is needed to
evaluate whether there is sufficient financial gain in reducing the grate area and fossil
power demand to make design alternatives A and B economically viable.
Table 4.7: Performance indicators for the new process designs.

Indicator            REF     Design A  Design B
F_mag,mean [%]       0.218   0.208     0.062
F_mag,max [%]        2.857   2.679     0.92
T_bed,mean [°C]      49.98   50.83     49.65
T_bed,max [°C]       177.83  180.05    177.43
Q_fossil [MW]        15.06   14.55     14.08
Q_el [MW]            0       5.36      5.36
E_tot [kWh/ton_p]    33.46   44.24     43.21
A_grate [m²]         315     308       315
Table 4.8 shows the process constraint values, which are within the limits. The evaporation
rate, r_evap, is at the maximum limit. This occurs in the top layer of the bed at the
beginning of the DDD zone.
Table 4.8: Process constraints for the new process designs.

Constraint          REF    Design A  Design B  Limit
Δp_bed [Pa]         7260   7485      7485      8000
T_fan [°C]          348.1  285.43    282.98    400
T_grate [°C]        480.8  491.24    491.16    500
r_evap [kg/m³/s]    2.329  2.5       2.5       2.5
4.3 Plasma torches
This section describes the results from the simulations of plasma torches applied to the
straight-grate process. A plasma torch case study is established, which is used to analyze
the effect of different plasma working gases and working gas flows. Finally, an optimization
is made which proposes two methods for improving the energy efficiency of the process
after implementation of plasma torches.
4.3.1 Plasma torch case study
The calculated parameters for the plasma torch case study include total heating power
requirement for the firing zones, the number of plasma torches and flow rate increase,
presented in Table 4.9. The number of torches is only used to estimate the total working
gas flow rate.
Table 4.9: Calculated parameters for the plasma torch case study.
Parameter Value
Total power required [MW] 15.06
Number of plasma torches 8.86
Total air flow [Nm3/s] 119.39
Total working gas flow [Nm3/s] 0.62
Flow rate increase [%] 0.52
For the plasma torch case study, a working gas of air is used with a flow rate of 250 Nm3/h
per torch. This means that the flow rate of the process gas increases but the composition
is not affected. A flow rate of 250 Nm3/h is relatively low in comparison to the air flows
in the firing zones, as can be seen in Table 4.9. This means that the introduction of
working gas into the process gas only has a marginal effect that is unlikely to affect the
performance of the process as a whole to a significant degree.
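The per-torch electric power is not stated in this excerpt; assuming roughly 1.7 MW per torch (which is what Table 4.9 itself implies, 15.06 MW over 8.86 torches), the table values can be reproduced as a quick sanity check:

```python
# Sketch reproducing Table 4.9. The 1.7 MW per-torch rating is an assumption
# inferred from the table (15.06 MW / 8.86 torches), not a stated figure.
Q_total_MW = 15.06        # total heating power required in the firing zones
P_torch_MW = 1.7          # assumed electric power per plasma torch
q_per_torch = 250.0       # working gas flow per torch [Nm3/h]
air_flow = 119.39         # total air flow in the firing zones [Nm3/s]

n_torches = Q_total_MW / P_torch_MW             # ~8.86
gas_flow = n_torches * q_per_torch / 3600.0     # ~0.62 Nm3/s
increase = 100.0 * gas_flow / air_flow          # ~0.52 %
print(f"{n_torches:.2f} torches, {gas_flow:.2f} Nm3/s, +{increase:.2f} %")
```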
The performance of the plasma torch case is compared with the reference case in Figures
4.13 and 4.14. Some temperature deviations can be seen in the firing zones; however, the
curves are more or less identical. The magnetite mass fraction decreases slightly faster
for the plasma torch case compared to the reference case. The final magnetite content in
the bottom layer is 2.50 % as can be seen in Table 4.10, which is 0.35 % lower than for
the reference case. This indicates a small performance improvement, which can mostly
be explained by the increased amount of oxygen in the process gas due to the lack of
combustion. When oxygen working gas is used instead of air, the magnetite content
becomes 2.41 %, which is a further improvement of 0.09 %.
Table 4.10: Performance indicators for the plasma torch case study.

Indicator            REF     Plasma
F_mag,mean [%]       0.218   0.173
F_mag,max [%]        2.857   2.504
T_bed,mean [°C]      49.98   50.53
T_bed,max [°C]       177.83  179.10
Q_fossil [MW]        15.06   0
Q_el [MW]            0       14.40
E_tot [kWh/ton_p]    33.46   32.00
In order to achieve significant changes to the process conditions, higher working gas flow
rates are required. To study the effects of different working gases, the flow rates are
increased by a factor of ten to 2500 Nm³/h per torch, keeping the same torch power.
This results in a process gas flow rate increase of around 5.2 %. Cases with air, N₂, O₂
and CO₂ as working gas are simulated. It is found that all of the gases except O₂ give
similar results. Therefore, only air and O₂ are presented in Figures 4.15 and 4.16. As
can be seen in Table 4.11, the maximum magnetite content at the end of the process is
2.28 % for air but decreases even further to 0.60 % for oxygen. This indicates that using
pure oxygen as working gas in the plasma torches is a good choice since it increases the
oxidation rate and improves the quality of the pellets. However, increasing the oxygen
content in the process gas also leads to a higher temperature of the finished pellets and
a higher grate temperature. The maximum grate temperature is 543 °C, which is slightly
above the constraint.
Figure 4.15: Pellet temperature throughout the process for the plasma torch cases with
increased flow rates.
4.3.2 Optimization of plasma torch case
The first strategy for optimization of the plasma torch case is to lower the mass flow of
gas in the C15-PH zones. The PH zone has the largest energy requirement and thus a
large potential for improvement in terms of energy efficiency. This results in lower energy
consumption but also a lower magnetite conversion. The mass flow was lowered by around
11 % until F_mag,mean reached the same value as the reference case. This resulted in a
decreased energy consumption of 18 % compared to the reference case with the same
pellet quality.
The process can also be optimized with respect to the production rate. An increased
production rate leads to a faster process with less time in each zone and thus lower mag-
netite conversion. In a similar way, the production rate is increased until the magnetite
content is identical to the reference case. The production rate could then be increased by
around 3 % before reaching the same magnetite content. This also gives a similar energy
consumption to the first optimization strategy; however, an increase in the maximum
bed temperature is observed. With a faster process, the bottom layer is exposed to high
temperatures during a shorter time period, which might affect the sintering process nega-
tively. The optimized versions of the plasma torch case are presented in Table 4.12, where
Opt1 is the case with decreased C15-PH mass flow and Opt2 is the case with increased
production rate.
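Both strategies amount to a one-dimensional matching problem: adjust a single control variable until F_mag,mean equals the reference value. A hedged sketch of how that search could be automated is given below; `run_model` is a hypothetical stand-in for one evaluation of the process model, not an actual BedSim interface.

```python
def match_reference(run_model, f_ref, lo, hi, tol=1e-4, increasing=True):
    """Bisect a control variable x in [lo, hi] until F_mag,mean(x) ~= f_ref.

    `increasing` states whether F_mag,mean grows with x; it does, for
    example, when x is the production rate, since a faster grate shortens
    the residence time and leaves more magnetite unconverted.
    """
    mid = 0.5 * (lo + hi)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        f = run_model(mid)
        if abs(f - f_ref) < tol:
            break
        if (f < f_ref) == increasing:
            lo = mid   # conversion margin left; push the variable further
        else:
            hi = mid
    return mid
```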
Table 4.12: Performance indicators for the optimization of the plasma torch case.

Indicator            REF     Opt1    Opt2
F_mag,mean [%]       0.218   0.218   0.218
F_mag,max [%]        2.857   2.961   2.972
T_bed,mean [°C]      49.98   55.05   56.12
T_bed,max [°C]       177.83  179.10  201.91
Q_fossil [MW]        15.06   0       0
Q_el [MW]            0       12.32   12.33
E_tot [kWh/ton_p]    33.46   27.39   26.61
p_rate [ton_p/h]     450     450     463.5
4.4.4 Model validation
In order to assess the reliability of the results, the simulation models are validated against
experimental tests, and the results are presented in Table 4.13. The NoMix case shows
higher NO concentration for more air; however, the NO concentration is 10-20 times higher
than in the experiments. For the MixTP case, the working gas composition has barely
any effect on NO concentration. Some NO is produced in the plasma zone, but an absolute
majority of the NO is produced in the mixing zone. For the MixEE case, however, the
NO production in the plasma zone dominates and determines the final NO concentration.
Out of the three simulation models, the MixEE case matched the experiments best.
Since the amount of mixing air in the experiments is not known, this value can be varied
in order to match the experimental results. The adapted mixing case (MixAdapt) is the
MixEE case where the amount of mixing air was increased from 100 g/s to 215 g/s, giving
an air-to-plasma ratio of 21.5. The MixAdapt case gives results that are very similar to
the experiments, where only the concentration for 100 % air is off, by around 400 ppm.
This means that the experimental results can be reproduced using the simulation model
that has been developed in this project which is a good indication that the model is
relatively accurate and that the results are trustworthy.
Table 4.13: Comparison of NO concentration [ppm] between the experiments and the
simulation models with different amounts of air in CO₂ working gas.

Air [%]  Experiments  NoMix  MixTP  MixEE  MixAdapt
3        303          7058   17222  857    299
6        402          9961   17227  1066   404
9        482          11612  17231  1237   489
100      1856         21639  17464  5149   2243
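The MixAdapt tuning described above is a one-parameter fit. A sketch of how it could be automated is shown below; `simulate_no` is a hypothetical wrapper around one evaluation of the Chemkin reactor model, not an actual API.

```python
import math

# Experimental NO concentrations from Table 4.13: air fraction [%] -> [ppm].
experiments = {3: 303, 6: 402, 9: 482, 100: 1856}

def fit_mixing_air(simulate_no, candidate_flows_gps):
    """Pick the mixing-air flow minimizing the squared log-error vs data."""
    def err(flow):
        return sum((math.log(simulate_no(flow, air)) - math.log(ppm)) ** 2
                   for air, ppm in experiments.items())
    return min(candidate_flows_gps, key=err)

# e.g. fit_mixing_air(simulate_no, range(100, 301, 5)) would land near the
# 215 g/s reported for the MixAdapt case.
```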
4.4.5 Sensitivity analysis
The sensitivity analysis is performed for the MixEE case since it most closely resembles
the experiments. According to Figure 4.23, an increased plasma temperature results in
a higher NO volume fraction. If the plasma temperature is 3000 °C, there is practically
no residence time at temperatures above 2000 °C, resulting in a low NO concentration.
Significant amounts of NO are produced when the plasma temperature reaches over 4000
°C and the concentration continues to increase with the temperature above this. Since
high temperatures are one of the most important properties of a plasma torch, it may not
be possible to alter the temperature in order to lower the NO formation. As can be seen
in Figure 4.24, the mixing zone length also affects the NO formation but the effect is not
as large. Mixing the same amount of air in a longer zone gives a longer residence time
and the temperature drops slower. The NO concentration increases with a longer mixing
zone but seems to reach a maximum value of around 1500 ppm. This indicates that it is
important to achieve good mixing between the plasma gas and the air in order to keep
the mixing zone short and the NO concentration low.
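The exponential temperature dependence comes from the rate-limiting Zeldovich step N₂ + O → NO + N. A minimal sketch of the initial thermal NO rate is given below; the rate constant is the commonly cited k₁ = 1.8·10¹⁴·exp(−38370/T) cm³/(mol·s), and the O-radical mole fraction is an assumed constant here rather than the local equilibrium value a reaction model would use.

```python
import math

R_ATM = 82.06  # gas constant [cm3*atm/(mol*K)]

def thermal_no_rate(T, x_n2=0.7, x_o=1e-4, p_atm=1.0):
    """Initial thermal NO rate d[NO]/dt ~ 2*k1*[O][N2], in mol/cm3/s."""
    c = p_atm / (R_ATM * T)                 # total concentration [mol/cm3]
    k1 = 1.8e14 * math.exp(-38370.0 / T)    # rate-limiting Zeldovich step
    return 2.0 * k1 * (x_o * c) * (x_n2 * c)

for T in (2000, 2500, 3000, 4000):          # [K]
    print(f"{T} K: {thermal_no_rate(T):.1e} mol/cm3/s")
# Each few-hundred-kelvin step raises the rate by one to two orders of
# magnitude, which is why residence time at the highest temperatures
# dominates the NO yield.
```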
Figure 4.23: Effect of plasma temperature on NO concentration.
Figure 4.24: Effect of mixing zone length on NO concentration.
The parameter that has the largest impact on the NO concentration is the air-to-plasma
ratio (the amount of mixing air in relation to plasma gas). The effect of the air-to-plasma
ratio on NO concentration and NO formation can be seen in Figures 4.25 and 4.26. At an
air-to-plasma ratio of 1, the NO concentration reaches over 10 000 ppm (1 %). When more
cooling air is introduced into the reactor, the velocity of the gas increases, resulting in a
shorter residence time and less time at high temperatures. Another effect of introducing
more air is that the overall temperature level in the reactor is decreased, which works in
the same direction. Since the total flow rate is changed, the NO concentration does not
provide information about the total amount of NO produced. Therefore, a second plot is
produced where the mass flow of NO is calculated by relating the mass fraction of NO to
the total mass flow. This plot shows a similar trend where less air gives more NO; however,
the relationship is more linear than exponential.
Figure 4.25: Effect of air-to-plasma ratio on NO concentration.
Figure 4.26: Effect of air-to-plasma ratio on NO formation.
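The conversion behind Figure 4.26 is a straightforward bookkeeping step: turn the mole fraction into a mass fraction and multiply by the total mass flow. A minimal sketch, using the molar mass of air as an estimate for the bulk gas:

```python
M_NO, M_AIR = 30.01, 28.97  # molar masses [g/mol]

def no_mass_flow(x_no_ppm, m_dot_total_gps, m_bulk=M_AIR):
    """NO mass flow [g/s] from a mole fraction [ppm] and total flow [g/s]."""
    y_no = x_no_ppm * 1e-6 * M_NO / m_bulk   # mole -> mass fraction
    return y_no * m_dot_total_gps

# Doubling the dilution halves the concentration but not the NO mass flow:
print(no_mass_flow(1000, 200))  # ~0.21 g/s
print(no_mass_flow(500, 400))   # ~0.21 g/s
```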
Finally, the plasma flow rate is varied while the air-to-plasma ratio is kept the same. The
effect of the plasma gas flow on NO concentration and NO formation can be seen in Figures
4.27 and 4.28. The trends are similar to those for the air-to-plasma ratio. However, at
plasma flow rates lower than 10 g/s, the NO concentration increases while the NO
formation starts to decrease. The flow rate of 250 Nm³/h used in the plasma case study
corresponds to a plasma gas flow of around 85 g/s. According to Figure 4.27, the NO
volume fraction is then less than 50 ppm.
Figure 4.27: Effect of plasma gas flow rate on NO concentration.
Figure 4.28: Effect of plasma gas flow rate on NO formation.
4.4.6 Working gases and NO reduction
Simulations are performed to show the effect of the plasma working gas on the formation
of NO. The temperature profile is defined according to the results from the MixEE case
since the program sometimes automatically adjusted the temperature to a lower level.
The results from the working gas simulations are presented in Figure 4.29. The results
show that both more nitrogen and oxygen atoms in the working gas seem to increase the
amount of NO that is produced. Some of the differences can also be explained by the
difference in molar mass, for example, 10 g/s is a smaller molar flow for Ar due to the
larger molar mass compared to the other compounds. Hydrogen produces almost no NO
since the hydrogen atoms react with NO to form OH according to reaction 2.8. Other
than hydrogen, argon and carbon monoxide are the gases that produce the least amounts
of NO.
Figure 4.29: NO mole fraction for common plasma working gases.
NO reduction is studied through simulations of reburning and oxygen reduction. The
results from these simulations are presented in Figure 4.30. The results show that reburning
is the more effective of the two strategies, achieving lower NO concentrations for all
methane flow rates. For low methane flow rates, oxygen reduction through combustion
even increases the NO concentration. The reason why oxygen reduction does not work as
expected is that the oxygen atoms are not removed from the gas, but instead used to form
H₂O molecules. At high temperatures, these water molecules dissociate into oxygen
and hydrogen radicals. Since oxygen radicals are present, NO formation is expected.
In these simulations, a methane flow rate of around 3 g/s seems to be optimal for reburn-
ing. Adding more than 3 g/s does not have any significant effect on the NO concentration.
When methane is combusted, hydrocarbon radicals are formed that will react and destroy
NO. The combustion of methane produces its own nitrogen oxides, however, the concen-
trations are much lower than those from the plasma torch, resulting in a net reduction of
NO. A reduction of up to 65% is achieved in the simulations, indicating that reburning
could be an effective measure for reduction of nitrogen oxides from plasma torches.
Figure 4.30: Effect of methane flow rate for the two reduction strategies.
5 Conclusions
This project investigates the potential of replacing fossil fuel burners with electric heating
in the straight-grate process by modifying a previously developed process model. The
evaluation is based on a reference case with performance indicators and process constraints
provided by LKAB for the straight-grate unit MK3. The results show that implementation
should be possible for both plasma torches and microwaves, with mostly positive effects
on performance indicators and without running into process constraints. The only
performance indicator that requires attention is the NO_x emissions from plasma torches.
Results from simulations with microwave heating correlated well with previous experience
of microwave-assisted drying of iron ore pellets in lab scale, providing legitimacy to the
results. Implementing microwaves did not significantly affect the performance parameters
directly evaluated by the process model. Recondensation of water vapour in the bed could
be avoided, indicating less deformation and clogging of pellets due to over-wetting. Design
modifications utilizing the benefits of microwave heating allowed for a 7 m² reduction of
the drying section while reducing the fossil power demand by almost 1 MW and reducing
the maximum magnetite content of pellets by almost 2 %. However, around 5 MW of
electricity is needed for microwave generation, decreasing the overall energy efficiency.
There is a limited emission reduction potential in this application, which could be viable
if electricity is readily available at low cost with a low emission footprint.
Implementation of plasma torches may be an alternative for electrification of the straight-
grate process and can decrease the CO₂ emissions from the process significantly. Retrofitting
plasma torches in place of fossil fuel burners in a pelletizing plant with a 40 MW thermal
input has the potential to reduce CO₂ emissions by up to 140 000 tons per year. It has
been concluded that the implementation of plasma torches in the straight-grate process
only has a small effect on process parameters and energy consumption. With plasma
torches, there will not be any combustion reaction and thus no combustion products in
the process gas. This leads to a slightly higher oxygen content in the gas, which results in
a higher oxidation rate of the pellets. There will also be an introduction of an external
plasma-forming gas, which increases the process gas flow and changes its composition.
However, this will only affect the process conditions significantly at very high flow rates.
With increased oxidation, the process can be optimized, for example towards decreased
energy consumption or increased production rate.
The only negatively affected performance indicator for the plasma torches is the NO_x
emissions. Reaction modelling in Chemkin was used to study the formation of nitrogen
oxides from plasma torches applied to the straight-grate process. A simplified reactor
model was developed with a plasma zone and a mixing zone. In conclusion, the results
show that significant amounts of NO are produced when hot plasma gas mixes with air.
The formation of NO is dominated by the thermal NO mechanism (Zeldovich reactions),
and the reaction rate increases exponentially with temperature. Validation using previous
experiments shows that the MixEE case gives results that most closely resemble reality.
If the flow of mixing air is increased by 115 %, the experimental results can be reproduced
with good accuracy. This model was therefore used in a sensitivity analysis and to study
the effects of different plasma gases and NO reduction strategies. The sensitivity analysis
showed that higher plasma temperature, longer mixing zone and lower flow rates generally
lead to increased NO formation. Reburning was the most promising of the studied
reduction strategies, with NO reduction of up to 65 % in the simulations.
5.1 Recommendations for future work
The possibilities of utilizing the BedSim model to study the effects of implementing mi-
crowaves and electric plasma torches have been thoroughly investigated in this study.
Further work in this area should instead be experimental tests to prepare for practical
implementation. The focus should be on evaluating the data from the experiments and
comparing it to the modelling results. The experiments will also be important in order
to study parameters that can not be simulated in the bed model such as sintering, pellet
strength and gas mixing. This will hopefully lead to an increased understanding of the
process and proposals for concepts that are ready for full-scale testing.
Further investigations should be performed towards the implementation of microwaves
focusing on evaluating under which circumstances microwave implementation would be
viable from a financial and emission reduction point of view. An initial financial analysis
would indicate if the reduced emissions due to less fossil fuel use and improved product
quality can outweigh the increased total power demand of the process. If microwaves
prove to be financially viable, the next step would be to perform practical tests with
lab scale equipment using varying frequency and power intensity to further evaluate the
possibilities and limitations of microwave implementation.
The possibility of using microwaves in combination with downdraft drying should also
be investigated. Microwaves could possibly be used in combination with a gas flow of
low-grade heat in the DDD zone which would cool the higher layers of the bed to prevent
them from overheating and transport heat to the lower layers of the bed. This method
could potentially be more energy efficient than microwaves in combination with UDD
since the heat provided by the microwaves would be transported down through the bed
by the downdraft gas flow. The bed would then be able to retain the heat provided by
the microwaves through the process to a larger extent.
For plasma torches, there are some important details that can not be simulated with the
process model. One example is the mixing process of the plasma and the recirculated air.
Since the plasma is much hotter than a combustion flame it becomes more important for
the process gas to be homogeneous before reaching the pellet bed. This could otherwise
lead to uneven oxidation of the pellets. Due to this, the mixing process should be studied
further through fluid dynamics simulations or practical experiments. When finally imple-
menting plasma torches in the real process, it will be a good idea to begin by replacing
only one pair of burners. In this way, the investment cost can be spread out and changes
in pellet quality can be evaluated. If no major changes are observed, more burners can
be replaced until eventually the process operates with plasma torches only.
It is also important to remember that there are many uncertainties regarding the NO
simulations. First of all, reaction modelling might not be able to accurately describe the
behaviour of plasma. At extremely high temperatures the particles become ionized which
affects the chemical reactions. However, to which degree this would affect the results
compared to normal gas-phase reaction kinetics is still unclear and needs to be studied
further. Secondly, more data on different types of plasma torches, including dimensions,
mass flows and plasma temperatures, would allow for simulations that can be used to better
predict the absolute amounts of NO formation. Due to these uncertainties, it would be a
good idea to measure the production of nitrogen oxides from plasma torches in a test rig
before implementation. The NO formation could then be measured for different operating
conditions to find out what works best in practice.
5.2 New process concept
As a final part of this project, a new process concept is proposed where both microwaves
and plasma torches are utilized to supply the energy to the process. When working on
the electrification of the process, it is important to have a holistic view of the entire steel
production chain. This concept is an integration between a straight-grate pelletization
plant and a direct-reduction plant. It is a futuristic vision of how new technologies could
be combined in order to produce CO₂-neutral steel. The new process concept is presented
in Figure 5.1.
Figure 5.1: Principal design of the new process concept for pelletization and direct-
reduction of iron ore.
The concept consists of a travelling grate with a drying and firing section but no cooling
section. Instead, the pellets are fed directly into the direct-reduction process where they
are reduced by a flow of hydrogen gas. Iron is reduced when it is heated to a temperature
of 800 - 1200 °C in the presence of a reducing gas, for example hydrogen gas. Since the
pellets already have a high temperature when they enter the direct-reduction process, less
heat has to be supplied in order to heat up the reducing gas resulting in lower energy
consumption. Waste heat from the pelletization process can also be used in the direct-
reduction to improve energy efficiency even further.
At the same site, electrolysis of water is performed in order to produce pure oxygen
and hydrogen gas. Electrolysis is a process where electricity is used to split water
molecules into their components, oxygen and hydrogen. The oxygen gas can then be used
as plasma working gas while the hydrogen gas can be used for direct-reduction of the iron
ore pellets. This reduces the need to store leftover gas from the electrolysis, as it can be
pumped directly to the process. As described in the plasma case study, supplying extra
oxygen to the process gas will increase the magnetite conversion, resulting in increased
pellet quality.
Since there is no cooling section, it will not be possible to recirculate heat in the pelletizing
unit. All of the heat instead has to be supplied from plasma torches and microwaves.
Microwaves are used in the UDD zone to reduce problems related to over-wetting of pellets
in the higher layers of the bed. In the DDD zone, microwaves are used in combination
with a low-temperature flow of air, preferably waste heat from some other part of the
process. The microwaves heat the higher layers of the bed while the downward flowing
air absorbs heat from the higher layers and transports it downwards into the bed.
For a process with the same production rate as the reference case, the total amount of
heat that has to be supplied is around 240 MW. This is a considerable increase compared
to the original straight-grate process and would require several large, high power torches.
In this concept, the heat requirement is practically moved from the direct-reduction to
the pelletization process but the net energy requirement of the overall process should be
similar. However, heat losses and leakage of air in the recirculation will no longer be
a problem. A straight-grate without cooling zones also leads to a much lower grate area,
resulting in a lower initial investment cost for the plant. A final advantage is that the
transportation between the pelletizing plant and the steel production plant is eliminated,
resulting in lower transportation costs and emissions.
This concept makes it possible to produce totally CO₂-neutral steel if the direct-reduced
iron is finally refined in an electric arc furnace. There are, however, still many questions
that have to be answered. What is the pellet strength in its hot state? Will the pellets be
able to keep their shape when being loaded into the direct-reduction unit? What amounts
of hydrogen and oxygen gas are required? Many questions still have to be answered before
this concept can become a reality.
A Process parameter calculations
A.1 Power requirement of microwaves
The study by Athayde et al. [25] states that a constant power supply, Q_potgrate, of 10
kW was used for microwave generation. It is not clearly stated if the power requirement
was measured from the wall electricity socket or if it was the actual microwave output.
For these calculations it was assumed that the power requirement was measured from the
wall socket. The power intensity q_potgrate was calculated according to Equation A.1.
The pellet mass, m_pellets, and density, ρ_pellets, were 45 kg and 5260 kg/m³ respectively.
The bed depth, l_bed, was 0.37 m. The complementary data collected from the LKAB
input data was the voidage factor of the bed, Φ, which was 0.41.
q_potgrate = Q_potgrate / A_potgrate = Q_potgrate / [ m_pellets / ((1 − Φ) · ρ_pellets · l_bed) ] = 255.17 [kW/m²]   (A.1)
The power intensity q_potgrate together with the 21 m² surface area of the UDD zone from
the reference case was used to calculate the electric power requirement, Q_el, for microwave
generation for the real process according to Equation A.2. Q_Micro was then added to the
performance indicator Q_net, which represents the power requirement of the entire process.
Q_el = q_potgrate · A_UDD = 5.36 [MW]   (A.2)
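A minimal sketch reproducing Equations A.1 and A.2 with the input data stated above:

```python
# Inputs as stated in Section A.1.
Q_potgrate = 10.0    # pot-grate microwave power supply [kW]
m_pellets  = 45.0    # pellet mass in the pot grate [kg]
rho        = 5260.0  # pellet density [kg/m3]
l_bed      = 0.37    # bed depth [m]
phi        = 0.41    # bed voidage [-]
A_UDD      = 21.0    # UDD-zone grate area of the reference case [m2]

A_potgrate = m_pellets / ((1.0 - phi) * rho * l_bed)  # ~0.039 m2  (Eq. A.1)
q_potgrate = Q_potgrate / A_potgrate                  # ~255.17 kW/m2
Q_el = q_potgrate * A_UDD / 1000.0                    # ~5.36 MW   (Eq. A.2)
print(f"q = {q_potgrate:.2f} kW/m2, Q_el = {Q_el:.2f} MW")
```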
A.2 Evaporation rate
The evaporation rate, r_evap, was calculated using the output file O.Data_BGH2O. This
file stores the values of the mass flow of steam through the bed per square meter, ṁ_H₂O
[kg/m²/s], calculated by the BedSim file. The output file O.Data_BGH2O contains one
value for each vertical position in the bed in every horizontal position of the grate. The
values are accumulated in the flow direction. The amount of steam released between two
vertical positions in the bed is therefore calculated according to Equation A.3, where y
[m] represents the vertical position in the bed.
r_evap,i = (ṁ_H₂O,i − ṁ_H₂O,i−1) / (y_i − y_i−1)   [kg/(m³·s)]   (A.3)
The evaporation rate was calculated for each vertical position in the bed in each horizontal
position along the grate in the zones where moisture was still present. The maximum
evaporation rate in each zone was then acquired to determine if it was within the
constraint limit.
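A minimal sketch of this post-processing step; the array shapes are illustrative assumptions about how the O.Data_BGH2O values are arranged.

```python
import numpy as np

def evaporation_rate(m_dot_h2o, y):
    """Equation A.3 on a non-uniform vertical grid.

    m_dot_h2o: cumulative steam flux [kg/m2/s], shape (n_y, n_x);
    y: vertical positions [m], shape (n_y,). Returns r_evap [kg/m3/s].
    """
    dm = np.diff(m_dot_h2o, axis=0)   # steam released within each cell
    dy = np.diff(y)[:, None]          # cell heights
    return dm / dy

# The constraint check then reduces to r.max() within each zone.
```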
A.3 Gas moisture saturation
The moisture content of the gas leaving the UDD/Micro section was calculated according
to Equations A.4 and A.5. Equation A.4 sums the moisture which is evaporated
in the microwave zones and expresses it as an average across the number of UDD zones.
The average amount of water evaporated in the microwave zones was then added to the
moisture content of the gas leaving the UDD sections, according to Equation A.5, in order
to calculate the moisture content that the gas would have if the UDD and microwave
heating were in the same zone.
ṁ_H₂O,avg,micro = Σ [X_H₂O,out · ṁ_gas,out − X_H₂O,in · ṁ_gas,in]_micro / n_UDDzones   [kg/s]   (A.4)

m_H₂O,i = (ṁ_H₂O,avg,micro + [X_H₂O,i · ṁ_gas,out,i]_UDD) / (ṁ_gas,out,i · (1 − X_H₂O,i))   [kg_H₂O / kg_drygas]   (A.5)
The calculated moisture content was then compared to the maximum moisture carrying
capacity of air at the temperature of the gas as it leaves the UDD zone.
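A hedged sketch of Equations A.4 and A.5 for one horizontal position; the variable names mirror the equations and all inputs are placeholders.

```python
def mixed_moisture(x_out, m_out, x_in, m_in, n_udd, x_i, m_i):
    """Moisture content [kg H2O / kg dry gas] of the combined UDD/Micro gas.

    x_out/m_out and x_in/m_in: per-microwave-zone outlet/inlet moisture
    fractions and gas mass flows; n_udd: number of UDD zones; x_i, m_i:
    moisture fraction and gas flow leaving the UDD zone at position i.
    """
    # Equation A.4: average water evaporated per microwave zone [kg/s]
    m_avg = sum(xo * mo - xi * mi
                for xo, mo, xi, mi in zip(x_out, m_out, x_in, m_in)) / n_udd
    # Equation A.5: add it to the moisture already carried by the UDD gas
    return (m_avg + x_i * m_i) / (m_i * (1.0 - x_i))
```

Comparing the returned value against the saturation humidity at the exit gas temperature gives the oversaturation check discussed earlier in the microwave case study.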
A.4 Pressure drop
The pressure drop across the bed in each zone was calculated using the output file
O.Data_BPDROP which stores one value for every vertical position in the bed in ev-
ery horizontal position along the grate. The value stored in each position is the pressure
difference between the actual pressure in the position and atmospheric pressure. The
average pressure drop over the bed height in a zone was calculated according to Equation
A.6, where Δp is the pressure drop across the bed and ṁ_gas is the mass flow through the
bed.
Δp_bed = Σ_t [Δp(t) · ṁ_gas(t)] / Σ_t ṁ_gas(t)   [Pa]   (A.6)
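A minimal sketch of the flow-weighted time average in Equation A.6:

```python
import numpy as np

def mean_pressure_drop(dp, m_dot_gas):
    """Flow-weighted average bed pressure drop [Pa] within one zone.

    dp and m_dot_gas are time series sampled from O.Data_BPDROP and the
    corresponding gas mass flow.
    """
    dp, m_dot_gas = np.asarray(dp), np.asarray(m_dot_gas)
    return float(np.sum(dp * m_dot_gas) / np.sum(m_dot_gas))
```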
A.5 Mean magnetite mass fraction
The mean magnetite mass fraction in the bed at the outlet of the process, F_mag,mean, was
calculated using the output file O.Data_BMAG. This file stores the magnetite content of
the pellets for every vertical position in the bed in each horizontal position along the grate.
The last row of this file represents the magnetite content at the outlet of the process.
Since the vertical positions are not spaced equally, a weighted average has to be calculated.
First the magnetite mass fraction in each cell is calculated as the average of the two closest
points according to Equation A.7. The mean mass fraction of magnetite in the bed is
then calculated through Equation A.8, where Δy_cell,i is the height of cell i.
F_mag,cell,i = (F_mag,i + F_mag,i+1) / 2   [kg_mag/kg_p]   (A.7)

F_mag,mean = Σ_i [F_mag,cell,i · Δy_cell,i] / Σ_i Δy_cell,i   [kg_mag/kg_p]   (A.8)
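A minimal sketch of the weighted outlet average in Equations A.7 and A.8; the same scheme gives T_bed,mean in Equations A.9 and A.10 below.

```python
import numpy as np

def weighted_bed_mean(values, y):
    """Weighted mean of an outlet profile on a non-uniform vertical grid.

    values: nodal profile at the process outlet (e.g. the last row of
    O.Data_BMAG), shape (n_y,); y: vertical positions [m], shape (n_y,).
    """
    cell_vals = 0.5 * (values[:-1] + values[1:])       # Eq. A.7 / A.9
    dy = np.diff(y)                                    # cell heights
    return float(np.sum(cell_vals * dy) / np.sum(dy))  # Eq. A.8 / A.10
```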
A.6 Mean bed temperature
The mean temperature of the bed at the outlet of the process, T_bed,mean, was calculated
using the output file O.Data_BMTEMP. The last row of this file represents the temper-
ature of the pellets at the outlet of the process. In a similar way to the magnetite mass
fraction, a weighted average was calculated through Equations A.9 and A.10.
T_bed,cell,i = (T_bed,i + T_bed,i+1) / 2   [°C]   (A.9)

T_bed,mean = Σ_i [T_bed,cell,i · Δy_cell,i] / Σ_i Δy_cell,i   [°C]   (A.10)
Cone Crusher Modelling and Simulation
Development of a virtual rock crushing environment based on the discrete
element method with industrial scale experiments for validation
JOHANNES QUIST
Department of Product and Production Development
Chalmers University of Technology
SUMMARY
Compressive crushing has been proven to be the most energy efficient way of mechanically
reducing the size of rock particles. Cone crushers utilize this mechanism and are the most
widely used type of crusher for secondary and tertiary crushing stages in both the aggregate and
mining industry. The cone crusher concept was developed in the early 20th century and the basic
layout of the machine has not changed dramatically since then. Efforts aimed at developing the
cone crusher concept further involve building expensive prototypes; hence the changes made so
far are incremental by nature.
The objective of this master thesis is to develop a virtual environment for simulating cone
crusher performance. This will enable experiments and tests of virtual prototypes in early
product development phases. It will also enable simulation of existing crushers in operation,
providing further understanding for e.g. optimization purposes.
The platform is based on the Discrete Element Method (DEM) which is a numerical technique
for simulating behaviour of particle systems. The rock breakage mechanics are modelled using a
bonded particle model (BPM) where spheres with a bi-modal size distribution are bonded
together in a cluster shaped according to 3D scanned rock geometry. The strength of these
virtual rock particles, denoted as meta-particles, has been calibrated using laboratory single
particle breakage tests.
Industrial scale experiments have been conducted at the Kållered aggregate quarry owned by
Jehander Sand & Grus AB. A Svedala H6000 crusher operating as a secondary crusher stage
has been tested at five different close side settings (CSS). A new data acquisition system has
been used for sampling the pressure and power draw sensor signals at 500 Hz. The crusher liner
geometries were 3D-scanned in order to retrieve the correct worn liner profile for the DEM
simulations.
Two DEM simulations have been performed where a quarter-section of the crusher is fed by a
batch of feed particles. The first one, at CSS 34 mm, did not produce results of good enough
quality for comparison with experiments; hence a number of changes were made. The second simulation at
CSS 50 mm was more successful and the performance corresponds well comparing with
experiments in terms of pressure and throughput capacity.
A virtual crushing platform has been developed. The simulator has been calibrated and
validated by industrial scale experiments. Further work needs to be done in order to post-
process and calculate a product particle size distribution from the clusters still unbroken in the
discharge region. It is also recommended to further develop the modelling procedure in order to
reduce the simulation setup time.
Keywords: Cone Crusher, Discrete Element Method, Bonded Particle Model, Simulation, Numerical
Modelling, Comminution, Rock breakage
ACKNOWLEDGEMENT
I would like to acknowledge and thank a number of people that have made this work possible
and worthwhile. First of all I appreciate all support and mentorship from my supervisor Magnus
Evertsson and co-supervisor Erik Hulthén. Much appreciation goes to Gauti Asbjörnsson and
Erik Åberg for helping out during the crusher experiments! Also, thank you Elisabeth Lee,
Robert Johansson and Josefin Berntsson for help, support and fruitful discussions at the office!
Thanks to Roctim AB for initiating and supporting the project and for providing 3D scanning
equipment! Special thanks to Erik and Kristoffer for helping with data collection on site!
I’m very grateful to Jehander Sand & Grus AB and all the operators for letting me conduct
experiments at the Kållered Quarry and use the site laboratory. Special thanks to Michael
Eriksson, Peter Martinsson and Niklas Osvaldsson!
Finally I would like to thank the engineering team at DEM-solutions Ltd for all the support,
help and discussions. Special thanks to Senthil Arumugam, Mark Cook and Stephen Cole!
Johannes Quist
Göteborg, 2012
NOMENCLATURE
BPM Bonded particle model
DEM Discrete element method
PBM Population balance model
PSD Particle size distribution
HPGR High pressure grinding roll
CSS Close side setting
OSS Open side setting
DAQ Data acquisition
CAD Computer aided design
CAE Computer aided engineering
CPUH CPU Processing Hours
HMCM Hertz Mindlin Contact Model
MDP Mechanical Design Parameter
OCP Operational Control Parameter
SOCP Semi-Operational Control Parameter
OOP Operational Output Parameter
SPB Single Particle Breakage
IPB Interparticle breakage
CRPS Chalmers Rock Processing Systems
Critical bond shear force ⃗ Normal force
Critical bond normal force ⃗ Tangential force
Shear bond stiffness ⃗ Normal damping force
Normal bond stiffness ⃗ Tangential damping force
J Bond beam moment of inertia Equivalent shear modulus
Radius sphere A Equivalent young’s modulus
Radius sphere B Equivalent radius
⃗ Contact resultant force Equivalent mass
̅ Normal unit vector Normal overlap
𝑡̅ Tangential unit vector Tangential overlap
Bond beam length Poisson ratio
Bond disc radius Coefficient of restitution
⃗ Bond resultant force Coefficient of static friction
Bond normal torque Normal stiffness
Bond shear torque Tangential stiffness
s Eccentric throw Damping coefficient
Eccentric speed ̇ DEM throughput capacity
Mantle angular slip speed ̇ DEM mass flow
Chamber cross-sectional area Sectioning factor
Mantle slope angle PSD scalping factor
Estimated particle tensile strength BPM bulk density
Critical force for fracture
Crushing force
Hydrostatic bearing area
1. INTRODUCTION
In this section the background and scope of the project is presented. The ambition is to give the
reader an understanding of why the project was initiated, how it has been set up and what the
boundaries are in terms of time, resources, limitations and deliverables.
1.1 Background
Rock crushers are used for breaking rock particles into smaller fragments. Rock materials of
different sizes, normally called aggregates, are used as building materials in a vast number of
products and applications in modern society. Infrastructure and building industries are heavily
dependent on rock material with specified characteristics as the basis for foundations, concrete
structures, roads and so on. Hence this gives a strong incentive to facilitate production of
aggregates at low cost, high quality and low environmental footprint. In the mining industry the
same argument applies, however here, the objective is to decimate the ore to the size at which
the minerals can be extracted. Crushers are usually a part of a larger system of size reduction
machines and so performance has to be considered not only on a component level but more
importantly on a systematic level. This means that the optimum size reduction process is not the
same for the mining and the aggregate industry.
Cone crushers are the most commonly used crusher type for secondary and tertiary crushing
stages in both the aggregate and the mining industry. Due to the vast number of active
operating crushers in the world, a very strong global common incentive is to maximize
performance and minimize energy consumption and liner wear rate. These goals are aimed both
towards creating more cost efficient production facilities, but also in order to satisfy the
aspiration for a more sustainable production in general. Historically the same type of crushers
have been used both for the aggregate and the mining industry, however this is about to change
and the crusher manufacturers now customize the design towards specific applications.
In order to be able to design and create more application specific cone crushers, optimized
towards specific conditions and constraints, better evaluation tools are needed in the design
process. Normally in modern product development efforts, a large number of concepts are
designed and evaluated over several iterations in order to find a suitable solution. These
concepts can either be evaluated using real prototypes of different kinds or virtual prototypes.
Physical prototypes of full scale crusher concepts are very expensive and the test procedures
cumbersome. This provides a strong incentive for using virtual prototypes during the evaluation
and design process. If a crusher manufacturer had the possibility of evaluating design changes or
new concepts before building physical prototypes, lead times and time to market could
potentially be dramatically shortened and the inherent risk coupled to development projects
would decrease.
The methods available for predicting rock crusher performance today are scarce and the
engineering methodology used for prediction has historically been empirical or mechanical
analytical modelling. These models have been possible to validate through experiments and
tests. However, much remains to be explored regarding how the rock particles travel through
the crushing chamber and which machine parameters influence the events the particles are
subjected to.
Compressive crushing is a very energy efficient way of crushing rock compared to many other
comminution devices. The dry processing sections in future mines will probably use crushers to
reduce material to even finer size distributions than today in order to feed High Pressure
Grinding Roll (HPGR) circuits in an optimum way. Replacing inefficient energy intensive
tumbling milling operations with more effective operation based on crushers and HPGR circuits
will potentially result in an extensive decrease in energy usage in comminution circuits. In order
to enable this new type of process layout, cone crushers need further development in order to
crush at higher reduction ratios. It is in these development efforts that DEM simulations will
play a crucial role.
1.2 Objectives
The aim of this work is to develop a virtual environment and framework for modelling and
simulating cone crusher performance. The general idea is to perform experiments on an
industrial operating cone crusher and carefully measure material, machine and operating
conditions. The experimental conditions will act as input to the simulation model and finally the
output from experiments and simulations will be compared in order to draw conclusions
regarding the quality and performance of the simulation model. The crusher studied is located
in a quarry in Kållered owned by Sand & Grus AB Jehander (Heidelberg Cement). The crusher
is a Svedala H6000 crusher with a CXD chamber.
1.3 Research Questions
In order to give focus to the work a number of key research questions have been stated in the
initial phase of the project. The ambition is to provide answers with supporting evidence on the
following:
RQ1. How should a modelling and simulation environment be structured in order to
effectively evaluate performance and design of cone crushers?
RQ2. To what extent is it possible to use DEM for predicting crusher performance?
RQ3. How should a DEM simulation properly be calibrated in order to comply with
real behaviour?
RQ4. What level of accuracy can be reached by using DEM for simulation of existing
cone crushers?
RQ5. How does a change in close side setting influence the internal particle dynamics
and crushing operation in the crushing chamber?
1.4 Deliverables
Apart from the learning process and experience gained by the author during the project, the
project will result in a set of deliverables:
- A simulation environment for modelling cone crusher performance
- A BPM model incorporating heterogeneous rock behaviour, particle shape, size distribution
  and variable strength criteria
- A method for calibrating BPM models
- New insight into how the rock particle dynamics inside the crushing chamber are influenced
  by a change of the CSS parameter
- New insight on how to validate DEM models by using laboratory and full scale experiments
- A high frequency DAQ system
- Master thesis report
- Final presentation
1.5 Delimitations
The scope of this project is relatively vast and hence it is important to consider not only what to
do, but also what the boundaries are. The following points aim towards limiting the scope of the
project:
- One type, model and size of cone crusher will be studied
- No iterations of the full scale DEM simulations will be performed in this thesis
- The main focus of the Methods chapter will be on the methods developed in the project and
  how they have been applied. Standardized test methods utilized during sample processing,
  statistical methods and basic DEM theory will not be extensively reviewed.
- No analytical cone crusher flow model will be used in the work
- The numerical modelling will be done using the commercial software EDEM developed by
  DEM Solutions Ltd.
- The work will only briefly cover aspects regarding rock mechanics and rock breakage theory
1.6 Thesis Structure
The work in this thesis is based on a two parallel tracks of activities in the experimental and
numerical domain, see Figure 1. When using numerical modelling tools for investigating
machine performance or design it is necessary to also conduct experiments. The experiments
not only act as the basis for validation of simulations but also give the researcher fundamental
insight into the operation of the system being studied.
In order to be able to discuss and draw conclusions a brief theory section is included in the
thesis. It aims towards describing the fundamentals regarding the cone crusher, the discrete
element method and some theory regarding rock breakage. In the methods chapter, the focus is
put on presenting the methods developed in the project rather than describing and listing
standardized procedures utilized e.g. how to conduct sieving analysis. A separate chapter is
dedicated to the Material Model Development. Both the physical breakage experiments as well
as the DEM simulations performed in order to calibrate the material model are presented. This
chapter is followed by the Crusher Experiments and Crusher Simulation sections. In these
chapters the results from each separate activity are shown and briefly commented. The Result &
Discussion chapter is dedicated to a comparative study of the simulation and experimental
result. Finally Conclusions are drawn regarding the results and each research question stated
above is addressed.
Figure 1 - Project approach with two parallel tracks of activities in the experimental and numerical domains
1.7 Problem analysis
In order to fulfil the stated goals a number of problematic obstacles need to be bridged. One of
the difficult issues to decide is how and what to measure in the cone crusher system. The
following experimental aspects need special consideration;
- How to sample and measure the coarse feed size distribution, as there are no standardized
  laboratory methods or mechanical sieves that handle sizes above ~90 mm?
- How to measure pressure, power and CSS signals in a satisfactory manner, as the crusher
  control equipment has too low a sampling rate and is hence negatively affected by the
  sampling theorem?
In order to be able to simulate the rock breakage in the crusher a breakage model needs to be
developed. The following breakage modelling aspects need special consideration;
- What level of complexity is possible to incorporate in the model in terms of particle shape,
  rock strength and heterogeneity?
- How should the breakage model be calibrated?
- What is the most suitable way of generating a particle population with breakable particles
  of different sizes and shapes?
A number of issues and obstacles need to be addressed when it comes to the crushing
simulation especially concerning computational capacity. The following aspects regarding the
crushing simulations need special consideration;
- Should the whole crusher be simulated or only a segment?
- How many bonded particles is it possible to incorporate?
- How should the rock particles be introduced in the DEM model?
- What crusher geometry should be used in the simulation? In the experiments the liners are
  severely worn; hence nominal CAD geometry will not be a correct equivalent to the
  experimental operating conditions.
2. THEORY
In this section the theoretical background will be presented for a number of areas of interest in the
thesis giving the reader an introduction and framework in order to follow the discussion and
analysis in upcoming chapters.
2.1 The Cone crusher
The current engineering process for developing crushing machines is based on minor
incremental changes to a basic fundamental mechanical concept. There are two main types of
cone crusher concepts available on the market, the so called Hydrocone and Symons type
crushers. The main differences lie in the choice of main shaft design and in how the loads and
dynamics are handled using different bearing designs, see Figure 2. These design choices are
coupled to various advantages as well as limitations for both concepts.
The main shaft in the Hydrocone concept is supported in the top and bottom by plain bearings
and a hydraulic piston. The attractiveness of this solution is that the shaft vertical position can
be adjusted hydraulically. This enables online adjustment of the CSS for e.g. utilization in
control algorithms or compensating liner wear. Also, it is relatively easy to take care of the
tramp protection, i.e. foreign hard metal objects unintentionally placed in the crusher, by having
a hydraulic safety valve that quickly drops the main shaft before the crusher is seriously
damaged.
In the Symons concept the mantle position is fixed on top of a shorter main shaft with the plain
bearing on top. The CSS is varied by moving the top shell up and down instead of the mantle.
The top shell can only be turned when not loading the crusher. Hence, it is not possible to
adjust the CSS during operation. An advantage with the fixed-shaft design is that the pivot
point can be positioned at a vertical position above the crusher enabling a more parallel mantle
movement. The pivot point is governed by the radius of the plain thrust bearing.
The illustration in Figure 3 shows a horizontal cross-section of the mantle and concave
explaining the eccentric position of the mantle and how it is related to the gap settings (CSS),
throw and eccentric movement.
The engineering knowledge foundation is mainly built on empirical tests, field studies and
analytical models developed by e.g. Whiten [1], Eloranta [2] and Evertsson [3]. Crusher
manufacturers commonly use different types of regression models based on test data to predict
performance output. These models are unique for each type of crusher and a number of
correction factors are normally used to adjust for application specific aspects such as rock type
and strength. As these models are only partly based on mechanistic principles they are more
suited for designing circuits rather than designing new crushers.
A simplified expression of the hydrostatic pressure and how it relates to the crushing force and
angle of action can be seen in Eq. 1. The crushing force is a representation of the accumulated
forces from each interaction between rocks and the mantle under compression. If the crusher
chamber is evenly fed with material with homogenous properties the pressure should be
relatively constant. However, if a deviation occurs at some position or over an angular section
where e.g. there is less material or material of other size and shape, the force response changes
on the mantle. This force response variation would be observable in the momentary oil pressure
signal. In other words the shape of the pressure signal gives information regarding the current
force response upon the mantle.
Eq. 1
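The body of Eq. 1 did not survive the text extraction above. One plausible simplified form, assuming the crushing force acts on the hydrostatic piston area and noting the angle of action mentioned in the text, would be

p_hydrostatic = F_crushing · cos(α) / A_hydrostatic   (assumed form of Eq. 1)

so that the momentary oil pressure mirrors the accumulated force response on the mantle; this is a reconstruction consistent with the surrounding description, not necessarily the author's exact expression.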
2.1.1 Influence of parameters
Cone crusher related parameters can be defined in four groups;
Mechanical Design Parameters (MDP) – Static parameters established in the design
and commissioning process not possible to influence actively in operation without
substantial re-engineering.
Operational Control Parameters (OCP) – Parameters that are possible to change
during operation in order to control and influence performance.
Semi-Operational Control Parameters (SOCP) – Parameters that are possible to
change, however only during shutdown or maintenance stops due to the need for e.g.
change of mechanical parts.
Operational Output Parameters (OOP) – Resulting parameters of the crushing
operation like e.g. power draw and pressure.
Close Side Setting – When decreasing the CSS the product size distribution evidently gets finer
as the gap, limiting the size of rocks leaving the chamber, is reduced. In Hydrocone type
crushers the CSS can be adjusted during crushing operation as the main shaft vertical position is
controlled by hydraulics. It is hence an OCP parameter and can be actively used as a control
parameter. In Symons type crushers, however, the top shell needs to be adjusted in order to
change the CSS. This can to date only be done when the crusher is not under load and should
therefore be categorized as a SOCP parameter for these crusher types.
Eccentric speed – When increasing the eccentric speed of the crusher mantle the material will be
subjected to an increased number of compressive events. As a consequence each compression
event will be performed at a lower compression ratio as the ith event will occur at a higher
position in the crushing zone. It has been experimentally shown that a lower compression ratio
results in a better shape [4]. Also, due to the increased number of compression events, the
particle size distribution will be finer [5]. However, when increasing the number of events the
particles will move slower down through the crushing zone. Conclusively, higher speed results
in a relative increase in shape quality and a finer product but with the sacrifice of reduced
throughput. Historically, the eccentric speed could normally not be changed during operation
without changing the belt drive and was therefore a MDP/SOCP parameter. However, by installing
frequency drives the eccentric speed can be adjusted during operation and hence converted to
an OCP parameter. This has been done successfully by Hulthén [6] in order to actively control
the speed as an enabler for performance optimization.
Eccentric throw – The eccentric throw controls the amplitude of the sinusoidal rotations around
the pivot point's X- and Y-axes. The geometrical motion is achieved by using an eccentric
bushing, see Figure 2. The throw can be adjusted within a specific range during shutdown by
turning the bushing and is defined as a SOCP parameter.
Liner design – All commercially available crusher models come with the choice of a set of liner
designs ranging from fine to coarse profiles. Choice of profile is governed mainly by the feeding
size distribution and desired product size distribution. The liner surfaces wear and are replaced
after a couple of hundred operation hours depending on the abrasiveness of the rock type.
Figure 4 - Schematic illustration of the cross-sectional area, see Figure 3, at every z-coordinate displaying the choke
level position. Changes to the liner profile will evidently result in a changed shape of the cross-sectional area plot.
Further on this means a new choke level position and a new operating condition.
Choke level – The choke level is an indirect variable not possible to measure during operation.
It is the level or vertical position in the crushing chamber which limits the particle flow through
the crushing chamber. If considering the cross-sectional area of the gap between the mantle
and concave at every z-coordinate (see Figure 3), as illustrated by Figure 4, a narrow section
exists. Below this narrow level the gap decreases, however as the radius increases the cross-
sectional area actually increases. This means there will be more space for particles to be
crushed. In effect observations show that the choke level is a transition zone where the
breakage mode shifts from interparticle breakage to single particle breakage [3]. The choke level
is, besides the geometrical features of the liner design, also a function of the eccentric speed,
CSS and eccentric throw.
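Since the choke level is defined by the minimum of the cross-sectional area curve, it can be estimated numerically once the liner profiles are known. The sketch below illustrates the idea with made-up analytic liner profiles; in a real workflow the radii would be sampled from CAD geometry or from the 3D-scanned liner surfaces described later, and every parameter value here is hypothetical.

```python
import numpy as np

def choke_level(z, r_mantle, r_concave):
    """Return the z-position and value of the narrowest cross-section.

    The annular area between concave and mantle is evaluated at every
    z-coordinate; the minimum is the flow-limiting (choke) section.
    """
    area = np.pi * (r_concave**2 - r_mantle**2)
    i = np.argmin(area)
    return z[i], area[i]

# Hypothetical liner profiles (z measured downwards from the feed opening):
# the gap narrows towards the discharge while the radius keeps growing,
# which is what produces an interior minimum in the area curve.
z = np.linspace(0.0, 1.0, 200)
r_mantle = 0.25 + 0.70 * z
gap = 0.28 * np.exp(-6.0 * z) + 0.03
r_concave = r_mantle + gap

z_choke, a_choke = choke_level(z, r_mantle, r_concave)
print(f"choke level at z = {z_choke:.2f} m, area = {a_choke:.4f} m^2")
```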
Power draw – Based on how the crusher is run and how much material is introduced into the
crushing chamber, a specific amount of energy will be used to break rocks every second. The
electric motor will always try to maintain the set speed and will pull more or less current based
on the load on the mantle and main shaft. If adding up the torque components from all particle-
mantle interactions, obtained from the crushing force needed to break each rock, this would be
the resistance the motor needs to overcome (plus mechanical losses). The power draw is an
OOP parameter and is used for monitoring how much work the crusher is doing, often in
relation to its optimum performance capability.
Hydraulic pressure – Most modern crushers are equipped with hydrostatic bearings where the
pressure can be monitored using pressure gauges. The pressure level gives an indication of the
crushing force on the mantle according to the relationship in Eq. 1. The condition of the
pressure signal also holds information regarding the crushing operation. High amplitude
suggests that the mantle is performing different amounts of crushing work at each
circumferential position. Reasons for this could be misaligned feeding of the crushing chamber
or segregated feed. If the crusher chamber lacks material, i.e. is not choke fed, the pressure will
drop when the mantle reaches that position. In the case of segregation the feed size distribution
will be different at all circumference positions inevitably giving different bed confinement
characteristics and hence a different force response.
2.1.2 Feeding conditions
The presentation of rock material to the crusher, i.e. feeding of the crusher, is one of the most
crucial operational factors. Normally vibrating feeders or belt conveyors are used for feeding
material to the crusher feeding box. In many cases this arrangement is not sufficient in order to
achieve satisfactory feeding conditions.
Figure 5 - DEM simulation of the feeding of a cone crusher. The picture clearly shows the segregation behaviour as
well as the proportionally higher amount of material in the right section of the crusher chute. (Unpublished work by
Quist)
As implied in the previous section, many crushers are to some degree badly fed and experience
two different issues: misaligned feeding and segregation. Misaligned feeding means that the
material is not evenly distributed around the circumference hence there will be different
amounts of rock material at all φ-positions. When operating under full choke fed condition the
misaligned feeding is less of a problem. Segregation means that the particle size distribution will
be different at φ-positions around the circumference. The reasons for these issues are coupled to
how material is presented and distributed in the crusher rock box. When using a belt conveyor
as a feeder the material can segregate very quickly on the belt. This segregation propagates into
the crusher and is amplified when the rock stream hits the spider cap, see Figure 5. The spider
cap acts as a splitting device causing coarse particles to continue to the back of the crusher and
fine particles to bounce to the front. As the material has a horizontal velocity component in
order to enter the crusher a large fraction of mass will end up in the back and a lower fraction of
mass in the front. This effect is less when using vibrating feeders instead of conveyors as the
horizontal velocity component is lower. For a more thorough investigation and description of
this issue and ways to resolve it, the reader is advised to see Quist [7].
The operational effects of these issues are that the crusher effectively will perform as a different
crushing machine at all φ-positions. As already stated this means that the hydraulic pressure will
vary as the mantle makes one revolution. The result can be fatigue problems leading to main
shaft failure, cracks in the supporting structure, uneven liner wear, and poor performance and
control, as well as many other problems caused by the machine running in an unbalanced state.
2.1.3 Rock mass variation
For most aggregate quarries as well as mining sites the mineralogical content of the rock mass
varies throughout the available area. This results in variation of the rock characteristics
momentarily as well as on a long-term basis, meaning that the best operating parameters today
may not be optimal next month, next week or maybe even next hour [6]. When varying the rock
competency the size distributions produced up-stream will slightly change giving new feeding
material characteristics.
2.2 The Discrete Element Method
DEM is a numerical method for simulating discrete matter in a series of events called time-
steps. By generating particles and controlling the interaction between them using contact
models, the forces acting on all particles can be calculated. Newton’s second law of motion is
then applied and the position of all particles can be calculated for the next time-step. When this
is repeated it gives the capability of simulating how particles are flowing in particle-machine
systems, see Figure 6. It is also possible to apply external force fields in order to simulate the
influence of e.g. air drag or electrostatics. By importing CAD geometry and setting dynamic
properties the environment which the rock particles are subjected to can be emulated in a very
precise manner. This gives full control over most of the parameters and factors that are active
and interesting during a crushing sequence. Also, due to the fact that all particle positions,
velocities and forces are stored in every time-step, it is possible to observe particle trajectories
and flow characteristics.
Figure 6 - Illustration of the DEM calculation loop used in EDEM
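As a minimal illustration of the calculation loop in Figure 6, the sketch below steps a one-dimensional column of spheres through the contact-force, Newton's-second-law and integration stages. It deliberately uses a damped linear-spring contact model instead of the Hertz-Mindlin model used in the actual simulations, and all numerical values are made up.

```python
import numpy as np

n, dt, steps = 5, 1e-5, 20000
radius, mass, k, g = 0.01, 0.03, 1e5, 9.81
c = 0.3 * 2.0 * np.sqrt(k * mass)        # contact damping coefficient
x = np.linspace(0.05, 0.15, n)           # particle heights [m]
v = np.zeros(n)                          # particle velocities [m/s]

for _ in range(steps):
    f = -mass * g * np.ones(n)           # body force: gravity
    # particle-wall contact against the floor at x = 0
    for i in range(n):
        overlap = radius - x[i]
        if overlap > 0.0:
            f[i] += k * overlap - c * v[i]
    # particle-particle contacts between neighbours in the column
    for i in range(n - 1):
        delta = 2.0 * radius - (x[i + 1] - x[i])
        if delta > 0.0:                  # spheres overlap
            fn = k * delta + c * (v[i] - v[i + 1])
            f[i] -= fn                   # reaction pushes the pair apart
            f[i + 1] += fn
    # Newton's second law, then explicit integration to the next time-step
    v += f / mass * dt
    x += v * dt

print("settled particle heights [m]:", np.round(x, 4))
```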
2.2.1 Approaches for modelling breakage
As the main purpose of this work is to break rocks in a simulation environment the choice of
breakage model is important. Two different strategies dominate when it comes to modelling
rock breakage in DEM – The population balance model (PBM) and the bonded particle model
(BPM). The population balance model is based on the principle that when a particle is
subjected to a load exceeding a specific strength criterion it will be replaced by a set of progeny
particles of predetermined size. The strength criteria values and progeny size distribution are
gathered from calibration experiments. This method is suitable for simulating comminution
systems where impact breakage is the dominant breakage mode. The method has however been
used for modelling cone crushers as well [8]. The BPM method is based on the principle of
bonding particles together forming an agglomerated cluster. Despite the fact that the PBM
approach is more computationally effective and easier to calibrate, the BPM approach is chosen
for this work. The first reason is that the performance of a cone crusher is highly dependent on
the particle flow dynamics within the crushing chamber. When using the PBM approach the
particle dynamics are decoupled as the progeny particles are introduced at the same position as
the broken mother particle. Hence the model cannot take into consideration particle movement
as a result of a crushing sequence. This is not a problem in the BPM approach as the meta-
particles are actually broken apart into smaller clusters. This leads up to the second reason
which is that the PBM model is not based on simulating a crushing sequence but is basically
only making use of the possibility to calculate forces on particles in DEM. The breakage itself is
governed by an external breakage function. In conclusion the PBM approach basically uses the
DEM model as an advanced selection function.
2.2.2 Previous work on DEM and crushing
A few publications exist on the topic of using DEM for rock crushers and cone crushers in
particular. In the case of impact crushers Djordjevic and Shi [9] as well as Schubert [10] have
simulated a horizontal impact crusher using the BPM approach. However in both cases
relatively few particles have been used and the geometries are very simplified. A DEM model
for simulating rock breakage in cone crushers has been presented by Lichter and Lim [8].
However, this model was based on a population balance model (PBM) coupled with a breakage
function. This means that when a particle is subjected to a load greater than a threshold value it
will be considered broken and the model replaces the mother particle with a set of progeny
particles, sized according to the breakage function. This approach is very powerful in respect of
computational efficiency but the actual breakage events are controlled by statistical functions,
hence it is possible to tune the simulation towards performing according to experiments without
knowing if the particle flow through the chamber is correct. Another aspect is the relationship
between loading condition on a particle and particle breakage. Depending on a 1:1, 2:1, 2:2 or
3:1 point loading between two plates the rock will break differently. Generally, a rock particle
subjected to a load will either be; undamaged, weakened, abraded, chipped, split or broken. In
the PBM approach only the last effect is considered. Therefore the current work is based on
the more computationally cumbersome Bonded Particle Model (BPM). This method has been
previously utilized by the author for modelling a cone crusher [7, 11] as well as a primary
gyratory crusher [12].
2.2.3 Hertz-Mindlin Contact model
The Hertz-Mindlin contact model, see Figure 7, is used for accurately calculating forces for
particle-particle and particle-geometry interactions in the simulation [13]. The normal force component
is derived from Hertzian contact theory [14] and the tangential component from work done by
Mindlin [15].
$$\vec{F}_n = \frac{4}{3}\,E^{*}\sqrt{R^{*}}\,\delta_n^{3/2}\,\hat{n} \qquad \text{Eq. 9}$$

$$\vec{F}_n^{\,d} = -2\,\sqrt{\frac{5}{6}}\,\beta\,\sqrt{S_n\,m^{*}}\;\vec{v}_n^{\,rel} \qquad \text{Eq. 10}$$

$$S_n = 2\,E^{*}\sqrt{R^{*}\,\delta_n} \qquad \text{Eq. 11}$$
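To make Eq. 9-11 concrete, the sketch below evaluates the damped Hertzian normal force for a single sphere-sphere contact. It assumes the usual equivalent-property definitions of E*, R* and m* and a damping ratio derived from the coefficient of restitution; the material values in the example call are illustrative, not calibrated.

```python
import numpy as np

def hertz_normal_force(delta_n, v_n, R1, R2, E1, E2, nu1, nu2, m1, m2, e=0.5):
    """Normal contact force for two overlapping spheres (Eq. 9-11 sketch).

    delta_n : normal overlap [m]; v_n : normal relative speed [m/s]
    R, E, nu, m : radii, Young's moduli, Poisson ratios and masses
    e : coefficient of restitution (assumed value)
    """
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # equivalent modulus
    R_star = 1.0 / (1.0 / R1 + 1.0 / R2)                    # equivalent radius
    m_star = m1 * m2 / (m1 + m2)                            # equivalent mass
    F_el = (4.0 / 3.0) * E_star * np.sqrt(R_star) * delta_n**1.5       # Eq. 9
    S_n = 2.0 * E_star * np.sqrt(R_star * delta_n)                     # Eq. 11
    beta = np.log(e) / np.sqrt(np.log(e)**2 + np.pi**2)     # damping ratio
    F_damp = -2.0 * np.sqrt(5.0 / 6.0) * beta * np.sqrt(S_n * m_star) * v_n  # Eq. 10
    return F_el + F_damp

# 1 mm overlap between two 10 mm granite-like spheres approaching at 1 m/s
print(hertz_normal_force(1e-3, 1.0, 5e-3, 5e-3, 50e9, 50e9, 0.25, 0.25,
                         1.4e-3, 1.4e-3))
```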
2.2.4 Contact model calibration
When using DEM for modelling breakage most of the focus is put on making sure that the
contact model governing the fragmentation corresponds to a realistic behaviour. However it is
very important to make sure that the contact model controlling flow behaviour is calibrated as
well. If the friction parameters are not correct the particles will flow in an incorrect manner.
When compressed the particles may e.g. slip and escape compression when in reality they would be
nipped and broken.
No generally accepted method exists for calibrating contact models towards good flow
behaviour. Hence a calibration device has been designed and built by CRPS [16]. A CAD
model of the device can be seen in Figure 8. The device consists of an aluminium mainframe
that holds a bottom section with a removable sheet metal floor and fixed sides. The top section
holds a hopper with variable aperture and angle as well as a sliding plane with variable angle.
The height of the top section can be adjusted. The different adjustment possibilities enable tests
with different conditions. It is very important when calibrating a DEM contact model that it is
independent of flow condition.
In Figure 9 an example of a calibration procedure can be observed. In the left picture the
particle flow has been captured using a high speed camera. By iteratively varying parameters,
simulating and comparing with the reference, a decent set of values for the friction parameters
can be found.
2.2.5 The Bonded Particle Model
The BPM model was published by Potyondy and Cundall [17] for the purpose of simulating
rock breakage. The approach has been applied and further developed by Cho [18]. The concept
is based on bonding or gluing a packed distribution of spheres together forming a breakable
body. The particles bonded together will here be called fraction particles and the cluster created
is defined as a meta-particle. The fraction particles can either be of mono size or have a size
distribution. By using a relatively wide size-distribution, and preferably a bi-modal
distribution the packing density within the meta-particle increases. It is important to achieve as
high packing density as possible due to the problematic issue with mass conservation as the
clustered rock body will not be able to achieve full solid density. Also, when the bonded particle
cluster breaks into smaller fragments the bulk density will change somewhat as a new
particle size distribution is generated.
Figure 10 - Schematic representation of; (a) two particles overlapping when interacting giving a resultant force
according to the contact model seen in Figure 7. (b) two particles bonded together with a cylindrical beam leading to a
resultant force as well as normal and shear torques(modified from [17, 19]).
Figure 11 - Schematic force-displacement plot of the different modes of loading on a bond beam. The stiffness's and
critical stress levels are also shown. (Modified from [18])
The forces and torques acting on the theoretical beam can be seen in Figure 10. The schematic
graph in Figure 11 illustrates the relationship between different loading modes (tension, shear,
and compression), bond stiffness and strength criteria. Before bond-formation and after bond
breakage the particles interact according to the Hertz-Mindlin no slip contact model. The bonds
are formed between particles in contact at a pre-set bonding time. When the particles are bonded
the forces and torques are adjusted incrementally according to the following equations:
$$\delta \vec{F}_n = -v_n\,S_n\,A\,\delta t \qquad \text{Eq. 12}$$

$$\delta \vec{F}_t = -v_t\,S_t\,A\,\delta t \qquad \text{Eq. 13}$$

$$\delta M_n = -\omega_n\,S_t\,J\,\delta t \qquad \text{Eq. 14}$$

$$\delta M_t = -\omega_t\,S_n\,\frac{J}{2}\,\delta t \qquad \text{Eq. 15}$$

Where, for a bond beam of radius $R_B$, the cross-sectional area and polar moment of inertia are

$$A = \pi R_B^2, \qquad J = \frac{1}{2}\pi R_B^4 \qquad \text{Eq. 16}$$
The normal and shear stresses are computed and checked against the pre-set critical stress
values according to the equations below:
$$\bar{\sigma}_c < \frac{-\vec{F}_n}{A} + \frac{2\,M_t}{J}\,R_B \qquad \text{Eq. 17}$$

$$\bar{\tau}_c < \frac{-\vec{F}_t}{A} + \frac{M_n}{J}\,R_B \qquad \text{Eq. 18}$$
In this work the critical strength levels are set to a single value defining the rock strength. For
future work it would be preferable to be able to randomize the critical bonding strength within
a specified range or according to a suitable probability function.
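A compact sketch of how Eq. 12-18 act on a single bond is given below. It reduces the problem to scalar normal and tangential components and uses illustrative stiffness and strength values; the actual EDEM implementation applies the same update to full vectors for every bonded pair.

```python
import numpy as np

class Bond:
    def __init__(self, R_b, S_n, S_t, sigma_c, tau_c):
        self.A = np.pi * R_b**2          # beam cross-section area (Eq. 16)
        self.J = 0.5 * np.pi * R_b**4    # polar moment of inertia (Eq. 16)
        self.R_b, self.S_n, self.S_t = R_b, S_n, S_t
        self.sigma_c, self.tau_c = sigma_c, tau_c
        self.F_n = self.F_t = self.M_n = self.M_t = 0.0
        self.intact = True

    def step(self, v_n, v_t, w_n, w_t, dt):
        """Advance the bond one time-step with the given relative velocities."""
        if not self.intact:
            return
        self.F_n -= v_n * self.S_n * self.A * dt         # Eq. 12
        self.F_t -= v_t * self.S_t * self.A * dt         # Eq. 13
        self.M_n -= w_n * self.S_t * self.J * dt         # Eq. 14
        self.M_t -= w_t * self.S_n * self.J / 2.0 * dt   # Eq. 15
        sigma = -self.F_n / self.A + 2 * abs(self.M_t) * self.R_b / self.J  # Eq. 17
        tau = -self.F_t / self.A + abs(self.M_n) * self.R_b / self.J        # Eq. 18
        if sigma > self.sigma_c or tau > self.tau_c:
            self.intact = False          # bond breaks; Hertz-Mindlin takes over

# Illustrative stiffness [N/m^3] and strength [Pa] values, not calibrated ones
bond = Bond(R_b=1e-3, S_n=5e9, S_t=2e9, sigma_c=30e6, tau_c=30e6)
for _ in range(2000):                    # pull the bonded pair apart in tension
    bond.step(v_n=0.01, v_t=0.0, w_n=0.0, w_t=0.0, dt=1e-6)
print("bond intact after tensile pull:", bond.intact)
```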
2.2.6 Future DEM capabilities
DEM is a very computationally intensive method. The continuous development of CPU speed
enables larger particle systems to be modelled, and by using several CPU processors in parallel
the computational capacity is improving. However, over the past 5-10 years Graphics Processing
Units (GPU) have been adopted and used for computational tasks due to their high potential for
parallelization. While a normal DEM simulation commonly utilizes 2-16 CPU cores, a high-end
GPU consists of 400-500 cores. If utilized effectively, this has the potential to dramatically
increase the computational capacity by 10-100 times. But it is not as easy as just recompiling the
source code and running it on GPUs. The algorithm needs to be adapted to run in
parallel on all the GPU cores, a task which has proven difficult, but not impossible. The
developer community is vibrant and the library of available functions is steadily increasing. A few
DEM vendors have beta versions of GPU-based DEM codes and these will probably be
available on the market in a few years.
2.3 Compressive breakage
It has been found by Schönert [20] that the most energy efficient way of reducing the size of
rock is to use slow compressive crushing. Each particle can be loaded with the specific amount
of energy needed to generate fracture resulting in progeny particles with a wanted size and
specific surface.
2.3.1 Single Particle Breakage
It is not possible to analytically calculate the internal state of stresses of a single irregularly
shaped particle subjected to compressive load. Hence stress based measurements of particle
strength are only valid for primitive regular shapes [21]. Hiramatsu and Oka [22] investigated
the tensile strength for irregular as well as spherical shapes and showed that the tensile particle
strength for an irregular shaped rock can be approximated by the following expression.
$$\sigma = \frac{2.8\,F_c}{\pi D^2} \qquad \text{Eq. 19}$$
This simple equation is derived from a more complex expression of the stress state of a sphere
subjected to compression. The numerator is defined as the critical force for failure times a
factor given by the loading condition, geometrical features and Poisson's ratio. The denominator
is defined as a disc-area where D is the distance between the loading points. In this work this
approximate substitute particle strength is used when conducting single particle compression
tests in order to calibrate the DEM bonded particle model. The equation is very convenient
since it is possible to extract both the critical force as well as the distance between loading
points when conducting compression breakage tests, see Figure 12. This test-procedure will be
further explained in the Material model development chapter.
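Applying Eq. 19 to a series of compression tests is then a one-line computation once the critical force and the loading-point distance have been recorded for each specimen, as sketched below; the test values are made up for illustration.

```python
import numpy as np

# Each test yields the critical force F_c [N] and the distance D [m]
# between the loading points; the numbers below are illustrative only.
F_c = np.array([8.2e3, 12.5e3, 6.9e3, 15.1e3, 10.3e3])   # critical forces
D = np.array([0.061, 0.073, 0.055, 0.082, 0.068])        # loading-point distances

sigma = 2.8 * F_c / (np.pi * D**2)                       # Eq. 19 [Pa]
print("particle strengths [MPa]:", np.round(sigma / 1e6, 2))
print("mean: %.2f MPa, std: %.2f MPa" % (sigma.mean() / 1e6, sigma.std() / 1e6))
```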
Table 1 - Contact loading point arrangements for single particle compression between two plates
Type Plane A Plane B
I. 1-point 1-point
II. 2-point line 1-point
III. 2-point line 2-point line
IV. 3-point plane 1-point
V. 3-point plane 2-point line
VI. 3-point plane 3-point plane
When compressing an irregular shaped rock particle it will be pressed between two parallel
surfaces experiencing loading at specific contact points, see Figure 12 and Figure 13. The
number of contact points varies depending on the shape and orientation of the particle. In
theory a number of contact arrangements exist as demonstrated in Table 1. Some types are
more frequently observed than others. As an example consider type I where the particle is in
contact at only one position for each plate. This for example, is the case for a spherical particle
as described above. It is unlikely for an irregular rock particle to only have two contact points if
the experimentalist is not positioning the rock specimen manually in such a way until
compression begins. During experiments it was observed that type IV and V are the more
common loading point arrangements. It was also observed that when compressing a particle
between two plates an interesting phenomenon occurs; local positions of the particle subjected
to contact are often relatively sharp. In the initial phase of the compression the local stresses are
hence very high resulting in local crumbling breakage due to the brittle nature of rock material.
This increases the area of contact and influences the upcoming stress state in the body and
hence the breakage characteristics. This is also the reason why the otherwise statistically very
unlikely type VI point loading arrangement may occur.
[Figure 12 panels, left to right: Unloaded (F_i = 0); Loaded with a compressive force (0 < F_i < F_c); Loaded just before critical breakage (F_i ≤ F_c); Loaded just after critical breakage (0 < F_i << F_c); Particle broken (F_i = 0). D denotes the distance between the loading points.]
Figure 12 - Schematic illustration of the different phases during a single particle compressive breakage test.
Figure 13 - Photo of an amphibolite particle from the feed sample subjected to compressive breakage
2.3.2 Inter Particle Breakage
Inter particle breakage can be defined as the breakage mode where a bed of particles is
compressed and broken within a confined or unconfined space. During compression, forces
propagate through the bed from particle to particle, creating a force network. The packing
structure is hence of interest when studying bed breakage. Several parameters influence the
packing structure of a material bed:
Particle size distribution
Particle shape
Internal friction
Wall friction
Solid density
When a bed of particles is loaded the particles re-arrange slightly until a static condition is
reached. The bed is then elastically compacted until particles start to fracture. Some research
has been conducted in the field of interparticle breakage in order to better understand the
complex mechanisms. Evertsson and Briggs [23] as well as Liu and Schönert [24, 25] have made
important contributions. The discrete element method has been used as a tool for investigating
interparticle breakage of mono-sized rocks and other agglomerates [26, 27]. Numerical FEM
simulations of interparticle breakage in a confined space have been conducted by Liu [28]. The
breakage of a bed of particles has been modelled in 2D FEM software in order to investigate
the fragmentation behaviour when compressing the bed. In the beginning of the compression
smaller particles are loaded in a quasi-uniaxial or quasi-triaxial compression mode. The smaller
particles have fewer contact points than the larger particles, hence the stress field generated has
a higher local maximum stress resulting in crack propagation. Larger particles are surrounded
by smaller fragments and hence experience a high number of contact points. As the
displacement increases the larger particles will also experience high enough stresses to cause
Hertzian crack propagation.
The interparticle breakage in a cone crusher happens either in a confined or an unconfined
condition depending on the operation. An interesting question is how large the angular segment
is where actual confined breakage takes place as the mantle moves eccentrically if the feeding
condition changes.
3. METHOD
In this section all methods that have been applied or developed in the different phases of the
project are presented with the aspiration that the reader should theoretically be enabled to
reproduce the experiments and simulations. Focus will mainly be put on how methods and
theories have been applied in contrast to the previous section where the theoretical background is
introduced in a more general sense.
3.1 DEM as a CAE tool
The role for CAE tools at R&D departments in all industries is growing steadily. With new
capabilities to simulate and evaluate design decisions during concepts or detail development,
time and resources previously spent on expensive prototypes and field testing are better spent.
The DEM method is a fairly new tool which is in its infancy when it comes to systematic usage by
large R&D departments. Hence, few methodologies or frameworks exist for how or when to
use DEM. DEM is one of many computational engineering techniques, so the methodology
that has emerged in the field of e.g. FEM could be interesting to review. As that method has been
around for a much longer time, a lot of research has been conducted on the management of
FEM analysis.
During development and design of machines or processes which interact with granular material,
it is commonly difficult to predict the behaviour and performance of the system. The types of
situations where analysis and simulation are needed can roughly be categorized as follows;
Evaluation
Problem solving
Optimization
Fundamental understanding
These four can be of interest both for new products as well as for existing products and
implementations. It has been found in this work that in order to be effective and fully leverage
the power of DEM it is crucial to adopt a statistical approach. When it comes to optimization
and fundamental understanding where a high number of parameters need to be studied, it is
recommended to use the design of experiment approach. As computational resources are usually
scarce , fractional factorial analysis [29] is a good way of reducing the simulations needed in
order to draw conclusions. In the case of e.g. concept evaluation or problem solving sometimes
one single or very few simulations are needed in order to give enough information to make
decisions. A framework for how to utilize DEM as a concept evaluation tool for design and
problem solving of bulk materials handling applications, has been presented by Quist [7]. The
work shows that the resolution or quality of the DEM model can be used actively for different
purposes. When the objective is to do a quick concept screening a very simple model can be
set up in order to give some information regarding basic flow trajectories and so on. Such quick
simulations can be set up and run within one hour. By working in an iterative manner with
the concepts, and raising the resolution and quality of the DEM simulations accordingly, the
probability of succeeding with the development efforts is greatly enhanced.
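As an illustration of the fractional factorial idea, the sketch below builds a two-level 2^(4-1) design where the fourth factor column is generated from the product of the first three, halving the number of runs. The factor names are examples only, not the parameter set studied in this work.

```python
from itertools import product

# Two-level fractional factorial plan 2^(4-1) with generator D = ABC,
# giving a resolution IV design: 8 runs instead of the 16 of a full 2^4.
factors = ["CSS", "eccentric_speed", "friction", "bond_strength"]
runs = []
for a, b, c in product([-1, 1], repeat=3):
    d = a * b * c                       # generated fourth-factor level
    runs.append(dict(zip(factors, (a, b, c, d))))

for i, run in enumerate(runs, 1):
    print(i, run)
```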
3.2 Bonded Particle Model Rock Population
A rock material consists of a number of different minerals and crystalline structures with
different mechanical properties. When considering the properties of a rock type, the proportion
of the various minerals is commonly determined by a petrographic analysis. The
petrographic composition of the rock material in Kållered can be seen in Table 2. An example
of the microstructure of granite rock can be seen in Figure 14. As can be seen it is constituted by
a number of different minerals. The mechanical property of the rock mass depends on the
properties of each constituent, proportion, the grain architecture and size as well as weathering
effects, cracks and defects.
Figure 14 - Illustration of the heterogeneous microstructure of a typical granite rock
Table 2 - Petrographic composition of the rock material in Kållered
Fraction (mm) Quantity Proportion (%) Meas. Uncert. (±%) Mineral type
2-4 171 17 2.3 Quartz
457 46 3.1 Feldspar
117 12 2.0 Mica (Biotite)
187 19 2.4 Amphibolite
67 7 1.5 Pot. alkali-reactive material
1 0.1 0.2 Ore-minerals incl. sulphides
3.2.1 Generating a bi-modal particle packing cluster
Different types of size distributions give varying packing performance as well as number of
contact points as illustrated in Figure 15. Particles can be arranged in a number of different
Bravais lattice systems [30]:
Simple cubic (SC)
Face-centred cubic (FCC)
Body centred cubic (BCC)
Hexagonal closed packing (HCP)
These packing structures mainly apply to crystalline structures made up of mono-sized or
double mono-sized particle structures. The arrangement of the particles or atoms, together with
the nature of bonding forces characterizes many of the mechanical properties of a material.
When building a synthetic rock in the DEM environment the ambition is to capture as many of
the features of real rock material as possible. If there was no computational constraint, one
would try to model every atom, molecule or mineral grain. However, currently there is a trade-
off between the number of meta-particles we want to model and how many particles we put in
each meta-particle. Most of the work and simulations done on rock breakage using bonded
particle models focus on the breakage of a single particle in e.g. a uniaxial strength test. In this
case it is possible to capture a fairly accurate breakage mechanism using all the available
particles for one rock specimen. This approach is of course irrelevant when trying to create a
rock population for crushing.
i: Mono distribution ii: Gaussian distribution iii: Bimodal distribution
Figure 15 - Schematic illustration of three different packing structures given by different types of size distributions.
When applying a bonding model to these clusters the contact lines seen in the illustration would become bonding
beams. Hence the strength characteristics of a particle built up by a packed set of spheres strongly depends on the
packing structure and size distribution.
In this work it was found that the most suitable approach to model the breakage of rock
particles is to use a bi-modal distribution with relatively large particles in the high end with
smaller particles acting as cement in between, as demonstrated in the illustration to the right in
Figure 15. An example of the contact network generated from a particle bed with bimodal
distribution can be seen in Figure 16. When activating the bonding function in the simulation
these contacts are converted to bonds.
Figure 16 - Contact network generated by a particle bed with bimodal distribution. The colours represent the length of
the contact vector and show that the body is highly heterogeneous.
The following procedure has been developed for creating a particle packing cluster with a
bimodal distribution suitable for a BPM in EDEM:
i. Create two cylinders with appropriate diameter to fit the wanted end particle size. One should be the
container and the other the particle factory. The container cylinder should be placed around the origin so that
when a 3D particle geometry later is imported it will be fully surrounded by particles.
ii. Define a material with low static friction and a higher stiffness than is used later.
iii. Define a spherical particle named Fraction with a nominal particle size between the coarse and fine modals
of the distribution. The contact radius should be set slightly higher than the physical radius.
iv. Define a particle factory for the coarse end of the distribution with a capped normal distribution with e.g.
µ=2 and capped in the range 1<µ<3. Set the time stamp to t_start = 0 s.
v. Define a particle factory for the fine end of the distribution with a capped normal distribution with e.g.
µ=0.8 and capped in the range 0.6<µ<1. Set the time stamp to t_start = t_step.
vi. Let the particles settle forming a loosely packed bed, see Figure 17. Due to a higher stiffness the overlaps
will be reduced compared to if using the actual stiffness later. By doing so the risk of a preloaded bed is
lowered.
vii. Define a selection space by importing rock shaped geometry, see Figure 17.
viii. Export the particle positions (X,Y,Z) and radius for all particles within the selection space
ix. Reorganize the exported data in the following way;
271
0.0165135 0.0097095 -0.00677124 2.418
0.00664328 -0.0377409 0.00264027 2.123
-0.0288484 0.00226568 -0.00600659 1.594
…
The first line states how many particles the cluster contains. In the rows that follow, the first, second and third
columns are the X, Y and Z coordinates and the fourth is the scaling factor. A scripted sketch of this export step is shown after Figure 17 below.
Figure 17 - The picture to the left show a bimodal particle bed created in step (vi) above. The middle picture shows
the 3D rock selection space imported in step (vii). The picture to the right shows the selection of particles within the
selection space as done in step (viii)
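The sketch below illustrates steps (viii)-(ix): particles inside a selection space are extracted and written in the cluster file format shown above. It assumes a spherical selection space and randomly generated bed data in place of the EDEM export and the scanned rock geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(-0.05, 0.05, size=(5000, 3))     # settled bed positions [m]
radius = rng.choice([0.8e-3, 2.0e-3], size=5000)   # bimodal fraction radii [m]

inside = np.linalg.norm(pos, axis=1) < 0.03        # spherical selection space
sel_pos, sel_rad = pos[inside], radius[inside]

with open("meta_particle_cluster.txt", "w") as f:
    f.write(f"{len(sel_pos)}\n")                   # particle count on line one
    for (x, y, z), r in zip(sel_pos, sel_rad):
        # scale factor assumed relative to a nominal 1 mm fraction particle
        f.write(f"{x:.7f} {y:.7f} {z:.7f} {r / 1e-3:.3f}\n")
```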
3.2.2 Calibration of the bonded particle model
As mentioned in previous chapters rock breakage experiments are often conducted on primitive
shapes such as a cylinder in the uniaxial strength test due to the possibility to calculate the
internal stress state. In this work single particle breakage tests have been conducted on a set of
rock particles from the test material sample. The critical force for failure and the rock size was
recorded. The particle strength given by Eq. 19 was applied in order to find a strength
distribution. The calibration work performed in this work will be presented in detail in the
Material Model Development chapter.
3.2.3 Introducing meta-particles by particle replacement
In EDEM particles are generated to the simulation environment by using particle factories.
Usually geometry such as a box, cylinder or a plane is defined as a particle factory and the user
can define what particles should be created at what rate. This approach is practical if the
purpose of the simulation is to e.g. continuously generate material to a conveyor or create
100’000 particles at once in a mill. However this way of introducing particles is not sufficient
when working with multiple dynamic BPM models.
It is possible to define custom factories in EDEM. In this work a special approach is used for
creating the meta-particles. First a set of dummy particles is created for each meta-particle size
class using standard box geometry as factory. These particles are single spheres and have to be
larger than the meta-particle cluster. When a set of dummy particles has been generated each
dummy particle is used as a custom factory. A custom factory, called Particle Replacement
Factory creates fraction particles according to the coordinates and sizes defined in the meta-
particle cluster coordinate file. Fraction particles are placed inside the dummy particles
according to the local coordinate system of the dummy particle. This is why it has to be larger
than the cluster. When the fraction particles are in place, the dummy particle is removed. An
example of this procedure can be seen in Figure 18.
Figure 18 - Snapshot from EDEM showing a stage in the particle replacement procedure. In the picture a set of meta-
particles has been created and a new set of large dummy particles can be seen for the next replacement action.
3.3 Industrial scale crusher experiments
Industrial scale experiments have been conducted at a quarry owned by Jehander Sand & Grus
AB located in Kållered, south of Göteborg. The site has several cone crushers in process for
both secondary and tertiary operations. The secondary crusher, a Svedala H6000 cone crusher,
was chosen for the experiments. The ambition with the tests has been to fully capture all
possible data concerning both the feed material, machine operation and product material.
When conducting tests on a secondary crusher operation normally it is problematic to sample
the feed material and do sieve analysis due to the large sized rocks. Rock particles are up to 250
mm in size with a considerable mass for each rock. This has consequences on the statistical
significance as a minimum number of particles should be sampled for each size class. If
following the recommendations in European standards several tons of feed materials need to be
sampled. This is not feasible, hence as much material as possible has been sampled and sieved.
While digging off material from the belt is a relatively simple task, sizing the rocks bigger than
45 mm is cumbersome. Mechanical sieves with aperture sizes larger than 90 mm are very rare.
The lab on site has a vibrating sieve with a largest aperture of 45 mm. In order to size
particles larger than this a set of square sizing-frames was designed and manufactured, see
Figure 19.
Figure 19 - Feed sizing with sizing-frames designed in the project
In Table 3 a test plan for the full scale experiments in Kållered can be seen. Five different runs
have been performed at CSS ranging from 34 to 50 mm. Belt cuts are extracted from the
product belt for each run and the feed sampled for the first, third and fifth run.
Table 3 – Industrial scale experiment test plan

Run schedule (SVULLO EXPERIMENTS):
Run       CSS [mm]   Samples   D. Time [min]   Tot. Time [min]
RUN1-34   34         F+P       12.5            22.5
RUN2-38   38         P         10              15
RUN3-42   42         F+P       12.5            22.5
RUN4-46   46         P         10              15
RUN5-50   50         F+P       12.5            22.5
Total                          57.5            97.5

Time frame with feed cut (activity Quist / activity Åberg; times in minutes):
1. Set CSS (0.5)
2. Lead CSS calibration (5)
4. Start DAQ measurement (1)
5. Run crusher (3)
6. Make time note (0.5)
7. Stop DAQ measurement / Stop belts (1)
8. Lock OFF belts (0.5)
9. Do belt cut (Feed) / Do belt cut (Product) (10)
10. Lock ON (0.5)
11. Start belts (0.5)
Total 22.5, of which 10 up-time and 12.5 down-time

Time frame without feed cut (activity Quist / activity Åberg; times in minutes):
1. Set CSS (0.5)
4. Start DAQ measurement (1)
5. Run crusher (3)
6. Make time note (0.5)
7. Stop DAQ measurement / Stop belts (1)
8. Lock OFF belts (0.5)
9. Do belt cut (Product) / Do belt cut (Product) (7.5)
10. Lock ON (0.5)
11. Start belts (0.5)
Total 15, of which 5 up-time and 10 down-time

Samples (expected weight 40 kg each, 320 kg in total):
S1-34-P, S2-38-P, S3-43-P, S4-46-P, S5-50-P, S1-34-F, S3-43-F, S5-50-F

Sampling equipment: Buckets (20 l) x 30, Sacks x 10, Brush x 1, Shovel x 1, Spades x 2, Tape x 2, Tape measures x 2, Scale x 1
Sample processing equipment: Oven x 1, Sieve shaker x 1, Coarse sieves x 6, Shape index meter x 1, Scale x 1
A test-sequence was created in order to manage the experiment. This was done for several
reasons, such as personal safety, minimizing the risk of data and sample loss, sample quality
and time management. Before the first test the process was operated until it reached a steady
condition. Then a dry run was performed in order to test each action. The tests followed the
following sequence of actions:
1. Set CSS
2. Start feeding
3. Run until choked condition
4. Start data logging and run for 3 minutes
5. Stop incoming feed
6. Stop data logging
7. Stop conveyors
8. Perform lock-out on conveyors
9. Do belt cut
10. Rendezvous at station and lock on
All product samples were handled in plastic buckets with handles and lids that prevent moisture
from escaping, see Figure 20. Each bucket was weighed after the experiments as a control
measure and as a reference for moisture content. The feed samples were handled in tough
reinforced polymer bags due to the large sized rock particles.
Figure 20 - All the material sampled during the experiments placed in the lab before sample processing.
The product samples have been processed in accordance with European standard EN933-1.
First each sample was sieved using the large vibrating sieve with an 8mm bottom deck and 63
mm top deck. Each sample was hence split at 8 mm. The large size fraction was simply weighed
due to the low amount of moisture in the large size fraction. The minus 8 mm material was split
down to 2+2 kg and dried for 2 hours. Each 2 kg sample was then sieved in a conventional
cylindrical vibrating screen in order to retrieve the total size distribution from 63 µm to 63 mm.
One of the product samples after the coarse sieving can be seen in Figure 21.
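The post-processing from retained sieve masses to a cumulative passing curve can be sketched as follows; the aperture series and masses below are illustrative, not measured values from these experiments.

```python
import numpy as np

# Retained mass per sieve deck, coarsest first; the pan holds material
# passing the finest sieve. All numbers are made up for illustration.
apertures = np.array([63, 45, 31.5, 16, 8, 4, 2, 1, 0.5, 0.063])  # [mm]
retained = np.array([0.0, 3.1, 7.4, 9.8, 6.2, 4.0, 2.9, 2.1, 1.5, 1.0])  # [kg]
pan = 0.4                                                          # [kg]

total = retained.sum() + pan
# Cumulative retained at and above each aperture -> percent passing below it
passing = 100 * (1 - np.cumsum(retained) / total)
for a, p in zip(apertures, passing):
    print(f"{a:>6.3f} mm : {p:5.1f} % passing")
```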
Figure 21 - Picture showing each size class during coarse sieve processing as well as the minus 8 mm material.
In Figure 22 one of the feed samples can be seen. All rocks larger than 45 mm have been
individually tested in the sieve-frames and put in the corresponding box. The picture also gives
an indication of the size distribution of the feed.
Figure 22 - The picture shows each size class from the manual sieve analysis of one of the feed samples.
3.4 Crusher geometry modelling
One of the most difficult obstacles to overcome when trying to simulate and replicate the
behaviour of a real crusher is to create a good geometrical model. The easy method is to use
nominal CAD geometry. However these geometries do not take wear or liner design changes
into consideration. Even if it is known what type of mantle and concave should be installed it is
very difficult to know for sure when looking at the liners in operation. Also it may be very
difficult to get hold of the CAD geometry for each specific liner profile.
In this project this was solved by 3D-scanning both the mantle and the concave two weeks after
the experiments had been performed. The scanner used is a FARO FOCUS3D laser scanner
provided and owned by Roctim AB. The ambition was to perform the scanning inside the plant
workshop in a controlled environment. However due to operational issues on site, the liners
were never moved. Hence the scanning was performed outdoors without possibility to arrange
the liners in a suitable way, see Figure 23.
Figure 23 - Left: test scan of a mantle in the mechanical workshop. Right: position of the concave and top frame when
scanning. The concave was positioned on a slope, hence the scanning procedure was problematic.
In Figure 24 a planar view of the 3D scan of the concave is shown. The scanner was placed
inside the mantle at two positions in order to capture the full concave geometry. Due to the
position on the ground it was difficult to get a high quality scan. If the concave would have been
placed inside the workshop on a support structure it would have been level and possible to
clean before scanning.
Figure 24 - Snapshot from the 3D-scanning post-processing software showing the unwrapped model of the concave
and spiderarms with a color map applied to it.
Ideally when scanning a mantle it should be positioned as seen in Figure 23. However the
mantle of interest had to be scanned on its position after maintenance hence only a section was
captured as shown in Figure 25.
Figure 25 - Snapshot of the mantle from the 3D-scan post-processing software.
Since it was difficult to capture the full mantle and concave geometries an alternative approach
was used for creating a representative liner profile. From both the mantle and concave scan
data a set of section samples was extracted and imported to CatiaV5. By drawing spline curves
on these sections and finding a best mean a representative profile has been found. The final
mantle and concave surfaces were generated by revolving the spline profile around the centre
axis.
3.5 Crusher data acquisition
A data acquisition system has been developed for sampling data at high frequency from the
available crusher sensors. Pressure, power draw, shaft position and temperature signals have been
sampled by using opto-isolators splitting the signal from the installed crusher control system. In
this work only the pressure and power draw signals have been analysed. The motive behind
using a secondary data acquisition system instead of extracting data from the installed control
system is based on the suspicion of signal aliasing. The installed system samples data at 10 Hz
which is a too slow frequency to capture the true nature of the signals as will be shown in the
next chapter.
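The aliasing suspicion can be illustrated with a few lines of code: a 5 Hz fluctuation, which is the frequency observed in the measurements presented later, sampled at 10 Hz can collapse to an almost flat trace, while 500 Hz resolves it. The signal parameters below are synthetic.

```python
import numpy as np

f_sig, amp, offset = 5.0, 30.0, 150.0      # fluctuation [Hz], amplitude, mean
t_fast = np.arange(0, 1, 1 / 500.0)        # secondary DAQ system at 500 Hz
t_slow = np.arange(0, 1, 1 / 10.0)         # plant control system at 10 Hz

signal = lambda t: offset + amp * np.sin(2 * np.pi * f_sig * t)
# Sampling at exactly twice the signal frequency hits the same phase points
# every time, so the fluctuation is invisible in the slow trace.
print("500 Hz peak-to-peak: %.1f" % np.ptp(signal(t_fast)))
print(" 10 Hz peak-to-peak: %.1f" % np.ptp(signal(t_slow)))
```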
Figure 26 - NI USB-6211 data acquisition card
A multifunctional data acquisition card (model: NI USB-6211) from National Instruments was
used for sampling the signals, see Figure 26. The card is connected via USB 2.0 interface to a
laptop with the NI software LabVIEW. A simple program was developed using block
programming language. The program is based on three functionalities:
Data capturing and conversion – A function is setup to acquire the signals from the
DAQ card and make them available for the program. Then the signals are separated
and converted/calibrated from 1-10 V to the correct unit. The calibration factors are
based on sensor specific parameters.
Data logging – The calibrated signals are coupled to a logging function that, when
enabled, continuously writes data to a log file until disabled.
Graphical interface – In order to enable online monitoring of the crusher signals a
simple interface was designed. The interface also contains fields for setting the scaling
parameters for each signal as well as a data log trigger button. The graphical interface
can be seen in Figure 27.
Even though the DAQ system design was relatively straightforward, there were a number of
practical difficulties that had to be solved before the system operated as anticipated.
Figure 27 – top: LabVIEW graphical interface with functions for displaying and logging data. Bottom: Data acquisition
system setup at the crusher control room
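The conversion step in the logging chain can be sketched as a simple linear mapping from the 1-10 V range to engineering units; the calibration constants below are placeholders, not the sensor-specific factors actually used.

```python
def volts_to_unit(v, v_min=1.0, v_max=10.0, u_min=0.0, u_max=250.0):
    """Linear 1-10 V to engineering-unit mapping, e.g. an assumed 0-250 bar
    range for a pressure gauge; the constants are placeholders."""
    return u_min + (v - v_min) * (u_max - u_min) / (v_max - v_min)

for v in (1.0, 5.5, 10.0):
    print(f"{v:4.1f} V -> {volts_to_unit(v):6.1f} bar")
```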
4. CRUSHER EXPERIMENT
The aim of the following section is to present the results from the industrial scale experiments
performed in the project. The data shown will be presented and commented on independently
from the simulation results.
4.1 Size reduction
The product particle size distribution for the five different tests as well as the feed particle size
distributions can be seen in Figure 28. As anticipated the product gets finer when reducing the
gap setting, apart from the CSS42 sample which deviates from expectations. The reason for this
deviation is unknown but could be either related to a mistake in the sampling, sampling
processing or the post processing. It could also be due to stochastic variation in the feed. As can
be seen, the CSS42 feed sample differs considerably from the other feed samples, which could also be
the reason for the deviation. As previously mentioned a very large feed sample is preferred in
order to achieve statistical significance, hence the three different samples have been combined
as a representation of the total feed sample.
[Figure 28: cumulative passing [%] versus aperture size [mm], 0.1-100 mm; series: CSS-34-P, CSS-38-P, CSS-42-P, CSS-46-P, CSS-50-P, FEED TOT, CSS-34-Fd, CSS-42-Fd, CSS-50-Fd]
Figure 28 - Feed and product particle size distributions for the five different CSS settings.
The throughput capacity for the five different tests can be seen in Figure 29. The data collected
during a previous survey on the same process is also shown as a reference. Both data sets
suggest a non-linear relationship between close side setting and capacity. An interesting feature
of the curve shape is the mid peak at 42 mm for the current tests and 44 mm for the old survey.
The 2 mm difference may be due to difference in feed material or a gap calibration deviation. A
possible explanation for the rising trend at higher CSS is that the cross-sectional area at the
choke level gets slightly larger when increasing the gap setting.
[Figure 29: capacity [T/h] versus close side setting [mm]; series: Current Survey, Previous Survey]
Figure 29 - Capacity for the five different CSS settings. Also capacity data from a previous survey conducted on the
same machine is shown for reference.
Even though the particle size distribution plot shows that the material is finer for lower CSS it is
more easily displayed by looking at the reduction ratio, see Figure 30. The reduction ratio is
defined as the 50th percentile for the feed divided by the 50th percentile for the product. For
example, F50 equals 67 mm and P50 for CSS at 34 mm is 19.5 mm, which gives a reduction
ratio of 3.44. The data shows a strong negative linear trend when increasing CSS. The
correlation coefficient value is relatively high even though the CSS 42 mm deviates from the
trend as described previously. If combining the insights from both the capacity and reduction
ratio plots we can see that for lower CSS the rock material is subjected to more crushing at the
expense of lower throughput capacity.
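The reduction ratio calculation and its linear trend can be reproduced with a few lines; F50 and the P50 value for CSS 34 mm below match the reported example, while the remaining P50 values are illustrative.

```python
import numpy as np

css = np.array([34, 38, 42, 46, 50])              # close side settings [mm]
F50 = 67.0                                        # feed 50th percentile [mm]
P50 = np.array([19.5, 20.7, 23.9, 24.3, 29.8])    # product 50th percentiles [mm]

ratio = F50 / P50                                 # reduction ratio per run
k, m = np.polyfit(css, ratio, 1)                  # least-squares linear trend
print("reduction ratios:", np.round(ratio, 2))
print(f"trend: ratio = {k:.4f}*CSS + {m:.4f}")
```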
[Figure 30: reduction ratio F50/P50 [-] versus CSS [mm]; linear fit y = -0.0806x + 6.2385, R² = 0.9458]
Figure 30 - The reduction ratio for the five different CSS settings showing a negative trend when increasing the CSS.
4.2 Power draw
The power draw signal has been sampled at 500 Hz by using the developed DAQ system. The
sampled signal during 120 seconds of operation for the five tests can be seen in Figure 31. The
signal amplitude is very high for all tests, which normally indicates poor operation. The initial
ambition was to sample the power draw signal from the plant control system, however this data
was lost. When the test was conducted the power draw signal displayed by the plant control
system did not show this large amplitude. It is strongly suspected that the sampling frequency of
the control system is too low and that signal filtering is applied in such a way that the peaks are
effectively removed.
Figure 31 - Power signals for five different CSS over two minutes of operation. Even though the amplitude is very high
the linear trend lines show a distinct difference in mean power draw.
As the plot in Figure 31 is very compact it is difficult to see what is actually going on. However
the ambition is to show the clear difference in the average power draw when applying a linear
regression line on each signal. Due to the large number of data points it is impossible to see
what the signal looks like in detail. In Figure 32 the power draw signal for one second of
operation is shown. Here it is clearly seen how the signal fluctuates at a specific frequency. The
frequency of the fluctuations is approximately 5 Hz, which is the same as the mantle eccentric
speed. This means that the variation is somehow related to the movement of the mantle. Recall
from the theory chapter that the feeding of material is a vital aspect of a crusher operation. The
probable cause of the fluctuations observed is hence misaligned feed and segregation. At the
peak angular position there is probably both a larger amount of material as well as a finer feed
size distribution that requires more energy to be broken.
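The fluctuation frequency can be extracted from the 500 Hz log with a Fourier transform and compared to the eccentric speed, as sketched below on a synthetic stand-in for the measured signal.

```python
import numpy as np

fs, T = 500.0, 120.0                       # sampling rate [Hz], duration [s]
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic power draw: mean level, 5 Hz fluctuation and measurement noise
power = 120 + 35 * np.sin(2 * np.pi * 5.0 * t) + 8 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(power - power.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_dom = freqs[np.argmax(spectrum)]
print(f"dominant fluctuation frequency: {f_dom:.2f} Hz")  # ~5 Hz, the eccentric speed
```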
[Figure 32: power draw [kW] versus time [s] over one second; series: Power_CSS34, Power_CSS38, Power_CSS42, Power_CSS46, Power_CSS50]
Figure 32 - Power draw signal measured during one second showing the highly fluctuating signal for all CSS settings.
The specific time span chosen for each test is randomly picked.
When operating any type of process it is of the essence to run it under statistical process control
[29]. This generally means that variation from both stochastic as well as systematic sources
should be limited. When the variation is suppressed the challenge is to keep the process stable
and hence predictable. If the process is stable and predictable then it is possible to control it.
The standard deviation of the power draw signal can be seen in Figure 33. The lowest variation
can be seen for CSS 38mm.
[Figure 33: power draw standard deviation [kW] versus CSS [mm]]
Figure 33 - The standard deviation of the power draw signal is shown for the five CSS settings.
As previously mentioned and also seen in Figure 31 the signal has distinct average values. In
Figure 34 the average power draw can be seen. The linear trend is very strong with a correlation
coefficient of 0.9837. The data indicates that the crusher is working harder, i.e. putting more
energy into the rock bed per unit time, at lower CSS values. This also aligns with the previous