Along with the Pain equation, the Master's thesis writers have developed the Pain diagram, which displays the errors with the top 6 highest Pains, sorted in ascending order (see Figure 12). This diagram gives a clear view of the currently most alarming errors in the process.

To facilitate drilling down into stop reasons, the frequency and downtime can be displayed and analysed in a pyramid diagram (see Figure 13).

Figure 12 – The Pain chart as it is displayed in OPT

Figure 13 – Pyramid diagram of Pain components
5.1.6 SETTING FILTERS & TARGET VALUES

In the calculation of Overall Utilisation, Availability, Performance and Quality, target values are used. These target values are critical components of the calculations since they each have a large impact on the results. This section presents the methods used when determining the individual target values.

OVERALL UTILISATION & AVAILABILITY

The Overall Utilisation and Availability calculations use filters to determine what type of activity the unit is engaging in. For instance, when a comminuting unit is running above a certain power level, it is assumed to be performing primary production activities. In this project these levels have been determined based on a large amount of data analysis. These determined levels shall be reviewed periodically due to possible process changes.
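To make the filter idea concrete, the following is a minimal sketch under stated assumptions: the variable names and power samples are invented for illustration, and the 140 kW threshold is the crusher example mentioned later in section 6.1.7, not a general setting.

```python
# Sketch: classify each power sample as primary production when it exceeds
# the determined filter level. All figures here are illustrative assumptions.
PRIMARY_PRODUCTION_POWER_KW = 140.0  # crusher example from section 6.1.7

power_samples_kw = [12.0, 155.3, 148.9, 90.1, 162.7, 0.0, 151.2]  # e.g. one sample per minute

primary_minutes = sum(1 for p in power_samples_kw if p > PRIMARY_PRODUCTION_POWER_KW)
overall_utilisation = primary_minutes / len(power_samples_kw)
print(f"Overall Utilisation over the sampled period: {overall_utilisation:.0%}")
```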
PERFORMANCE

When calculating Performance, a Target Production Rate is used for comparison with the Actual Production Rate. The Anglo American Equipment Performance Metrics describes three ways of setting the Target Production Rate:

1. Best demonstrated production rate, defined as the best demonstrated performance determined by calculating the average of the five best monthly production rates.

2. Equipment nameplate/design capacity rate.

3. Budgeted production rate.

The Master's thesis writers propose a fourth way of setting the target production rate: it should be based on the design capacity of the unit and take into consideration changes to parts and settings that have affected the installed capacity. For a crusher, critical changes could, for instance, be a change of chamber or closed side setting (CSS). Such changes will result in a capacity change and should therefore be taken into account when setting target production rates.

QUALITY

When calculating Quality, a Target Particle Size is used for comparison with the Actual Particle Size. The Target Particle Size at different points in the process has been set with advice from production experts at MNC. The target is based on the particle size demands of the downstream comminuting unit.

5.1.7 CLASSIFICATION

The equipment has been divided into four classification groups: A, B, C and D. These groups divide the equipment based on its complexity and need of monitoring. The available measuring points have also affected in which group a unit was placed. Table 10 presents the measures for the different classification groups.
Table 10 – Classification of units

Classification A – Equipment: Circuits. Details: OEE, Overall Utilisation, Performance, Quality, Availability, Utilised Uptime.
Classification B – Equipment: Comminution units. Details: OEE, Overall Utilisation, Performance, Availability, Utilised Uptime, MTBF, MTTR, Pain.
Classification C – Equipment: Supporting equipment (with available measures). Details: Overall Utilisation, Availability, Utilised Uptime.
Classification D – Equipment: Supporting equipment (without available measures). Details: Availability.
5.2.3 OEE TABLE MODULE

The OEE Table module displays the components of the OEE calculations categorised by unit type (crusher, classifiers, feeders and conveyors). The OEE Table module provides transparency to the calculations and can be used to acquire a more thorough understanding of the charts displayed in the OEE module. At the top of the module, the monitored time intervals are displayed. Additional metrics displayed in the module are Average Size and Average Deviation.

The definition of any displayed component can be viewed by hovering over the component name (see Figure 22, where the pointer is hovered over "Availability").

The displayed rates included in OEE have a colour code which visualises the current status. Green is for Satisfactory, yellow for Poor and red for Alarming.

The colour limits for the different units can be seen at the bottom of the sheet. The limits shall be set based on the business targets and can only be changed by the administrator of OPT.

Figure 22 – Information appearing when hovering over Availability

5.2.4 PAIN MODULE

Pain is a way of visualising the combination of stop frequency and downtime for a unit in the monitored area. The Pain module in OPT displays charts with the top 6 Pains in the monitored process area, sorted in descending order (see Figure 23).

At the top of the sheet, the monitored time intervals are displayed.

The charts are sorted in descending order to visualise which stop reasons are currently causing the greatest damage to the process. The stop reasons are labelled below each bar in the chart. The unit for the y-axis is thousand minutes; however, Pain is displayed as a unit-less metric.
At the top of the sheet, above the charts, Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR) are displayed for the crushing unit in the area.

Figure 23 – Pain chart as it is presented in OPT

5.2.5 STOP TABLE MODULE

The Stop Table module presents, in more detail, the information upon which the Pain Analysis is based (see Figure 24). The stop information is drawn from stop reporting through the PI-database. The following information is presented to the user:

- Stop time: the time the unit stopped
- Start-up time: the time the unit started up after the stop
- Duration: the duration of the stop
- Stop reason: the reason for the stop
- Manually entered comment: possible manually entered comment by the stand-by official
- Downtime categories:
  - Downtime sub-category code
  - Downtime sub-category name
  - Downtime category code
  - Downtime category name
- Scheduled/Unscheduled: indicates whether the stop was scheduled or not

The downtime categories are used to facilitate the allocation of stops in alignment with the Anglo American Equipment Performance Metrics Time Model.

At the top of the sheet, the total number of stops and the total downtime in the chosen time interval are displayed. At the extreme top of the sheet, the monitored time intervals are displayed.

Figure 24 – The stop table as it is presented in OPT
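To make the listed fields concrete, the sketch below models one stop record; the class and field names are assumptions for illustration and are not the actual PI or OPT identifiers.

```python
# Illustrative sketch of one stop record carrying the fields listed above.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StopRecord:
    stop_time: datetime             # the time the unit stopped
    start_up_time: datetime         # the time the unit started up after the stop
    stop_reason: str                # the reason for the stop
    comment: str                    # possible manually entered comment
    downtime_subcategory_code: str
    downtime_subcategory_name: str
    downtime_category_code: str
    downtime_category_name: str
    scheduled: bool                 # True if the stop was scheduled

    @property
    def duration_minutes(self) -> float:
        """Duration of the stop, derived from stop time and start-up time."""
        return (self.start_up_time - self.stop_time).total_seconds() / 60.0
```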
5.3 OPT METHOD

In the following section, the results of the third phase of this project will be presented.

Figure 25 – Project phase three - Design

The result of the third phase of this Master's thesis project is a methodology describing how to use the output of OPT in a productive way, with primary focus on finding root causes to productivity limiting issues and following up on actions taken. The three supportive areas to the right of OPT (in Figure 26) are User expertise, the 5 WHYs and the OPT Guidelines. These areas will be presented in more detail here.

Figure 26 – A model describing the intended usage of OPT

5.3.1 USER EXPERTISE

In order to achieve a valid result when analysing the OPT output, a certain user expertise is required. The user shall possess good knowledge of the process and have previous experience from working with the process. It is also important that the user possesses a systematic problem solving technique.

The success of the OPT method is also determined by its users and their expertise. It has been seen that a cross functional user group, i.e. a group consisting of personnel from different functional groups of the organisation, including for instance both technical and engineering staff, is the most successful combination. The group members' different backgrounds help to create a broader view of the OPT output. If a cross functional group is used, it also helps to increase the collaboration between departments and limits dual work. The OPT Method's incorporated action list facilitates tracking of issued actions, which has been seen to be necessary in large organisations. Another positive effect is that cross-functionality unites the users around a common systematic problem solving technique.

5.3.2 FIVE WHYS

The root cause finding technique, the 5 WHYs, is proposed as a suitable method for finding root causes to issues encountered when analysing the OPT outcome. For more information on the 5 WHYs, see section 2.2.1 Five Whys.

5.3.3 OPT GUIDELINES

The Master's thesis writers have developed user guidelines for how to use OPT, to facilitate usage of the tool. The guidelines will be presented here and consist of the following documents:

- OPT Manual
- OPT Meeting Procedure
- OPT Action List

OPT MANUAL

The OPT Manual is a complete guide on how to use the tool. It provides a step-by-
5.4 OEE FOR A GENERAL SINGLE STREAM PROCESS

Based on the general method of calculating OEE and the time definitions from the Anglo American Equipment Performance Metrics Time Model, a customised version has been developed in this Master's thesis project to better suit a general single stream process. However, this is not the method used in the developed tool, OPT. To view the OEE calculation model used in OPT, see section 5.1.1 Final OEE Calculation.

$$\text{Availability} = \frac{\text{Uptime}}{\text{Total Time}} \qquad \text{(Equation 21)}$$

$$\text{Performance} = \frac{\text{Actual Production}/\text{Target Rate}}{\text{Uptime}} \qquad \text{(Equation 22)}$$

$$\text{Quality} = 1 - \frac{\text{Mean deviation from Target Size}}{\text{Target Size}} = 1 - \frac{\tfrac{1}{n}\sum_{i=1}^{n}\text{Deviation from target size}_i}{\text{Target Size}} \qquad \text{(Equation 23)}$$

The general calculation of Availability was customised in order to better fit a single stream process. Instead of using Planned Production Time as the denominator in the general OEE definition, Total Time is used, which is the total hours available. The Availability is therefore determined by dividing the Uptime by the Total Time.

The general performance calculation had to be customised to be valid for a single stream process. Performance for a general single stream process can be calculated as stated in Equation 22. This gives the ratio between the targeted time to produce the actual tonnes produced and the actual time consumed (uptime).

The general quality calculation was customised to be valid for a single stream comminution process. Instead of using good pieces as a measure, the Master's thesis writers have developed a new method to calculate quality based on particle size. The Quality calculation is described in more detail in section 5.1.1 Final OEE Calculation.
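As a rough illustration of Equations 21-23, the sketch below computes the three components from invented example figures; the function and variable names, the sample data, and the combination into OEE (as in Equation 24, section 6.1.1) are illustrative assumptions, and deviations below target size are counted as zero in line with the Quality discussion in chapter 6.

```python
# Minimal sketch of Equations 21-23 for a general single stream process.
# Names and figures are illustrative assumptions, not values from OPT or MNC.

def availability(uptime_h, total_time_h):
    """Equation 21: Uptime divided by Total Time."""
    return uptime_h / total_time_h

def performance(actual_tonnes, target_rate_tph, uptime_h):
    """Equation 22: targeted time for the actual tonnes, divided by the uptime consumed."""
    targeted_time_h = actual_tonnes / target_rate_tph
    return targeted_time_h / uptime_h

def quality(particle_sizes_mm, target_size_mm):
    """Equation 23: one minus the mean deviation from target size, relative to target.
    Deviations below target are treated as zero (assumed, per chapter 6)."""
    deviations = [max(size - target_size_mm, 0.0) for size in particle_sizes_mm]
    return 1.0 - (sum(deviations) / len(deviations)) / target_size_mm

# Example: components combined into OEE as in Equation 24 (section 6.1.1).
oee = (availability(600, 720)
       * performance(45_000, 80, 600)
       * quality([11.2, 12.5, 13.0], 12.0))
```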
5.5 CRUSHER AND MILL STOPS REPORTING PROCEDURE

To enable the Pain Analysis to be performed, there is a need for daily reports of stops and their causes. There is a daily record of crusher and mill stops which is entered manually every morning. However, when this project started, the stop data was extracted into a report on a monthly basis, and its lead-time was longer than required.

This means that the required data to perform the Pain Analysis existed, but a system to extract it daily was not in place and the data required a great deal of
CHAPTER 6 - INTRODUCTION TO DISCUSSION

The mining industry has lagged behind the manufacturing industry when it comes to process control and process optimisation. For instance, the mining industry does not use modern methods for measuring and calculating equipment performance metrics in an accurate way and for using this information to monitor and improve processes. One of the goals in this project has been to take some of the knowledge from the manufacturing industry and apply it in the mining industry.

The discussion and conclusions are presented according to the three distinct project phases (see Figure 28). The initial challenge, as well as the first phase of this project, was to define an equipment performance calculation model for a single stream comminution process. The second phase of this project was to develop a tool (OPT) that uses the calculation model to perform real time calculations of OEE and other equipment performance metrics. The third phase was to develop a methodology describing how to use the tool output in the organisation in a value creating way, such as finding root causes to productivity limiting issues.

In the following sections, a discussion of findings from the different phases of the project will be presented, as well as conclusions drawn, answers to the research questions, observations, recommendations for the organisation and, finally, future research proposals.

Figure 28 – Project phases: (1) define a calculation model for OEE and other equipment performance metrics in a single stream process; (2) develop a tool that calculates OEE and other equipment performance metrics in real time; (3) design a method describing how to use the tool output in the organisation with primary focus on finding root causes.

6.1 CALCULATION MODEL

The baseline for developing the calculation model for OEE in a single stream comminution process was to make it as generic as possible and avoid making it site specific.

Figure 29 – Project phase one - Define

During the pre-study in Sweden, the Master's thesis writers developed a functional method to calculate OEE in a single stream comminution process, which was tested and validated using historical production data from MNC. This was a good learning point which facilitated the understanding of the characteristics of the model and of how certain parameters affect the output.

Later, the Master's thesis writers were introduced to the Anglo American Equipment Performance Metrics, which is an internal company standard describing how to calculate equipment performance metrics including, for instance, OEE. The model developed by the Master's thesis writers was found to be well aligned with the company standard, which is very good. The standard, however, was not comprehensive enough regarding quality definitions and calculations. Therefore, the quality definitions from the thesis writers' OEE model were adopted into the company's standard OEE definitions.
The OEE model was continually under development during the project and, as knowledge in the area grew, the model was refined and additional parameters were added.

OEE is a good performance measure, but it is important not to read it as one parameter; it is actually four. The individual parameters give a broader understanding of the equipment's performance and provide different approaches as to how the equipment's performance can be improved.

The downside of OEE is that it cannot provide the user with the reason for an eventual increase or decrease of the measure. It would of course be a great feature if the OEE could tell exactly what happened in the process, but that is not the character of the measure. This gap can be partially filled by using a systematic analysis method developed by the Master's thesis writers, discussed under section 6.3 OPT Method.

The fact that OEE already existed as an internal standard has only been beneficial for the project, since it has created a smoother introduction of OPT and its parameters. However, the OEE methods have not yet been fully implemented in the organisation, and OPT can act as a facilitator in the full implementation of OEE. In this way, OPT and the organisational OEE implementation can interact to create an OEE proficient organisation.

6.1.1 OEE CALCULATION

A unit's OEE can be determined by multiplying its Overall Utilisation, Performance and Quality (see Equation 24). The OEE is based on these three metrics, each one carrying the same weight; hence all are equally important when it comes to OEE. The OEE provides a good measure of the status of a unit; however, it cannot tell what causes the OEE number or how the OEE can be changed. By looking into the three included parameters, a slightly better view of the current unit status will be provided. Still, answers to possible issues will be hard to determine. To give a more inclusive picture of the unit status, two additional parameters are presented in OPT: Availability and Utilised Uptime.

$$\text{OEE} = \text{Availability} \times \text{Performance} \times \text{Quality} \qquad \text{(Equation 24)}$$

OVERALL UTILISATION

The metric Overall Utilisation shows the unit's time distribution as the percentage of time the unit is used for primary production. This is the metric which represents the time usage in OEE. However, it will not show the percentage of time the unit has been available for production, merely the time it has been utilised. By looking at the Overall Utilisation one cannot tell whether the unit has been utilised all the available time or if there is more available time to utilise. That is, one cannot tell whether the available time has to be increased in order to increase the utilisation, or whether the utilisation can be increased without increasing the availability of the unit. To be able to determine this, OPT presents both Availability and Overall Utilisation (see section 6.1.2) for all possible units. Displaying both the Overall Utilisation and the Availability facilitates the understanding of the distribution of the equipment's total time. To clarify the relation between Overall Utilisation and Availability, the Master's thesis writers defined a metric referred to as Utilised Uptime (see section 6.1.3), which is the ratio between Overall Utilisation and Availability.
Due to lack of required data, it is not possible to determine Overall Utilisation for all units in the process. In those cases, Availability is calculated instead. The user has to be aware that these two metrics differ and shall not be compared. The Availability can, however, be compared between similar units since it is calculated for all units included in the project.

PERFORMANCE

The Performance of a unit has two components, Target Production Rate and Actual Production Rate, and is computed as the ratio between the two. This means that the Performance is as affected by the Actual rate as by the Target rate. The Actual rate is determined by data extracted from the PI-database and is only dependent on the performance of the process unit. The Target rate is set by the organisation following certain guidelines (see section 5.1.6 Setting Filters & Target Values). This in turn means that a rate set by the organisation has a huge part in deciding the Performance rate of a unit. Therefore, the setting of the Target Production Rate needs to be done very carefully, otherwise Performance can turn out to be a misleading metric.

It should be noted that the Performance can result in a ratio greater than 100%. This will occur when the Actual Rate exceeds the Target Rate, which obviously happens when the Target Rate is defined at too low a value. In such a case the Target Rate shall be reviewed and possibly adjusted.

The Performance metric shows how efficiently the unit is working but will not show whether the right things have been done, which is defined as effectiveness. The effectiveness has to be ensured by other organisational processes, such as quality assurance. The third metric in OEE provides a view of the quality of the performed work.

Due to lack of data, the Performance metric cannot be determined for all units. This is the case for all the classifiers, conveyors and feeders at MNC. From the available data one cannot tell the rate at which the units have been performing; hence, the Performance cannot be determined. For these units, the OEE will consist of Overall Utilisation and/or Availability. This is acceptable since the concerned units are not primary contributors to the main task of the production process - to comminute ore. Their main function can be regarded as supportive; therefore their main concern is to be available to perform their dedicated task.

QUALITY

The new definition of Quality (see Equation 25), combined with the method of how to determine quality in practice in a single stream comminution process, has not been seen before. The development of a quality metric makes the OEE calculation complete and provides a more accurate OEE value than previously, when the quality most often was assumed to be 100% in a process such as this one. It is a well-working method; however, it could be refined. It does not take into account the magnitude of the deviations below target size but assumes all sizes below target to have a quality of 100%. The method could be refined to take those variations into account, which would result in a more accurate quality measure. At MNC, there was no need for lower particle size limits since all particles smaller than target size were accepted.

The method could also allow a certain span of sizes around the target size, if the unit
target size will always be required. In this case some of the target sizes for the process were already determined, others were not. At those points in the process where the actual size is possible to determine but the target size is undefined, a method should be developed to determine the target size.

As previously mentioned, the Quality measure will not take the magnitude of the deviation below the target size into consideration. It will only take into account the percentage deviation above the target. This presentation of the number is chosen because all deviations below target are regarded as positive. However, it is understood that the actual deviation is an important factor to consider. Therefore, in addition to displaying the quality, the tool displays the actual size deviation in millimetres, including the sign of the value, plus or minus. Based on those two values the user of the tool can conclude how severe the deviation is. For instance, a negative deviation might be acceptable up to a certain limit, whilst all positive deviations might be unacceptable. This is a decision point in the tool where the user's expertise has to be utilised (see section 5.3.4 OPT Users).

Further, this method calculates the quality at given points in the process. For instance, for area 406, the quality is measured at the conveyor named 406-CV-007, which is the conveyor belt between the secondary screens and the mill feed silo. The actual size at that point is the result of the entire 406 circuit working together. Obviously, the HPGR crusher has alone reduced the size of the particles, which is the main task of the circuit. Still, all other equipment has to be in place and functional in order to bring the material through the circuit. The screens have to split the material accurately, the feeders have to feed, the conveyors have to transport and so forth. Therefore, the quality measured at 406-CV-007 is the quality of the product delivered by the whole 406 circuit, not of any single piece of equipment. The same applies for the other process areas.

6.1.2 AVAILABILITY

The Overall Utilisation is used as the main measure of time usage for a unit in the calculation model. However, if the Overall Utilisation equation for some reason is not applicable (e.g. lack of data), Availability is used. For units which lack sufficient data one cannot tell whether the unit is performing primary production or not. Therefore, the metric Availability is used in OPT for some classifiers, conveyors and feeders instead of Overall Utilisation.

Availability shows the ratio of time that the unit is available for production, i.e. the time it is not standing still and therefore has the possibility to contribute to the production process. However, the metric cannot provide information on the productivity of the equipment, which is presented through the metric Performance.

The Availability is also used as an additional parameter for the crushing unit in OPT. Displaying both the Overall Utilisation and the Availability facilitates the understanding of the distribution of the equipment's total time, since the two metrics use different parameters in their respective equations. What has to be considered is that in most instances the metric Availability will give a higher value than (or equal to) Overall Utilisation, since the Uptime, on which Availability is based, is higher than (or equal to) the Primary Production, on which Overall Utilisation is based. This is because Uptime is more inclusive (also including, for example, idling time) than Primary Production (see Figure 10). This is important to regard when comparing the two metrics.
It is always highly recommended to analyse the parameters included in a metric before comparing between metrics.

Availability is often used as an indication of how well maintenance work is carried out on an asset, i.e. how much of the total time the asset is available for production. However, the efficiency of maintenance work is not the only factor affecting the available equipment time. At MNC, and in almost every single stream process, interlocks are used to control the process units' relative behaviour. This can result in, for instance, an upstream unit standing still due to a breakdown downstream. For this reason the analysis of available time has to take into account the reason for the downtime, which can sometimes be outside the control area of that certain unit.

6.1.3 UTILISED UPTIME

To clarify the relation between Overall Utilisation and Availability, the Master's thesis writers came up with a metric referred to as Utilised Uptime, which is the ratio between Overall Utilisation and Availability (see Equation 26). This ratio shows the percentage of the uptime that is used for primary production. The metric Utilised Uptime highlights the difference between available time and utilised time, which is an unutilised time share and therefore an area of possible improvement.

$$\text{Utilised Uptime} = \frac{\text{Overall Utilisation}}{\text{Availability}} \qquad \text{(Equation 26)}$$

Plants within the mining industry are often battling to increase their equipment availability by improving asset reliability and maintenance quality. This is often done through updating and changing parts more frequently than required by the equipment condition, which often leads to high capital investments. During the Master's thesis writers' time on site, the Utilised Uptime was frequently calculated and analysed, and one important conclusion that could be drawn was that Availability is not a critical problem at MNC. The Utilised Uptime is most often low, and primary focus should therefore be put on increasing the amount of primary production, hence the Overall Utilisation.

The Utilised Uptime metric can be used to read out various information about a unit. A low Utilised Uptime indicates a low utilisation of the time the unit actually has been available for production. This shows a possibility to increase the production of the unit by only increasing the utilised time, without increasing the available time of the unit or reducing the unit downtime. In fact, if the availability increases and the production time is constant, the Utilised Uptime will decrease. A Utilised Uptime of 100% indicates that all available time has been utilised for production. This means that both the available time and the utilised time have to be increased in order to increase production.
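A small numeric illustration of Equation 26 (the figures below are invented, not MNC data) shows how the metric behaves when availability changes while primary production stays constant:

```python
# Illustrative sketch of Utilised Uptime (Equation 26); example hours are invented.
total_time = 720.0             # hours in the period
uptime = 540.0                 # hours the unit was not standing still
primary_production = 378.0     # hours above the primary-production filter level

availability = uptime / total_time                       # 0.75
overall_utilisation = primary_production / total_time    # 0.525
utilised_uptime = overall_utilisation / availability      # 0.70

# If availability rises while primary production is unchanged,
# Utilised Uptime falls, as noted in section 6.1.3:
availability_improved = 600.0 / total_time                        # ~0.83
utilised_uptime_after = overall_utilisation / availability_improved  # ~0.63
```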
6.1.4 MTBF & MTTR

The two metrics Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR) are two useful measures for indicating asset reliability and the quality of the maintenance work. Since they both represent a mean time, the metrics give the average time between failures and the average time to get the asset back into working condition. However, neither of the metrics gives the distribution of the failures or of the time consumed to repair the unit. This is critical information for the site maintenance team in order to be able to improve their asset reliability as well as their routines. That is why MTBF and MTTR should be used as indicators trended over time, together with a systematic analysis of the metrics, as discussed in section 6.3. Continuous logging of downtime, stop location, cause of downtime, etc. is important, not only to make it possible to calculate MTBF and MTTR but also to facilitate the analysis. This type of logging has been automated by the Master's thesis writers at MNC. Further details of the stop reporting procedure can be found in sections 5.5 and 6.5.

The over-time trending of the metrics should be used when comparing the current status with previous results, to understand whether actions taken are improving the asset reliability and the quality of maintenance. The longer the time span reviewed, the more accurate the metrics will be. It is therefore preferable to analyse a time span of 30 days rather than 7 days.
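As a minimal sketch of how MTBF and MTTR can be derived from logged stops, assuming one common convention in which the time between failures is measured from start-up after one stop to the next stop; the timestamps are invented example data, not the MNC log:

```python
# Sketch: MTBF and MTTR from a log of unplanned stops (invented example data).
from datetime import datetime, timedelta

stops = [  # (stop time, start-up time) for one unit
    (datetime(2012, 1, 3, 8, 15), datetime(2012, 1, 3, 9, 0)),
    (datetime(2012, 1, 10, 14, 30), datetime(2012, 1, 10, 16, 10)),
    (datetime(2012, 1, 21, 2, 5), datetime(2012, 1, 21, 2, 50)),
]

repair_times = [end - start for start, end in stops]
uptimes_between = [stops[i + 1][0] - stops[i][1] for i in range(len(stops) - 1)]

mttr = sum(repair_times, timedelta()) / len(repair_times)        # mean time to repair
mtbf = sum(uptimes_between, timedelta()) / len(uptimes_between)  # mean time between failures

print(f"MTTR: {mttr}, MTBF: {mtbf}")
```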
6.1.6 PAIN ANALYSIS
When introducing the new concept, Pain, it
is important to clarify how the metric is
The Pain analysis has been developed by
computed so that no confusion arises. Most
the Master’s thesis students. It provides the
importantly, the Pain does not represent
user with an understanding of the
the total downtime in any sense but the
downtime situation and its distribution
product of frequency (n) and sum of
between total downtime and frequency or
downtimes, which makes Pain n times
error. The usage of the Pain analysis saves
greater than the total downtime. To not
the user the often complex task of
confuse the user, Pain is presented as a
combining frequency and downtime from
unit-less metric.
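A minimal sketch of that computation, with invented stop reasons and downtimes, could look as follows:

```python
# Sketch of the Pain metric as described above: for each stop reason,
# Pain = frequency (n) x sum of downtimes. Example data is invented.
from collections import defaultdict

stop_log = [  # (stop reason, downtime in minutes) drawn from the stop reports
    ("Blocked chute", 35), ("Blocked chute", 50), ("Blocked chute", 20),
    ("Conveyor belt slip", 240),
    ("Shift change", 30), ("Shift change", 30),
]

downtime_per_reason = defaultdict(list)
for reason, minutes in stop_log:
    downtime_per_reason[reason].append(minutes)

pain = {
    reason: len(times) * sum(times)   # frequency multiplied by total downtime
    for reason, times in downtime_per_reason.items()
}

# Top Pains in descending order, as in the Pain module charts
for reason, value in sorted(pain.items(), key=lambda item: item[1], reverse=True):
    print(reason, value)
```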
The input to the Pain analysis is extracted from downtime reports, which are performed only on crushing units; hence the Pain analysis is limited to those units. If a downtime reporting procedure were in place for any other unit, a Pain analysis could be performed for that unit as well. Since the input to the analysis is drawn from downtime reports created by
employees on site, the reporting has to be done properly. It is crucial that the reporting employee knows the process and what to report, i.e. the root cause of the downtime and not the consequence of it.

The human involvement will create a possibility of human errors in this otherwise highly automatic system. It has to be considered that errors can occur. To minimise errors in the reporting, the reporting employees shall be well educated. To ensure this, a workshop was held with the concerned parties on site.

The internal document, Anglo American Equipment Performance Metrics, not only includes OEE definitions but also the downtime categorisation used to allocate downtimes and facilitate tracking and comparison between company sites. This categorisation model is included in OPT and is in that way completely aligned with the company standard. The former reporting system was not aligned with the company standard and its downtime categories. Tracking was therefore not possible, but has been made possible through the Pain analysis and OPT.

When comparing Pain values between units, one should be cautious and not compare different time spans, since the values most often are higher for a longer time span. Also, caution has to be taken when determining an acceptable level for an error. For instance, the downtime named "Shift change" might always have the same, relatively high, level due to a predefined time dedicated to shift change, and might therefore not need as much attention. Of course, the aim shall always be to decrease downtimes, but one should be aware that certain downtimes are more critical than others.

The Pain concept can be refined and developed by giving either component a factor to put more emphasis on it. This can be done if one of the components is found to be of more importance.

6.1.7 SETTING TARGET VALUES

To enable some of the metrics to be calculated, target values need to be determined. This is the case when calculating Overall Utilisation: it has to be defined when the unit is performing primary production. For instance, a crusher might be defined to perform primary production when its power exceeds 140 kW. For other units, the limit can be defined as a speed or a weight, etc. This means that the defined limit to a high degree decides the calculated equipment performance. Therefore it is highly important that accurate limits are defined. If so, the result will be truthful.

It is complex to set target values, especially as parameters are ever changing. According to the Master's thesis writers' proposal in section 5.1.6, it would be a good idea to set targets based on equipment changes. It is understood that small process equipment changes are performed frequently and that targets cannot be changed as frequently; therefore, the suggestion is to review targets periodically. The user needs to find an appropriate interval at which to review the different targets, since it might not be suitable to review all targets simultaneously.

When targets are modified, the results will also change. For instance, if a unit is observed to always have high Performance, meaning it performs close to its targeted rate, and a setting change is made to make it perform even better, the Performance value will most probably change. The user needs to be aware of this when changing targets and analysing data. This is particularly important when data prior to and after a target change is compared, since the results can change drastically when a target is changed.
In the end, it is highly recommended not to compare OEE values, or the values of its included components, between units and sites. If such comparisons are not made, the exact numbers are not as important as the relative numbers for a single unit, which are of much more importance and interest. The handicap of a golfer can be used as an analogy, where one competes against and compares results only with oneself. Still, the ambition should always be to set as accurate target values as possible.

6.1.8 EQUIPMENT CLASSIFICATION

There are four different classifications of the equipment in the calculation model. The reasons for having different classifications of the equipment are two. Firstly, not all equipment is equally complex and requires the same detailed monitoring. Secondly, not all equipment has the same technical set-up and possibilities to measure all parameters. There will always be a demand for measuring all parameters for all units, but keeping the measures to a minimum reduces the risk of information overflow as well as makes it easier for the user to read the output from OPT. Full measures for all types of equipment would also demand an investment, since not all equipment at MNC is fitted with measuring equipment. The Master's thesis writers suggest that a proper evaluation should be performed to find out if there are any missing measure points before any investments are carried out.
Table 11 – Classification of equipment

Classification A – Equipment: Circuits. Details: OEE, Overall Utilisation, Performance, Quality, Availability, Utilised Uptime.
Classification B – Equipment: Comminution units. Details: OEE, Overall Utilisation, Performance, Availability, Utilised Uptime, MTBF, MTTR, Pain.
Classification C – Equipment: Supporting equipment (with available measures). Details: Overall Utilisation, Availability, Utilised Uptime.
Classification D – Equipment: Supporting equipment (without available measures). Details: Availability.
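As an illustration of how the classification could steer what is calculated and displayed for a unit, the sketch below maps each group in Table 11 to its metric set; the dictionary layout and function name are assumptions, while the groups and metric lists come from the table.

```python
# Sketch: metric selection per classification group (see Table 11).
METRICS_BY_CLASSIFICATION = {
    "A": ["OEE", "Overall Utilisation", "Performance", "Quality",
          "Availability", "Utilised Uptime"],                       # circuits
    "B": ["OEE", "Overall Utilisation", "Performance", "Availability",
          "Utilised Uptime", "MTBF", "MTTR", "Pain"],               # comminution units
    "C": ["Overall Utilisation", "Availability", "Utilised Uptime"],  # supporting, with measures
    "D": ["Availability"],                                          # supporting, without measures
}

def metrics_for(unit_classification: str) -> list:
    """Return the metrics to calculate and display for a unit of the given class."""
    return METRICS_BY_CLASSIFICATION[unit_classification]
```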
6.2 THE OVERALL PRODUCTIVITY TOOL (OPT)

During the early parts of the project, the Master's thesis writers developed a small scale OPT prototype to test the calculation model with production data from MNC.

Figure 31 – Project phase two - Develop

The idea of the OPT prototype was to learn as much as possible about the characteristics of the process and to test the calculation model as well as the coding of the software. It was beneficial to run the calculation model at an early stage in the project since it gave the possibility to refine it and get feedback from the process reality. The learning curve was steep for the process knowledge, but even steeper for the art of coding, since the two Master's thesis writers are Mechanical engineering students and not Software engineering students. Throughout the project, the OPT prototype was constantly under development, where module after module was tested and added to the code. This gave a thorough understanding of the dynamics of the code. The coding could definitely have been done differently if it had been done by professionals from the beginning.

The overall concept is well aligned with the company standard of metric definitions, which helps to lower the learning curve for the user of the tool. OPT was developed with a product development approach and hence customised for the end users and their requirements.

At the end of the project the final OPT prototype, as well as the OPT guidelines and manual, were handed over to the end users so that they could start using OPT immediately. The tool has received very good feedback from the users at MNC as well as from the senior team at the Head Office in Johannesburg. The plan is for MNC to use the OPT prototype and provide feedback to the process control team in Johannesburg, which is currently working on a new software platform that will use some parts of OPT. The fact that Anglo Platinum will use parts of the project proves that the outcome is practically useful. Hopefully, this project will fill an existing gap in the productivity improvement work within the organisation.

The development of a suitable way to present the OPT output has been a long iteration process. It was a balancing act to keep it simple and clean while still providing the user with enough, and the right, information to enable the user to perform an analysis and draw accurate and valuable conclusions. There is an infinite amount of information that could be presented in OPT, but the Master's thesis writers have been very selective in the decision on what to present and what to leave out. The information in OPT is presented in two different ways: overview graphs and detailed data. The overview graphs are to be used to get a quick overview of the current status, while the detailed data can be used for more detailed systematic analysis. This applies to all the metrics in OPT, i.e. OEE, Availability, Utilised Uptime, MTBF & MTTR as well as the Pain analysis. The presentation of data in OPT is consistent, which is important since it speeds up the user learning curve as well as facilitates the analysis of large amounts of data.

OPT is built in the Visual Basic Editor and the user interface is Microsoft Excel. There are several benefits from this. OPT extracts data from the process database PI and
performs calculations according to the calculation model and presents the results in Microsoft Excel automatically. Microsoft Excel is very common software, which means that most of the users are already familiar with the interface and are capable of using OPT. It will be easy for the more advanced users to make changes and amendments to the code, but this has been restricted to only certain users to avoid mistakes and corruption of data. For MNC, the use of OPT will not require any investments since Microsoft Excel is already part of the company software package.

The dry section at MNC consists of five production areas: 102, 401, 405, 406 and 407. All these areas are covered by OPT to give a comprehensive view of the dry section's productivity. OPT can be extended to cover all process areas at MNC to give an aggregate view.

6.3 OPT METHOD

The OPT method is based on three parts: User expertise, the 5 Whys and the OPT Guidelines. They form the basis for how OPT should be utilised to gain as much valuable output from it as possible.

Figure 32 – Project phase three - Design

The OPT users' background knowledge of the process is the key to understanding the information presented in OPT. It has been assured that the intended users of OPT at MNC have the required knowledge. If this requirement is not met by the user, the result will most certainly not be as satisfying as it could be.

The Five Whys is an internationally recognised systematic problem solving methodology that has a proven record of finding root causes. The Five Whys is already implemented in the organisation as the main problem solving methodology. It has therefore been incorporated in the OPT method.

To further facilitate ease of use for the users of OPT, the Master's thesis writers have developed structured guidelines to follow when working with OPT. The guidelines consist of three parts: the OPT Manual, the OPT Meeting Procedure and the OPT Action List. The OPT Manual is a complete guide on how to use the tool, with examples of how to interpret various results. The OPT Meeting Procedure proposes a structured way of holding a meeting focused on OPT and its outcome. The OPT Action List is a document to capture and keep track of actions that have evolved from analysing the OPT output.

The OPT Manual will most likely be used during the introduction period of OPT. The manual is a good guide for someone who has not previously worked with OPT and therefore is not familiar with all the metrics. The manual should also be used whenever a new problem is detected, since it addresses different ways to interpret the OPT output. However, OPT is developed to be so user-friendly and intuitive that no manual is necessary, so the intention is that the manual should not be needed constantly.

The meeting procedure document was developed to create a focused meeting with the aim to analyse and find root causes to problems as well as follow up on issued actions. The predefined procedure will hopefully guide the meeting participants through the meeting and help to keep the meeting productive and not too time consuming.
The main reason for using an action list as a supportive technique when working with the tool is that the actions shall be documented and it should be stated who is responsible for what. The emphasis should be put on analysis of the outcome and follow-up of actions taken, since those two areas tend to sometimes be neglected at MNC.

When OPT and the OPT Guidelines were handed over to the organisation, the manual incorporated in the guidelines was highly appreciated by the organisation, since the Master's thesis writers were leaving the site upon project finalisation. The users now have the possibility to further train themselves in using OPT as well as to train new users. It will also work as a support if something in OPT is not working properly.

There will always be a need to analyse the OPT outcome, since merely reading the numbers cannot provide any complete answer. The goal of the OPT analysis is to identify and eliminate root causes of encountered problems that affect productivity. The analysis of the metrics displayed in OPT is suggested to be carried out in two major ways, which can be summarised as follows:

1. From metrics to process. If there is a noticeable change in metric values, find the reasons in the process.

2. From process to metrics. If certain changes are being performed in the process, investigate whether the metric values are changing.

The end users were identified during the time at MNC and the reason for using them is their good process knowledge as well as their cross functional positions, where they can exchange valuable information between their respective departments. The employees chosen to be the main users of OPT belong to the engineering and technical teams. This is considered to be a successful combination of users, since skills from different departments are important for getting everyone focused on the most critical problems. The cross functional collaboration around the tool will hopefully help to increase the general cross functional collaboration in the organisation. It has been observed by the Master's thesis writers that an increased cross functional collaboration is possible and is therefore advisable. An improved cross functional collaboration will create a common focus in the organisation and help the employees to reach their goals, and at the same time reduce the risk of dual work.

What has not been done is a proper test and evaluation period, similar to what was carried out for the calculation model and tool. This is currently being carried out by the organisation itself and the end users of OPT. A proposal for an evaluation project by the Master's thesis writers is under development.

6.4 OEE FOR A GENERAL SINGLE STREAM PROCESS

During the pre-study, the Master's thesis writers developed a general model for calculating OEE in a single stream process. The aim was to keep the model separate from any specific site or company. The process of developing the model gave good knowledge in the subject, which helped later during the development of the final calculation model customised for MNC.

The general model has not been used in OPT because the organisation standards had to be considered. However, the Quality calculation developed by the Master's thesis writers is used both in OPT and in the general calculation model, since there was a gap in the definitions created by the organisation, which prevented the use of the proposed Quality definition.
The model suitable for a general single stream process was tested on historical data and was found to work very well. It would be interesting to test it in another single stream process, for example in a different industry such as the paper and pulp industry.

6.5 CRUSHER AND MILL STOP REPORTING PROCEDURE

The new automatic stop reporting procedure developed by the Master's thesis writers has created a way for the downtimes to be allocated and categorised according to the Anglo American Equipment Performance Metrics downtime categories. This facilitates tracking and comparison of downtimes between company sites, which is important in large businesses.

Beyond the company-wide standardisation benefits, it also facilitates the analysis of downtimes and errors on site, since the reporting is performed daily instead of once a month as before. This shortening of lead-time has resulted in a process where downtimes can be investigated very soon after their occurrence, which helps minimise their negative impact on the process.

Previously, the downtime table was created by manually entering downtime information and manually categorising the downtimes. In the new procedure, a script draws data from the PI-database and organises it into a downtime table. The new downtime reporting procedure can be argued to be more robust since it has eliminated several manual steps.
to evaluate the productivity of the
process units. OEE (Overall
Beyond the company-wide standardisation
Equipment Effectiveness) is such a
benefits, it also facilitates the analysis of
measure. It gives an inclusive view of
downtimes and errors on site since the
the value added by the unit since it
reporting is being performed daily, instead
includes three measures (availability,
of once a month as before. This shortening
performance and quality).
of lead-time has resulted in a process where
downtimes can be investigated very soon
Fourthly, which units to include in
after their occurrence which helps minimise
the ranking need to be defined. The
their negative impact on the process.
selection of units can be done based
on the knowledge assimilated in the
Previously, the downtime table was created
previous steps.
by manually entering downtime
information and manually categorising the
Fifthly, based on the understanding
downtimes. In the new procedure, a script
of the operations, critical process
draws data from the PI-database and
parameters have to be determined.
organises it into a downtime table. The new
Every single unit within a
downtime reporting procedure can be
comminution process has certain
argued to be more robust since it has
parameters to address when looking
eliminated several manual steps.
at productivity. Among those
parameters, some are more critical to
productivity than others. These have
to be identified and will be used further on in the ranking.

Sixthly, a rating based on the critical parameters should be developed. The rating has to take into account the different criticality of the parameters. For instance, safety shall have the highest criticality among the parameters.

To keep the method aligned with the current operations, the parameters within it need to be periodically re-evaluated.

2. How should OEE numbers be calculated in a comminution process?

The traditional OEE calculation was developed for the manufacturing industry and is therefore not suitable for a comminution process. Several changes have to be made to suit a comminution process. The calculation model developed to suit this particular process is presented in section 5.4 OEE for a General Single Stream Process. Given that the required data is available, this method should be suitable in a general case. The major difference from the general OEE calculation is the new way of defining Quality, which is customised for a comminution process.

3. Which factors in the process chain are more critical to productivity - according to the OEE method?

To achieve a high OEE, the included metrics must all be high. Overall Utilisation will be maximised when the unit has a high running ratio, i.e. few stops. This will be facilitated by the good condition of the unit, which can be assured through high quality maintenance.

The Performance will be maximised when the unit is running better than, or as close to, its target rate as possible. To achieve this the unit has to receive a satisfactory and continuous feed, run with optimal settings and be in good condition.

The Quality will be maximised when the unit is producing the right particle size, i.e. minimising the deviation from the target size. The actual particle size will depend upon the quality of the feed, the settings of the unit and the condition of the unit.

4. How can OEE be used as a performance measure of equipment and process performance?

Since OEE includes three measures, i.e. availability, performance and quality, it is a comprehensive performance measure in comparison to single-parameter measures.

It is highly important that the target KPI of each unit is established based on the conditions of that particular unit and that the targets are reviewed on a regular basis. It is important to note that the OEE of a unit shall not be compared to the OEEs of other units, but only to itself. This is crucial since the conditions and target definitions between units may differ.

5. How can a high OEE help to improve SHE (Safety, Hygiene, Environment)?

High OEE measures imply a well running plant. This facilitates the planning of scheduled stops and most definitely results in fewer breakdowns.
A process with few unscheduled stops, i.e. a large proportion of scheduled stops, is a safer process than a process with a large amount of unscheduled stops. This is the case since scheduled maintenance gives the opportunity to plan the maintenance actions and creates better conditions for the performance of safe operations. Hence, a high OEE creates opportunities to improve SHE.

6. How can measuring OEE help to improve productivity?

The measuring will not improve productivity directly, but measuring the individual OEEs of the units in the process will help to identify where in the process bottlenecks exist and will therefore highlight possibilities for improvement. A successful elimination of the identified bottlenecks will result in an improvement in OEE and can consequently give a productivity improvement.

6.7 OBSERVATIONS

One of the reasons for spending a considerable period of time on site was for the researchers to observe the day-to-day activities and gain a greater understanding of the operations. Various observations have been made during the time spent on site. Only those that were deemed important for plant productivity are reported here.

PROBLEM SOLVING

Observations have been made regarding the problem solving procedures in the operations. Although many tasks in the operations involve problem solving, there seems to be no defined structure and documentation of the procedures used in recurring tasks. Granted, most of the employees have many years of experience, but the procedures followed in problem solving processes cannot be refined or improved when evaluated, because there is no proper documentation. In addition to this, it is difficult to train new people because there is no database with information on the problems encountered in the plant and how they were resolved. For instance, the 5 WHYs method is frequently mentioned as the correct method to follow; however, no documentation has been presented on how it has been used to resolve problems on the plant. From an outsider's perspective, it does not seem to be used to the same extent as planned.

Clear problem solving procedures should be developed and communicated to the employees who are intended to master and apply the methods. In cases where education is required to use the methods, a concerted effort should be made to provide it. The existing problem solving method (the 5 WHYs) is a suitable method which can help to eliminate root causes. A proper follow-up of the usage of the communicated method should be done.

FOLLOW-UP

The plant has done well in following up on most of the issues that have arisen. It is commendable for a plant with such a large capacity to carry out most of the follow-up tasks as they do. However, the documentation and formal reporting back on the actual effects of performed process changes, and on tasks targeted for follow-up that need a review, is not stringent. This leads to failure to attend to some of the cases targeted for follow-up. If the records and report-backs do not capture the follow-up information and the effects of the change, the plant would take it for granted that follow-ups are done continuously even though some of the key matters are not getting any attention. This can be the case
for some major process changes, such as changing the liner in a crusher or adjusting the crusher gap settings.

The responsibility for follow-up should be shared throughout the entire organisation. When a particular recommendation for process changes is made, a person should be assigned to implement proper evaluation and follow-up. This will make it easy for all involved to understand the effects of the process change.

Follow-ups are not only important after major process changes, but also for regular tasks assigned to people. These can be listed as action items for weekly meetings and a tick-box approach can be taken at such meetings. This should be done to capture a record which may be very useful in providing insight into jobs that take a long time and the reasons for such delays, which can feed directly into planning meetings. Such a record can also provide information on problematic areas of the plant which may require more resources over time. This can result in a better understanding of recurring tasks as well as a learning opportunity for the other meeting participants.

INTER-DIVISIONAL COMMUNICATION & COOPERATION

Throughout the organisation there is a common drive to produce concentrate as effectively as possible and to maintain the plant in a good operational state. This is clearly visible even to external observers like the researchers who prepared this report. However, the plant is fairly large and it takes a long time for information to reach all the relevant people in various sections of the operations. There is an opportunity to implement information structures that can help visualise information between divisions of the operation. This will also reduce duplication of efforts, because the divisions will have better insight into the activities taking place in other divisions at the same time. Further, it will work as a learning opportunity for the persons not directly involved but well informed. In that way they can gain a greater understanding of the work of other divisions.

Although a lot has been done to promote communication and cooperation, there is still a divide between divisions. There is still a divide in reporting structures and development of tasks, which leads to duplicated efforts. The communication and cooperation between divisions should be enhanced and the duplicated efforts should be eliminated. This can be done by, for instance, holding common meetings involving only the people relevant for the discussion. It is recommended to keep the meetings action oriented and focused on the dedicated subject in order to optimise the number of persons involved and to minimise time spent on meetings.

JOB CARDS

The current system for maintenance relies on SAP to generate job cards, which in most cases works well. However, some areas in the process do not have access to job cards because some tasks lack a dedicated functional code in the system. The problem that arises from this is that some jobs are performed without job cards and therefore cannot be easily tracked. Further, all existing job cards should be continuously reviewed to keep them aligned with the continuously changing process.

GOVERNANCE

MNC has a defined structure in terms of the sections of engineering, technical/metallurgy and production, which is commendable. It is also evident that
63 |
Chalmers University of Technology | CHAPTER 6 – Discussion & Conclusion
when repair work is required, teams from The outputs from the tool can help create
all sections are involved, which reinforces standards which can be formulated and
the team spirit that is present at the implemented in the organisation. Follow-
concentrator. However, it has been ups on matters arising from using the tool
observed that due to the integration can be structured and implemented with
between those teams, some areas of buy-in from all divisions.
responsibility for certain categories of
employees seem to be undefined. In a case
when a task does not require handling in a
6.8 RECOMMENDATIONS
routine manner, it can easily fall under no
Recommendations for the outcome of this
one’s area of responsibility and this can
project mainly concern the usage and
create a problem. A good example of this
future development of the Overall
would be equipment failure due to an
Productivity Tool (OPT). The development
unidentified problem. In this case it is
of OPT should continue before the new
better for the maintenance division to focus
platform is completed, it is highly
on the required repair work, while
recommended that the current users of
production teams continue with production
OPT continue to provide feedback on how
tasks. This will minimise the impact of the
the tool is used and how it can be improved
repair work on production and will also
by suggestions for improvements can be
ensure that the responsibilities for various
incorporated in the follow up version. The
tasks are streamlined. It will also assist in
users are encouraged to thoroughly test the
eliminating duplication of efforts on the
different methods suggested by the
same task. This mode of operation can only
Master’s thesis writers since these have not
be achieved if all the sections have full staff
been fully evaluated on a production plant.
complements and all the teams are skilled
Another important aspect of the methods is
in specific tasks. It also requires a common
that they should be tested by several
decision making platform and approach.
different users and not only the main users
to provide information on how user
UTILISING OPT IN THE
friendly the tool is. It is crucial to do this in
ORGANISATION
order to receive feedback from experienced,
as well as new, users before further
In addition to the suggested general
development options proposed.
improvements, a new weekly meeting
should be initiated; this should involve key
people from all divisions. The cross-
6.9 FUTURE RESEARCH
divisional meeting participants should use
the tool developed in this Master’s thesis to The Master’s thesis writers have found
create a continuous improvement forum. many interesting areas of research and
This will allow plant personnel more would like to propose a few subjects for
opportunities to communicate and resolve future research.
plant communication problems seamlessly.
‐ Evaluate and develop the methods
Having cross-divisional participants in the
suggested in phase three of this project
meetings when applying the tool will
since this has not been done in the
provide a good platform for tracking and
project due to the limited available time.
learning from actions taken which will lead
to an increase in productivity.
OPT Manual
Developed by: Anton Kullh & Josefine Älmegran, 2012
3. To update the OEE calculations for the previous month, click the grey button named “Previous
Month” and wait until a dialogue box opens and confirms the update.
4. To update the Pain analysis for the previous month, go to the sheet named “Pain” and click the
grey button named “Previous Month” and wait until a dialogue box opens and confirms the
update.
Note that the Pain analysis for the previous month has to be updated before updating the
previous 7 days in order to display the stop table for the previous 7 days under the OEE Table sheet.
5. To update the Pain analysis for the previous seven days, go to the sheet named “Pain” and
click the grey button named “7 days” and wait until a dialogue box opens and confirms the
update.
6. OPT is now updated according to the dates displayed at the top of each sheet under “Start” and
“End”.
Note that the document shall be saved before closing down.
OEE CALCULATIONS
Two sheets in the Overall Productivity Tool (OPT) concern OEE calculations; those are named
“OEE” and “OEE Table”.
OEE SHEET
The OEE sheet displays charts with the OEE for the circuit and the crushing unit in the
monitored area. It also displays charts with the Overall Utilisation, Performance, Quality,
Availability, and Utilised Uptime for all units included in the monitored area.
The charts are sorted in ascending order to visualise which units currently have the lowest
Overall Utilisation, Availability and Utilised Uptime. The abbreviations of the units are explained
in the OEE Table sheet.
To the right of the charts, boxes with explanations of the charts and their metrics are provided.
At the top of the sheet, the monitored time intervals are displayed. For information on how to
update these and the tables to the current end time, see section How to Update Calculations.
OEE TABLE SHEET
The OEE Table sheet provides transparency to the calculations and can be used to get a more
thorough understanding of the charts displayed in the OEE sheet.
The OEE Table sheet displays the components of the OEE calculations categorised by unit type
(Crusher, Classifiers, Feeders, and Conveyors).
The definition of any displayed component can be viewed by hovering over the component name
(see Figure 3, where the pointer is hovered over “Availability”).
Figure 3 – Information appearing when hovering over Availability
All displayed rates (except for Utilised Uptime) have a colour code which visualises the current
status. Green is for Satisfactory, yellow for Poor and red for Alarming.
The colour limits for the different units can be seen at the bottom of the sheet. The limits shall be
set based on the business targets of those values and should only be changed by the
administrator of OPT.
The cells with a grey background colour are target values which shall be changed if process
changes resulting in target changes are performed. The target values should be changed only by
the administrator of OPT.
All set targets shall be reviewed if the process has been changed in such a way that the current
target parameters are invalid.
Parameters to be reviewed:
‐ Target rates (tph)
‐ Target particle size
‐ Running definition limits for units
‐ Primary production limits for units
At the top of the sheet, the monitored time intervals are displayed. For information on how to
update these and the tables to the current end time, see section How to Update Calculations.
PAIN ANALYSIS
Two sheets in the Overall Productivity Tool (OPT) concern the Pain analysis; those are named
“Pain” and “Stop Table”.
PAIN SHEET
Pain is a way of visualising the combination of frequency and downtime of stops that have occurred in the
monitored area. The Pain sheet displays charts with the top 6 Pains in the monitored area.
The charts are sorted in descending order to visualise which stop reasons are currently
causing the largest Pain. The stop reasons are labelled below each bar in the chart. The unit for
the y-axis is thousand minutes; however, Pain is displayed as a unitless metric.
At the top of the sheet, above the charts, Mean Time Between Failures (MTBF) and Mean Time
To Repair (MTTR) are displayed for the units.
At the top of the sheet, the monitored time intervals are displayed. For information on how to
update these and the tables to the current end time, see section How to Update Calculations.
More detailed stop information, such as stop time, start-up time, duration of stop, comment and
downtime codes, can be found in the Stop Table sheet.
STOP TABLE SHEET
The Stop Table sheet presents detailed stop information drawn from stop reporting through the
PI database. The information that can be viewed is as follows:
‐ Stop time: The time the unit stopped
‐ Start-up time: The time the unit started up after the stop
‐ Duration: The duration of the stop
‐ Stop reason: The reason for the stop
‐ Manually entered comment: Possible manually entered comment by stand-by official
‐ Downtime categories
o Downtime sub-category code
o Downtime sub-category name
o Downtime category code
o Downtime category name
‐ Scheduled/Unscheduled: Indicates if the stop was scheduled or not
The downtime categories are used in order to facilitate the allocation of stops in alignment with
Anglo American Equipment Performance Metrics Time Model.
At the top of the sheet, the total number of stops and the total downtime in the chosen time
interval are displayed. At the very top of the sheet, the monitored time intervals are
displayed. For information on how to update these and the tables to the current end time, see
section How to Update Calculations.
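As an illustration of how such stop records can be handled programmatically, the sketch below models one stop entry and summarises the stop count and total downtime. This is a minimal Python example and not part of OPT itself; the field names are assumptions based on the list above.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class StopRecord:
        # Field names are illustrative; they mirror the columns listed above.
        stop_time: datetime
        start_up_time: datetime
        stop_reason: str
        comment: str
        downtime_category: str
        scheduled: bool

        @property
        def duration_minutes(self) -> float:
            # Duration of the stop in minutes.
            return (self.start_up_time - self.stop_time).total_seconds() / 60.0

    def summarise(stops):
        # Returns the total number of stops and the total downtime in minutes,
        # i.e. the two figures shown at the top of the Stop Table sheet.
        return len(stops), sum(s.duration_minutes for s in stops)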
DEFINITIONS OF METRICS
The following metrics are used in OPT and have to be understood in order to utilise OPT as
effectively and correctly as possible.
OEE - OVERALL EQUIPMENT EFFECTIVENESS
OEE is a metric that displays how effectively a unit or operation is utilised. OEE is calculated as
the product of Overall Utilisation, Performance and Quality.
OEE = Overall Utilisation x Performance x Quality
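A minimal sketch of the calculation, assuming the three factors are available as fractions between 0 and 1 (the example values are illustrative only):

    def oee(overall_utilisation, performance, quality):
        # OEE = Overall Utilisation x Performance x Quality
        return overall_utilisation * performance * quality

    # Example: 0.65 x 0.80 x 0.95 gives an OEE of roughly 0.49, i.e. about 49 %.
    print(oee(0.65, 0.80, 0.95))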
OVERALL UTILISATION
The Overall Utilisation is the percentage of the total time that the unit is utilised for primary
production. It is the ultimate performance indicator of how total calendar time is utilised.
Overall Utilisation = Direct Operating Time / Total time
Direct Operating Time (T300): Time the unit is performing primary production activities
Total time (T000): Total time in chosen time interval (24/7)
PERFORMANCE
The Performance is the production rate at which the operation runs as a percentage of its
targeted rate.
Performance = Actual Production Rate / Target Production Rate
Actual Production Rate = Actual Production Achieved / Primary Production
Actual Production Achieved: Actual tonnes produced during chosen time interval
Primary Production (P200): Time equipment is utilised for production.
For time definitions, see Figure 4.
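A sketch of the Performance calculation, assuming the tonnes produced, the primary production time and the target rate are known (the numbers are illustrative, not plant data):

    def performance(actual_tonnes, primary_production_hours, target_rate_tph):
        # Actual Production Rate = Actual Production Achieved / Primary Production (P200)
        actual_rate_tph = actual_tonnes / primary_production_hours
        # Performance = Actual Production Rate / Target Production Rate
        return actual_rate_tph / target_rate_tph

    # Example: 9 000 t in 10 h against a 1 200 tph target gives 900 / 1 200 = 0.75 (75 %).
    print(performance(9000, 10, 1200))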
QUALITY
The Quality looks at the P80 particle size and shows to what extent the particle size is below the
targeted size. It compares the Actual Particle Size at a certain point in the process to the Target
Particle Size. The Quality is defined as the mean deviation above Target Size as a percentage of
the Target Size. This implies that all particles below target size result in zero deviation. To
get the Quality and not the deviation, the ratio is subtracted from 1.
Quality = 1 - Mean deviation from Target Size / Target Size
        = 1 - [ (1/n) x Σ (Deviation from target size)_i ] / Target Size, where the sum runs over the n samples i = 1, …, n in the chosen time interval
For time definitions, see Figure 4.
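A sketch of the Quality calculation for a set of measured P80 values; only deviations above the Target Size contribute (the sample values are illustrative):

    def quality(p80_samples, target_size):
        # Particles at or below the Target Size give zero deviation.
        deviations = [max(size - target_size, 0.0) for size in p80_samples]
        mean_deviation = sum(deviations) / len(p80_samples)
        # Quality = 1 - Mean deviation above Target Size / Target Size
        return 1.0 - mean_deviation / target_size

    # Example: target 12 mm, samples 11, 13 and 14 mm -> mean deviation 1 mm -> Quality ~ 0.92.
    print(quality([11.0, 13.0, 14.0], 12.0))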
AVAILABILITY
The Availability is the percentage of the total time that the unit is available for production
activities.
Availability = Uptime / Total time
Uptime (T200): Time the unit is available for production activities
Total time (T000): Total time in chosen time interval (24/7)
For time definitions, see Figure 4.
UTILISED UPTIME
The Utilised Uptime is the percentage of the available time that the unit is being utilised for
primary production.
Utilised Uptime = Direct Operating Time / Uptime
Figure 4 – Anglo American Equipment Metrics Time Model
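The relations between the time-based rates can be summarised in a short sketch; the hours used below are made-up example values:

    def availability(uptime_h, total_h):
        return uptime_h / total_h                 # T200 / T000

    def overall_utilisation(direct_operating_h, total_h):
        return direct_operating_h / total_h       # T300 / T000

    def utilised_uptime(direct_operating_h, uptime_h):
        return direct_operating_h / uptime_h      # T300 / T200

    # One week (168 h) with 150 h of Uptime and 120 h of Direct Operating Time:
    # Availability ~ 0.89, Overall Utilisation ~ 0.71, Utilised Uptime = 0.80.
    print(availability(150, 168), overall_utilisation(120, 168), utilised_uptime(120, 150))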
PAIN
Pain is calculated as the product of frequency of the error and total stop time caused by the
error.
Pain = Frequency of error x Total stop time caused by error
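A sketch of how the Pain per stop reason could be computed from a stop list, and how the top 6 could be selected for the Pain chart (a simplified illustration, not the OPT implementation itself):

    from collections import defaultdict

    def top_pains(stops, n=6):
        # stops: iterable of (stop_reason, downtime_minutes) tuples
        frequency = defaultdict(int)
        downtime = defaultdict(float)
        for reason, minutes in stops:
            frequency[reason] += 1
            downtime[reason] += minutes
        # Pain = frequency of error x total stop time caused by the error
        pains = {reason: frequency[reason] * downtime[reason] for reason in frequency}
        return sorted(pains.items(), key=lambda item: item[1], reverse=True)[:n]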
MEAN TIME BETWEEN FAILURES (MTBF)
The MTBF is the average elapsed time between failures of the unit.
MTBF = Uptime / Number of stops
Uptime (T200): Amount of time the unit is available for production activities
Number of stops (D000 events): The number of downtime events that occurred during the period of time viewed
MEAN TIME TO REPAIR (MTTR)
The MTTR is the average time required to repair the failed unit.
MTTR = Equipment Downtime Time / Number of stops
Equipment Downtime Time (D000): Downtime that renders the equipment inoperable
Number of stops (D000 events): The number of downtime events that occurred during the period of time viewed
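Both averages follow directly from the uptime, the equipment downtime and the number of stops; a brief sketch with illustrative figures:

    def mtbf(uptime_h, number_of_stops):
        # MTBF = Uptime / Number of stops
        return uptime_h / number_of_stops

    def mttr(equipment_downtime_h, number_of_stops):
        # MTTR = Equipment Downtime Time / Number of stops
        return equipment_downtime_h / number_of_stops

    # Example: 150 h of uptime and 18 h of downtime over 12 stops
    # give an MTBF of 12.5 h and an MTTR of 1.5 h.
    print(mtbf(150, 12), mttr(18, 12))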
EXAMPLES OF HOW TO INTERPRET VALUES

The outcome of the Overall Productivity Tool can be analysed and interpreted in several different ways. This section explains some fundamentals for analysing the metrics. These examples might not always be valid but can provide the user with an idea of what information can be drawn from OPT.

5 WHYS
A general recommendation when analysing the outcome of OPT is to use the 5 WHYs method. The goal is to find the root cause of the problem. When the root cause is found, a conclusion of what to do should be drawn and an action to resolve the problem should be taken. If the encountered problem is complex, an Ishikawa diagram can be used to find multiple root causes (see Figure 5).

Figure 5 – Ishikawa (or fishbone) diagram to help find multiple root causes

OEE
OEE is not just one number; it can be up to four numbers – the OEE, Overall Utilisation, Performance and Quality. It is important to look into all the factors when analysing an OEE number. If an OEE number found in the OEE sheet is of interest, it can be viewed in more detail in the OEE Table sheet. There, all components of the OEE can be seen and analysed individually.

LOW OEE
A low OEE indicates a non-effectively utilised unit or circuit. To help increase the OEE, the factors included have to be known. The included factors can be seen in the OEE Table sheet. The components of a factor can be seen when hovering over it.

HIGH OEE
A high OEE indicates an effectively utilised unit or circuit. Even though a unit or circuit has a high OEE, it should not be neglected. A well performing unit or circuit can provide information about how to run a unit or circuit effectively. The user should learn from this and apply it to other units and circuits.

OVERALL UTILISATION
The Overall Utilisation shows to what extent the unit has been utilised for production. It is the ultimate performance indicator of how total calendar time is utilised.

LOW OVERALL UTILISATION
A low Overall Utilisation indicates a small proportion of Direct Operating Time, which is when the unit is performing production activities. If the unit has not been utilised, it could be due to internal issues or factors outside of its boundaries, such as low feed. The Overall Utilisation can be increased by extending the Direct Operating Time, i.e. the time when the unit is actually producing.

HIGH OVERALL UTILISATION
A high Overall Utilisation indicates a large Direct Operating Time, which is when the unit is performing production activities. A unit with high Overall Utilisation can provide useful information on how this can be achieved. The user should learn from this and apply it to other units.

PERFORMANCE
The Performance shows the production rate at which the unit or circuit runs as a percentage of its targeted rate. This means that the targeted rate has a large influence on the achieved Performance; therefore, it is highly important that the Target Rate is carefully determined. Otherwise, the Performance measure will be misleading.

LOW PERFORMANCE
A low Performance indicates a Production Rate far below the targeted rate during the primary production time. This can be due to either a very long production time or a low production achieved. To increase the Performance, a higher production has to be achieved during the production time or the same amount of production has to be reached in a shorter time.

HIGH PERFORMANCE
A high Performance indicates that the production rate is close to the targeted Production Rate. A unit or circuit with high Performance can provide useful information on how this can be achieved. The user should learn from this and apply it to other units and circuits.

PERFORMANCE >100%
The Performance can exceed 100%. This will occur when the Production Rate is greater than the Target Rate. This indicates that the target rate has to be reviewed. If process changes have been made in such a way that the current target parameters are invalid, the Target Rate shall be adjusted accordingly.

QUALITY
The Quality looks at the P80 particle size and shows to what extent the particle size is below the targeted size. It looks at the mean particle size deviation above the targeted size. This implies that all particles below target size result in zero deviation, hence 100% Quality.

LOW QUALITY
A low Quality indicates that the mean particle size is far above the Target Size. The actual particle size and mean deviation are displayed in the sheet “OEE Table”. If the Quality is low, the downstream process might be affected and it would be beneficial to look at the performance of the downstream units.

HIGH QUALITY
A high Quality indicates a mean particle size below the Target Size. If all particle sizes are below the targeted size, the Quality will be 100%. The actual particle size and mean deviation are displayed in the sheet “OEE Table”.

AVAILABILITY
The Availability shows to what extent the unit has been available for production. It does not have to be used during that time; however, it has to be available. Availability has a strong connection to Overall Utilisation. Those two metrics are complementary since they present the unit running time in two different aspects. The Availability is in most cases the larger number since it includes a broader span of time, i.e. all the time the unit has been switched on, whereas the Overall Utilisation only includes the time the unit has been performing production.

LOW AVAILABILITY
A low Availability indicates a large proportion of non-running time. The Availability can be increased by extending the unit Uptime, which implies reducing the unit downtime. For the crushing units, the downtimes can be seen in the Stop Table sheet.

HIGH AVAILABILITY
A high Availability indicates a large proportion of running time. This implies that the downtime and non-controllable time are both low. A unit with high Availability can provide useful information on how this can be achieved. The user should learn from this and apply it to other units.

UTILISED UPTIME
To show the ratio between Overall Utilisation and Availability, a rate called Utilised Uptime is displayed in OPT. The Utilised Uptime is the proportion of the available time that has been utilised for production.

LOW UTILISED UPTIME
A low Utilised Uptime indicates a low utilisation of the time the unit actually has been available for production. This shows a possibility to increase the production of the unit by only increasing the utilised time, without increasing the available time of the unit or reducing the unit stop time. In fact, if the Availability increases and the production time is constant, the Utilised Uptime will decrease.

HIGH UTILISED UPTIME
A high Utilised Uptime indicates a high utilisation of the time the unit has been available for production. A Utilised Uptime of 100% indicates that all available time has been utilised for production. This means that both the available time and the utilised time have to be increased in order to increase production. A unit with high Utilised Uptime can provide useful information on how this can be achieved. The user should learn from this and apply it to other units.

PAIN
The Pain charts show the top 6 highest Pains for the crushing unit.

HIGH PAIN
A high Pain of a failure indicates one of the following:
‐ High frequency of failure
‐ Large downtime caused by failure
‐ Both high frequency and large downtime caused by failure
High Pains point out what areas cause most problems and should be investigated and resolved.

MTBF
Mean Time Between Failures (MTBF) shows the average elapsed time between failures of the unit.

LOW MTBF
A low MTBF indicates that the unit fails frequently. The aim is to maximise the MTBF. Actions should be taken to investigate how to solve the problem.

HIGH MTBF
A high MTBF indicates that the unit does not fail frequently. A unit with high MTBF can provide useful information on how this can be achieved. The user should learn from this and apply it to other units.

MTTR
Mean Time To Repair (MTTR) shows the average time required to repair the failed unit.

HIGH MTTR
A high MTTR indicates that the downtime per failure is long. The aim is to minimise the MTTR. Actions should be taken to investigate how to solve the problem.

LOW MTTR
A low MTTR indicates that the downtime per failure is short. A unit with low MTTR can provide useful information on how this can be achieved. The user should learn from this and apply it to other units.
Teleoperation of Autonomous Vehicle
With 360° Camera Feedback
Master’s thesis in Systems, Control and Mechatronics
OSCAR BODELL
ERIK GULLIKSSON
Department of Signals and Systems
Division of Control, Automation and Mechatronics
Chalmers University of Technology
Abstract
Teleoperation is using remote control from outside line of sight. The operator is often
assisted by cameras to emulate sitting in the vehicle. In this report a system for tele-
operation of an autonomous Volvo FMX truck is specified, designed, implemented
and evaluated. First a survey of existing solutions on the market is conducted, finding
that there are a few autonomous and teleoperation solutions available, but they
are still new and information is sparse. To identify what types of requirements are
needed for such a system and how to design it a literature study is performed. The
system is then designed from the set requirements in a modular fashion using the
Robot Operating System as the underlying framework. Four cameras are mounted
on the cab and in software the images are stitched together into one 360◦ image that
the operator can pan around in.
The system is designed so that the operator at any time can pause the autonomous
navigation and manually control the vehicle via teleoperation. When the operator's
intervention is completed, the truck can resume autonomous navigation. A solution
for synchronization between manual and autonomous mode is specified. The truck is
implemented in a simulation where the functionality and requirements of the system
are evaluated by a group of test subjects driving a test track.
Results from simulation show that latencies higher than 300 ms lead to difficulties
when driving, but having a high frame rate is not as critical. The benefit of a full
360◦ camera view compared to a number of fixed cameras in strategic places is not
obvious. The use of a head mounted display together with the 360◦ video would be
of interest to investigate further.
Keywords: Teleoperation, autonomous vehicle, surround view, remote steering.
1 Introduction
Autonomous vehicles are one of the more exciting areas of the automotive industry
today. The general perception is a fully automated car where the driver can handle
other matters while the car is driving by itself and this is starting to become a
reality with passenger cars. However construction machines and industrial vehicles
are different. In many cases their purpose is not to transport the driver from one
point to another as in a car but instead perform tasks at a work site. Even if the
equipment will be able to carry out the work without support from an operator
there is a need for supervision and to be able to take control remotely if something
unexpected happens. For this to function a system has to be designed, configured
and built as a control center. In this control center an operator will be able to
supervise the vehicle and monitor the status of the vehicle. If needed the operator
can take control and give the vehicle appropriate commands for it to be able to
continue its task.
Consequently, in autonomous operation, no driver is there to supervise and operate
the vehicle if something goes wrong. An example of a scenario is when a vehicle gets
stuck behind an obstacle and cannot find its way around it. Instead of deploying
an operator to go to the vehicle and drive it, this can be done remotely. This is
in many cases safer and more efficient. Therefore teleoperation is an helpful tool
before the vehicles are fully autonomous and can handle all types of obstacles on
their own.
The work will focus towards a generic solution that can be scaled and utilized
on different vehicles for several applications. The design will be flexible and the
system will be implemented towards an all-terrain truck where it will be tested and
evaluated.
1.1 Purpose & Objective
The purpose of this thesis is to specify requirements for, design and implement a
prototype of a system for teleoperation of a normally autonomous vehicle. The
existence of standards will be investigated. If present, the standards will be adhered to
in the development and implementation of the system. In any case a general and
scalable solution that can be used on several types of vehicles will be developed.
An interface towards the autonomous vehicle will be created together with a control
center with controls and information from the vehicle. The system can be used
when the vehicle cannot navigate autonomously or is in a situation when it is more
convenient and/or safe to operate the vehicle remotely.
1.2 Main Research Questions
In order to fulfill the purpose, the following questions will be answered:
• How shall camera images, maps and sensor data be presented in order to
maximize the safety and efficiency of the operation?
• At what level does the operator control the vehicle? As if sitting inside or are
more high level commands (i.e. "Go to unloading location") issued? How do
delays in the communication channel affect the choice of control?
• Are there existing standards for remote control of autonomous vehicles?
• How can the system be scalable to a variety of different sensors depending
on application, and what are the requirements of the communication link for
different types of sensors?
• How will the vehicle switch between autonomous operation and manual control?
What system has priority in different situations, what kind of handshakes
are needed?
1.3 Boundaries
The communication link from the control center to the vehicle itself will not be
implemented, but requirements will be specified. No autonomous functions will
be developed, those are assumed to already exist in the vehicle. Maps used for
autonomous navigation and presentation to the user are assumed to exist everywhere
the vehicle is driven.
For teleoperation and autonomous control, only vehicles will be investigated since
they operate in a similar fashion, often with steering wheel and pedals or joysticks.
The implementation and evaluation will be carried out on an all-terrain truck with
no tasks other than transporting and unloading goods.
The work will be carried out during 20 weeks in the spring of 2016 on readily
available hardware. Due to the limited time frame, open source solutions such as
ROS and OpenCV will be used to speed up the development.
1.4 Method
Initially a literature study of teleoperation of autonomous work vehicles and a market
survey of existing solutions (see 3 - Existing Solutions and Standards) were
performed to gain knowledge of the different parts and aspects in the system and
what insights can be gained from previous solutions. Several manufacturers have
models that drive autonomously and are expanding to more models and features.
However, most projects are still in the development and testing stage. Theory regarding
latency, video feedback and additional sensor information that is relevant to the
operator has been gathered, including different types of communication technologies
to transfer commands and sensor data.
In order to build a prototype system a certain number of requirements have to be
specified to aid the system design. These requirements can be found in 4 - System
Requirements where each requirement is prioritized from one to three depending on
the necessity in system design, and it is also specified how the requirement will be
evaluated.
The prototype system is divided into two parts, the control center and the actual
vehicle which is presented in 5.1 - System Architecture together with presentation of
each subsystem in 5.3 - Subsystems. The choice of dividing the system into smaller
subsystems is to make the solution flexible and scalable, since different subsystems
can be added or removed depending on the application and the sensors available.
Evaluation of the system and its subparts is done in a simulation environment de-
scribed in section 5.5 - Gazebo Simulation . In this environment it is possible to
test each part of the system and evaluate against the requirements. The simulation
is also used to evaluate if the supporting functions are beneficial for the operator
together with complete system design evaluation. The ability to make changes to
the system and measure the effect on performance by increasing latency, varying
quality in the video feed, using different support functions and limiting the types of
control input is implemented.
The results gained from the evaluation are then compared to the specified require-
ments to determine whether these are met or not, in terms of both driving experience and system design
in 6 - Results and 6.3 - Evaluation. From the results, conclusions are drawn on how
teleoperation should be implemented in an already autonomous vehicle, and on the
experience gained from the evaluations and tests. This together with
thoughts on future work is presented in 7 - Conclusion & Future Work.
1.5 Thesis Contribution
There are several autonomous or teleoperated work vehicle solutions today. How-
ever, the solutions are often implemented on a specific vehicle type by the original
equipment manufacturers, and retrofit solutions typically lack the ability to control
autonomously. Therefore an integrated system is proposed where both autonomous
and teleoperated technologies are combined into one control center for monitoring
and control of autonomous vehicles. The ability to pause an ongoing autonomous
mission and manually control the vehicle and then resume the mission is an impor-
tant function. The system is scalable and flexible depending on the type of vehicle and
application. The ability to define new autonomous tasks or sequences while driving
the vehicle remotely has not been seen in any other solution today which is a fea-
ture that will be beneficial for the operator. The stored autonomous tasks could be
driving a path as in this case, but also control of equipment etc. Different operator
assists are evaluated to assess which ones that are important for the operator to
maneuver the vehicle safely and precise.
Existing solutions for teleoperation use multiple fixed cameras that the user can
choose from. Switching between different cameras causes the operator to have to re-
orientfromthenewpointofview. Theproposedsystemusesa360◦ videoimagethat
the operator can pan in, as if looking around in real life. This is expected to improve
telepresence. Variations in frame rate and latency are explored in order to investi-
gate how much is acceptable for a safe and efficient operation.
1.6 Thesis Outline
This thesis paper is divided into seven chapters. The chapter 1 - Introduction is
followed by chapter 2 - Application Overview. It briefly describes the setup and
some key features. Further 3 - Existing Solutions and Standards follows where the
results of a market survey are presented. The specified 4 - System Requirements
are then presented with background and evaluation method. Then the 5 - System
Design is described first with system architecture followed by the subsystems and
ends with simulation set-up. This is followed by 6 - Results where the results from
the simulation are given and lastly the conclusions are stated in 7 - Conclusion &
Future Work.
2 Application Overview
The proposed system in this paper is designed to be applicable to a variety of
different vehicles and machines. However, it will be implemented and tested on an
all-terrain haulage truck used in a known, closed-off environment. The aim of the
system is to be able to control the vehicle without being in the physical vicinity of
it. This will be done by relaying information to the operator from the vehicle such
as video streams and sensor data. The operator will be able to send control inputs
to the vehicle in order to maneuver it. The presented implementation consists of
functionality and software that can be used for teleoperation control and can be run
on a regular personal computer. The primary purpose is to evaluate the requirements
set and help answer the research questions stated in introduction. Hence it is not a
final control center ready for commercialisation.
2.1 Level of Autonomy
There are many ways of controlling a vehicle, but the most common way is still with
the operator sitting inside the vehicle driving it manually. Other ways are remote
and autonomous control, and these technologies are often divided into three levels
of control. The first is manual remote control [1] which contains no autonomous
functions. The operator controls the vehicle from a near distance where the vehicle
can be viewed directly while operated. This is often referred to as line of sight
control.
The next level is teleoperation where the operator is located off site and some sort
of monitoring is needed i.e. cameras, force feedback control or other sensor data.
Teleoperation [2] can both be local, where the operator is located close to the machine
but not in visible range. It can also be global where communication needs to be
relayed via the Internet or by a satellite link. Different kinds of autonomous tasks
can be used by the operator at this level.
The third step is a fully autonomous vehicle [3] that can carry out tasks on its own
with no guidance of an operator. The requirements are significantly higher at this
level in terms of positioning, operation and safety. The tasks can be predefined
and depending on situation the vehicle must be able to make its own decisions
[4].
2.2 Evaluation vehicle
The vehicle that is used for the prototype is a Volvo FMX [5] construction truck
equipped with a variety of additional sensors such as an IMU (Inertial Measurement
Unit) to measure orientation, LiDAR (Light Detection And Ranging) sensors to
measure distance to surroundings and centimeter precision positioning using RTK-
GNSS (Real Time Kinematic - Global Navigation Satellite System). Details about
the sensors can be viewed in section 5.3.5 - Additional Sensors . The truck has
autonomous capabilities implemented and can follow a pre-recorded path with the
position, orientation and desired speed of the truck at discrete waypoints along the
path, called bread crumbs [6]. As of right now there exists no other path planning
except manually driving and recording a path. Actuators and interfaces for steering
and controlling brake and throttle are available. The vehicle is implemented in the
simulation software Gazebo (see section 5.5 - Gazebo Simulation ) to be used for
the evaluation, a screenshot can be seen in Figure 2.1.
Figure 2.1: Screenshot of the evaluation vehicle in the Gazebo simulation.
2.3 Cameras and stitching
In addition to the sensors already mounted on the truck, it is equipped with a number
of cameras mounted so that a full surround view from the truck will be achieved.
These camera images are then stitched together to a single image containing all
camera streams. The operator will then be able to pan around in this image in
order to emulate looking around while sitting in the vehicle.
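A minimal sketch of such a stitching loop using OpenCV (one of the open source libraries used in this work) is given below. The camera indices and the per-frame use of the high-level Stitcher class are assumptions for illustration; a real implementation would typically calibrate the cameras once and reuse the computed mapping for every frame.

    import cv2

    # Open the four cab-mounted cameras (device indices are assumed).
    captures = [cv2.VideoCapture(i) for i in range(4)]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)

    while True:
        frames = []
        for cap in captures:
            ok, frame = cap.read()
            if ok:
                frames.append(frame)
        status, panorama = stitcher.stitch(frames)
        if status == cv2.Stitcher_OK:
            cv2.imshow("surround view", panorama)   # the operator pans in this image
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break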
2.4 Visualization and Operator Support
The stitched camera feed in this prototype is shown in a window on the computer
running the system. On top of the video, relevant information for the operator is
overlaid. These can be a map, vehicle speed, vehicle width markings and other
types of support. When the user pans in the video feed, the overlaid information
stays in place, but the video below will move. Using the GNSS data the position
and heading of the vehicle is displayed in the map. The range information from the
LiDARs is integrated so that unknown obstacles are shown in the map. The whole
image can be seen in Figure 2.2.
Figure 2.2: Screenshot from the stitched video feed overlaid with map,
speedometer and align assistance for autonomous paths.
2.5 User Controls
This prototype system uses multiple control inputs to evaluate different types of
controls. Simple consumer inputs are used such as a steering wheel or a gamepad
normally used for computer games. In addition to driving the vehicle, buttons are
used to control operator support functions used while driving, such as zooming in
a map or panning in the video feed. A simple user interface is present for the
operator to change settings for the maps and autonomous functions. In addition it
shows vehicle information together with a map. The user interface can be seen in
Figure 2.3.
2.6 Autonomous Functions
The truck can follow pre-recorded paths autonomously. When in autonomous mode
the truck will stop for obstacles using the data from the LiDAR sensors. The truck
will then wait until the obstacle disappears. One of the primary purposes of this
teleoperation system is to control the vehicle when it cannot navigate autonomously.
That could be when it has stopped in front of a stationary obstacle. Therefore the
system can interrupt the autonomous navigation and take control over the vehicle
so the operator can drive manually. When the manual override is done, the operator can order the truck to resume its autonomous task.
3 Existing Solutions and Standards
In order to gain insight and conform to standards a literature study and a market
survey were conducted. The survey investigated the solutions
offered on the market today, mainly based on the information given on the respective
manufacturers' websites together with press releases and articles. Since most of the inves-
tigated solutions are new, information about the systems and the performance is
limited.
The survey is divided into three parts. First the findings from a study of existing
standards that apply for this prototype is presented. Then remote control sys-
tems integrated into the vehicle from the manufacturer called Original Equipment
Manufacturer (OEM) solutions are presented. Following are aftermarket or retrofit
solutions where existing equipment is augmented with third party technology. Each
solution is based on either of the following three categories or a combination of them:
Line Of Sight (LOS) remote control, teleoperation or fully autonomous functional-
ity.
3.1 Existing standards for remote control
Using standards allows for a more unified market where accessories are compati-
ble with different platforms and equipment from several manufacturers can work
together. But by creating a closed ecosystem the manufacturer can sell their own
products or products from selected partners. The standards relevant for this project
are standards that dictate how to send commands to an autonomous vehicle or
pause an ongoing autonomous mission. Literature and the main standard associ-
ations (ISO, IEEE, ANSI, IEC etc) were surveyed but standards for this specific
application have not been developed yet. Standards exist for testing this type of
product ready for production, but since this is an early prototype it is not applica-
ble.
3.2 Original Equipment Manufacturer Solutions
A number of OEM solutions have been examined with the following manufacturers;
Caterpillar, Komatsu, Hitachi/Wenco, Sandvik and Atlas Copco. All these com-
panies have a complete solution on the market and further implementations are
undergoing. The majority of these implementations are in-house solutions that only
work with the manufacturer or specific partners’ vehicles and machines. The results
from the OEM survey can be viewed in Table 3.1 and 3.2.
Table 3.1: OEM solutions table 1 of 2
Caterpillar Caterpillar Caterpillar Komatsu Hitachi/Wenco
AHS1
Model/Type D10T/D11T[7] 793F[8] R1700G[9] AHS1[11]
930E/830E[10]
Vehicle/Equipment Bulldozer MiningTrucks WheelLoader MiningTrucks MiningTrucks
Operation Area SurfaceLevel SurfaceLevel Underground SurfaceLevel SurfaceLevel
LOS Remote Yes N/A No Yes Yes
Teleoperation Yes N/A Yes Yes Yes
Autonomous No Yes Semi Semi In development
Multiple Vehicles N/A Yes, Cat Minestar N/A Yes Yes
Radio E-Stopat
Communication WiFi N/A N/A
0.9/2.4GHz 919MHz
Table 3.2: OEM solutions table 2 of 2
Sandvik Sandvik Sandvik
Atlas Copco Atlas Copco
AutoMine AutoMine AutoMine
Benchremote[16]
Model/Type AHS1[12] Loading[13] SurfaceDrilling[14] Scooptram[15]
SmartROCD65
Vehicle/Equipment Trucks Loaders DrillMining WheelLoader Drilling
SurfaceLevel/
Operation Area Underground Underground SurfaceLevel Underground
Underground
LOS Remote N/A Yes Yes Yes Yes
Teleoperation N/A Yes Yes Yes No
Autonomous N/A Semi No Semi Yes
Multiple Vehicles Yes Yes Yes N/A Yes, up to 3
Communication N/A N/A N/A Bluetooth/WiFi WiFi
Caterpillar’s Minestar system [17] is a complete system for mining activities from
monitoring, diagnosing, detection and command. The system is scalable to fit dif-
ferent needs and expandable for development. Komatsu [10] has a similar system
as Caterpillar’s. They both function by sending certain commands for final position
and speed, and the trucks will navigate autonomously. Positioning is done using
GNSS which requires that the tasks are performed above ground. Hitachi/Wenco
are developing a similar autonomous haulage system [11] that is to be launched in
2017.
AutoMine is a system developed by Sandvik [12] which is one of the world’s lead-
ing companies in automation of mining operations. AutoMine consists of mainly
three different parts; AHS1, loading and surface drilling. The AHS works similar
to previously described competitors Cat and Komatsu. The AutoMine Loading can
be controlled by teleoperation and has the ability to drive autonomously when trans-
porting the load. The operator can therefore handle multiple loaders simultaneously.
AutoMine surface drilling is a remote controlled drilling machine solution that can
be operated from both local and global sites. Multiple drills can be operated simul-
taneously by one operator.
Atlas Copco has a similar underground loading solution to Sandvik with their Scoop-
tram [15]. The loading is done by teleoperation but transportation can be done
1Autonomous Haulage System
autonomously. In addition Atlas Copco has an operating station for remote con-
trol of drilling machines [16]. Each station can handle up to three different drilling
machines simultaneously, but the station has to have free line of sight in order to
function.
3.3 Retrofit Solutions
There exist several aftermarket solutions for remote control of vehicles and ma-
chines. Most of the systems use line of sight remote control but some offer complete
solutions for autonomous driving and monitoring. For most of the solutions the
operation needs to be located at surface level due to the use of GNSS. The results
from the retrofit survey can be viewed in Table 3.3 and 3.4.
Table 3.3: Retrofit solutions table 1 of 2
Remquip ASI Robotics Hard-Line TorcRobotics
Model/Type [18] Mobius,NAV[19] [20] [21]
Mining,Trucks Construction Construction
Vehicle/Equipment Hydraulic Machines
Cars,etc.. Vehicles Vehicles
SurfaceLevel/
Operation Area SurfaceLevel SurfaceLevel SurfaceLevel
Underground
LOS Remote Yes Yes Yes Yes
Teleoperation No Yes Yes Yes
Autonomous No Yes No Semi
Multiple Vehicles No Yes No N/A
Communication Radio N/A Radio/WiFi N/A
Table 3.4: Retrofit solutions table 2 of 2
AEE Taurob Oryx Simulations
UniversalTeleoperation Interfacefor
Model/Type [22]
Control[23] Teleoperation[24]
SmallerConstruction ConstructionMachines,
Vehicle/Equipment 3DSimulators
Machines Trucks
SurfaceLevel/
Operation Area SurfaceLevel N/A
Underground
LOS Remote Yes Yes N/A
Teleoperation Yes Yes N/A
Autonomous Yes No No
Multiple Vehicles Yes No No
Communication WiFi N/A N/A
The companies most relevant to the project are ASI Robotics (Autonomous Solu-
tions, Inc) and AEE (Autonomous Earthmoving Equipment). Both have solutions
for autonomous driving. ASI robotics’ solution [19] can be used on several different
kinds of vehicles, from ordinary cars to construction and farming machines. Their
product is scalable from LOS remote control to autonomous driving of several ve-
hicles with their Mobius and NAV devices. The system is closed so it is difficult
to combine with other solutions. AEE can control smaller construction machines
autonomously. Similar to ASI the system is scalable from LOS remote control to
autonomous control with path planning.
Oryx Simulations does not offer remote control for vehicles but builds 3D simulators
[24] for construction vehicles. It is therefore interesting how the cab interface has
been implemented to achieve a realistic simulation of a real vehicle.
4 System Requirements
For the system to behave as intended, a number of requirements have to be specified.
What types of requirements are needed, how they influence the system and what
the actual requirement is, is described below. If applicable it is also stated how
the requirement will be evaluated. The background of the requirements origins
from studies of existing literature. These requirements can be viewed in Table 4.1
and have been prioritized depending on the importance for the functionality in the
system. Priority 1 is the highest and is set to requirements that are vital for the
system to work as intended. Properties with priority 2 are requirements that are to
be implemented, but the system would still work as intended without them. Priority
3 are features to expand and enhance the system. These are features that would be
interesting to evaluate to see if there is a performance increase.
Some requirements are implemented so that the value can be varied to test if it
affects the performance of operation. This is done in order to evaluate if the requirements
specified are appropriate. These include, for instance, video latency, frame rate variations
or proximity indication that can be enabled or disabled. This is also specified in
Table 4.1.
4.1 Identification of Requirements
Before creating the different parts of the system described in Chapter 2 - Applica-
tion Overview, requirements for each part need to be specified to achieve a certain
performance, driveability and level of safety. Cameras are used to create the sur-
round view of the truck and requirements on a certain field of view and frame rate
for the image are set. Relevant information has to be presented to the operator and
therefore it is specified what kind of information and how it should be presented,
this can include sensor information, maps, vehicle status etc.
Keeping latency or delay time small in the system is of great importance for remote
control. A requirement is set on the total round trip time from when the operator
gives an input until feedback from video and maps reaches the operator. It would be
beneficial to measure the latencies of the individual subsystems for evaluation purposes.
This can be done inside ROS since each message has time stamps and ROS uses synchronized
time (see section 5.2 - Framework - Robot Operating System).
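As a sketch of how this could look with ROS, the node below computes the age of each incoming video message from its header time stamp. The topic name is an assumption, and clock synchronization between the machines is required for the numbers to be meaningful.

    import rospy
    from sensor_msgs.msg import Image

    def callback(msg):
        # Latency = current ROS time minus the time stamp set by the publisher.
        delay = (rospy.Time.now() - msg.header.stamp).to_sec()
        rospy.loginfo("video latency: %.3f s", delay)

    rospy.init_node("latency_monitor")
    rospy.Subscriber("/camera/image_stitched", Image, callback)
    rospy.spin()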
Since the vehicle has autonomous functions implemented, requirements are needed to
make sure that the transition between teleoperation and autonomous mode is in a
stable state at all times, and also to define what will happen when the autonomous functionality
Table 4.1: Requirements on the system for teleoperation, priority and how to verify them

Criteria | Value | Variable | Priority | Verification

Autonomous synchronization
Manual takeover from autonomous | – | – | 1 | S
Resume autonomous after manual | – | – | 1 | S
Autonomous start | From path start | – | 1 | S
Autonomous start | Anywhere on path | – | 2 | S

Autonomous tasks
Record new paths in teleoperation mode | – | – | 2 | S

Communication link
Latency | Max 20 ms | Yes | 1 | L
Data capacity | Min 17 Mbit/s | Yes | 1 | C

Orientation Map
Fixed map and rotating vehicle | – | On/off | 2 | S
Rotating map and fixed vehicle | – | On/off | 2 | S
Representation of LiDAR data | – | On/off | 2 | S

Sensor data presentation
Speedometer | Visible | – | 1 | S
Vehicle attitude | Visible at danger | – | 3 | S
Distance to obstacles | Visible when close | – | 2 | S
Proximity warning | Visible when close | On/off | 3 | S

Teleoperation
Speed limit | 30 km/h | – | 1 | S & T
Desired steering angle | – | – | 1 | S
Desired acceleration | – | – | 1 | S
Desired braking | – | – | 1 | S
Gearbox control | – | – | 2 | S
Parking brake control | – | – | 2 | S
Control types | Steering wheel | – | 1 | S
Control types | Gamepad, Joystick | – | 2 | S

Video
Latency | Max 500 ms | Yes | 1 | I
Frame rate | Min 15 FPS | Yes | 1 | I
Field of view | 360◦ | – | 1 | S
Image quality | Road sign, 15 metres | Yes | 2 | T

T = Live test, S = Verify in simulation, I = Implement meter, L = Measure with ping, C = Measure with iperf
cannot handle a certain situation. The operator should have the ability to abort
the autonomous drive and take over by driving manually but also resume paused
autonomous tasks.
4.2 Video Feedback
To perceive the environment of the vehicle, video is a very important tool. Different
ways of presenting the video to the user have effects on the operator's ability to
handle the vehicle. A narrow field of view makes it difficult for the driver to navigate
correctly because there is no peripheral vision where walls, ditches, lines or other
objects can be seen. This is known as the "keyhole effect" [25][26]. It has been
found [27] that a restricted field of view negatively influences the user's ability to
estimate depths and the perception of the environment. Because a driver relies on
the "tangent point" [28] when driving through a curve it makes it more difficult to
navigate through curves with reduced peripheral vision.
A wide field of view can counteract these negative effects; it will be easier for the
operator to interpret the surroundings, navigate and control the vehicle. But since
the larger image is presented in the same visual space, there is a lack of detail in the
image compared to a closer one. The big quantity of visual information makes the
perceived speed much higher. This can make the operator drive much slower than
needed [29], resulting in an inefficient operation.
The aim for the field of view is to get a complete 360◦ view around the vehicle.
However depending on the presentation to the operator either using a monitor setup
or head mounted display (HMD) the presented field may differ. If a HMD is used,
the full 360◦ view will not be displayed but instead the operator will be able to "look
around" in 360◦. The monitor setup also dictates how much of the image that will
be shown. With a smaller monitor it might be better to display a smaller view of
the surroundings and to let the user pan, with multiple monitors maybe the whole
image can be displayed to create a full view.
The frame rate of the video stream is important to get a smooth experience and
enough visual information when viewing the video stream, and it is specified to a
minimum of 15 FPS. The frame rate will be measured in the video processing to
evaluate if the set requirement is appropriate.
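A simple way to measure the achieved frame rate in the video processing is to count frames over a time window, as in the sketch below (the video source index is a placeholder):

    import time
    import cv2

    cap = cv2.VideoCapture(0)          # placeholder video source
    frame_count, t0 = 0, time.time()
    while cap.isOpened():
        ok, _ = cap.read()
        if not ok:
            break
        frame_count += 1
        elapsed = time.time() - t0
        if elapsed >= 1.0:
            print("measured frame rate: %.1f FPS (requirement: at least 15)" % (frame_count / elapsed))
            frame_count, t0 = 0, time.time()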
A proposed solution to evaluate the quality of the images is that a human should
be visible or that a road speed limit sign should be readable at certain
distances. The distance required depends on the travelling speed of the vehicle, the
faster the vehicle moves the longer the stopping distance will be. It is here specified
to 15 meters. Obstacles need to be observed early enough to stop the vehicle if
necessary.
4.3 Latency Effects
The delay time from the input to the response the operator experiences is known
as latency. This is one of the most challenging problems [30][29] to deal with in
remote control of vehicles. Depending on the amount of latency it may not even
be possible to achieve manual remote control. This is because the system might be
unstable if it takes several seconds for the vehicle to respond to the commands from
the operator. The video and sensor data which is the response to the operator will
be old and therefore incorrect. However humans are able to compensate for delays
[30] and instead of making continuous inputs, the operation will turn into a stop and
wait scenario when latency reaches about one second. Large delays will therefore
impact the safety, operation times and also the performance and efficiency.
Large latencies can induce motion/cyber sickness [31] as the visual effects lag behind
reality. High latency will also reduce the perceived telepresence [29], the perception
of being present in a virtual environment. In the presence of large latencies, the
operator might not be able to see an obstacle emerging suddenly into the trajectory,
and thus not being able to avoid it or brake in time. Therefore it is important
that there is an automatic emergency braking system [32] in place if the latency is
large.
Since it is of great importance to keep the delay time low to get good performance,
the total round trip time from input controls to video feedback is set to 500 ms and
it will be measured through all subsystems in the simulation to achieve the total
latency.
4.4 Sensor Data Presentation
Relevant data needs to be presented to the operator, and requirements are therefore
set to present the speed of the vehicle, its attitude, and the distance to upcoming
objects together with proximity warnings. This information can be presented either on
a separate monitor or as head-up information in the video stream. In order not to show
unnecessary data, the attitude of the vehicle and the distance to objects may only be
made visible when needed, such as when the vehicle approaches a dangerous attitude
or comes close to objects and obstacles. Other data of interest for driving and
monitoring the vehicle could be fault codes, fuel usage, gear indicator, rpm etc.
4.4.1 Map
A map of the surroundings is needed to display the vehicle together with the locations
of obstacles and work areas. The vehicle is seen from a top-down view, either fixed with
the map rotating or with a fixed map and the vehicle rotating, as mentioned in section
5.3.3 - Maps. The size of the vehicle and the distance to near surroundings should be
displayed true to scale in the map to give the operator a better intuition of how far
the vehicle is from an obstacle.
4.5 Driver Control Inputs
When maneuvering the vehicle in teleoperated mode the natural choice is a steering
wheel with throttle and brake pedals in order to mimic sitting in the vehicle. However,
evaluating other types of control inputs, such as gamepads and joysticks, could show
that they improve operation. Consequently, multiple input types are required for
evaluation in this implementation; more about this can be found in 5.3.4 - Control
Inputs.
4.6 Autonomous Synchronization
The takeover between manual teleoperation and autonomous driving has to be spec-
ified. When the vehicle is driving in autonomous mode the operator should be able
to take control of the vehicle at any point independent of the state of the vehicle.
When in manual mode it should be possible to start autonomous tasks and also re-
sume tasks if interrupted. Autonomous tasks and paths should be able to be defined
while driving in teleoperation mode and stored for later use.
The autonomous vehicle follows pre-recorded paths (see section 5.4.2 - Autonomous
Functions). In order to start autonomous navigation the vehicle needs to be stopped
on such a path before the autopilot is engaged. The vehicle will then follow the path
until it reaches either a point on the path specified by the operator or the end of the
path, where it will stop. While the vehicle is driving autonomously the system will always be
able to switch over to manual control. The vehicle will then stop before manual
control is granted. A requirement is that the vehicle should be able to resume its
autonomous drive after the manual control. This requires the operator to stop the
vehicle on the current path and order it to resume.
4.7 Communication Interface
To control the vehicle, interface commands need to be transmitted from the control
center to the vehicle. These commands have to be specified to meet the system
requirements. Essential commands to control the vehicle in both autonomous and
teleoperation mode are the desired steering angle, throttle and brake. For full
maneuverability in teleoperation mode, commands for shifting gears are required to be
able to reverse, together with parking brake commands. More specific commands for
the system can be the ability to tip the platform etc. Other useful commands are
control of the lights on the truck, which includes high beams for use in darkness and
turn signals to indicate direction in intersections. On the real truck this requires
access to the vehicle's CAN (Controller Area Network) interface, which is the data bus
on the vehicle, but in the simulation this does not exist.
Status messages from the vehicle to the control center are required to monitor the
condition of the vehicle and provide feedback from the driving. In addition to the
messages from the external sensors used, a number of data messages are needed. These
can include the actual steering angle, speed, rpm and gear indicator. If fault codes are
set in the vehicle, these need to be forwarded to the operator in order to take
appropriate actions. Other status messages that may benefit operation are different
kinds of status indicators for the vehicle, such as whether the high beams are on, the
fuel level, the load weight etc.
4.8 Communication Link
The communication link between the control center and the vehicle could be either
wired or wireless. For wireless LAN (Local Area Network) connections the IEEE 802.11
standards exist; on the 2.4 GHz band the latest iteration is 802.11n, and the most
recent 5 GHz technology is 802.11ac. The maximum throughput using 802.11n is
600 Mbit/s over three data channels, and for 802.11ac the maximum is 1300 Mbit/s
[33]. However, when the frequency used for transmitting data is increased, the range is
shortened, so 802.11ac at 5 GHz gives higher throughput but lower range [34]. There is
more interference on the 2.4 GHz band since other wireless protocols and devices use
this frequency, such as Bluetooth, radio and microwave ovens. This decreases throughput
and range [35], together with an increasing number of packets lost when multiple devices
transmit at the same time. Obstacles and interference with other devices have a direct
impact on the range, therefore it is difficult to give a specific range for WLAN. A
general rule [36][37] is that for 2.4 GHz the range is up to around 50 metres indoors
and up to 100 metres outdoors, and for 5 GHz it is approximately one third of these
ranges.
4.8.1 Free-space Path Loss
The loss in signal strength of an electromagnetic wave can be expressed as the
Free-Space Path Loss (FSPL) and can be calculated in dB as

FSPL(dB) = 20·log10(d) + 20·log10(f) + 20·log10(4π/c)    (4.1)
where d is the distance in metres, f is the signal frequency in Hz and c is the speed
of light in m/s. So by keeping the FSPL constant, the distance can be calculated
for some commonly used frequencies as can be viewed in Table 4.2. The FSPL is
set constant to 70 dB and the frequencies used are 240 MHz, 2.4 GHz and 5 GHz,
which correspond to mid-range radio and Wi-Fi. As can be seen, using a lower
transmission frequency extends the range, but a lower frequency also reduces the
amount of data that can be transmitted. One way to utilize these properties is to
send the heavy data (camera images) over Wi-Fi and the smaller but more critical
commands (steering commands) over radio.
Distance (m)   FSPL (dB)   Frequency (Hz)
15             70          5·10^9
31             70          2.4·10^9
314            70          240·10^6

Table 4.2: Distance for some commonly used frequencies at a constant free-space path loss of 70 dB
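As an illustration, the short sketch below solves Equation 4.1 for the distance at a fixed path loss and reproduces the values in Table 4.2 (rounded in the table). It is a standalone example, not part of the implemented system.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss (Eq. 4.1) in dB."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

def distance_for_fspl(fspl: float, freq_hz: float) -> float:
    """Solve Eq. 4.1 for the distance at a given path loss and frequency."""
    return C / (4 * math.pi * freq_hz) * 10 ** (fspl / 20)

if __name__ == "__main__":
    for f in (240e6, 2.4e9, 5e9):
        print(f"{f/1e6:7.0f} MHz -> {distance_for_fspl(70.0, f):6.1f} m at 70 dB")
```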
4.8.2 Alternatives to Wireless
A wired connection will affect the maneuverability of the vehicle since the vehicle
will only be able to follow one path and go back the same way in order not to tangle
the cable. This type of communication is used in mines where trucks and diggers
mainly follow the same path in a tunnel and the cable is managed on the vehicle
as it drives. By using a wired connection a higher throughput and less latency can
be achieved compared to a wireless link. With a cable, the disadvantage of interference
from other radio communication is reduced to a minimum, since the data has its own
transmission medium. Network communication has overhead in the transmission which
negatively impacts the latency. The overhead is considerably higher for wireless
communication [38] due to more error checks and acknowledgements.
A combination of wired and wireless communication can be used. The main distance
from the control center to the work site can be covered by a wired link, and the final
distance at the site can be wireless to let the vehicle maneuver freely. If the wired part
of a combined link is reasonably short, the whole connection can be treated as just the
wireless link, since the wireless link is slower and cannot carry the same amount of
data as the wired one.
4.8.3 5th Generation Wireless Systems
Wireless communication systems are continuously developing and the fifth genera-
tion (5G) is the next major step. However, the systems will not be fully available
until 2020 [39]. High expectations are set on this generation since more devices are
being connected with the advent of the Internet of Things (IoT). Vehicle remote control
is mentioned as an application of 5G. For safety-critical systems such as vehicle
communication, the intention is to reach latencies [39] as low as 1 ms, and 10 ms in
general.
A pilot project called Pilot for Industrial Mobile Communication in Mining (PIMM)
[40], a cooperation between Ericsson, ABB, Boliden, SICS Swedish ICT and Volvo
Construction Equipment started in spring 2015 [41], intends to implement 5G
communication to remotely control a Volvo truck transporting ore in an underground
mine. The programme intends to initiate research that can be applied in a variety of
applications and solutions within the usage of 5G.
4.8.4 Link Requirement
In this application the vehicle needs to be able to be maneuvered in all directions.
A wired communication link will not satisfy this behaviour and therefore a wireless
one is needed at the worksite. This will increase latency and decrease the amount of
data that can be transmitted. The number of cameras used and other sensor data
will set the requirements on how much data needs to be transmitted from the
vehicle to the control center.
Each of the used cameras (see section 5.3.1 - Cameras for details) can transmit up to
16 384 Kb/s, which for four cameras amounts to 65 536 Kb/s. Using half of that camera
bitrate gives a total of 32 768 Kb/s, or ~32 Mbit/s, which is taken as the minimum
requirement for the communication. However, performing the stitching process (see
5.3.2.1 - Image Stitching) onboard the vehicle and transferring only the current view
will reduce the amount of data that needs to be transferred. The size of the stitched
image presented to the operator will dictate the data needed. Lowering the requirement
to 16 Mbit/s still accounts for a large viewing image while halving the requirement.
The data for controlling the vehicle (requested steering angle,
speed etc.) will be significantly smaller. However, the capacity requirements on the link
depend on what sensor data is transmitted back to the control centre. The most
data-consuming sensors after the cameras are the laser scanners (see section
5.3.5.1 - Light Detection And Ranging for details). They transmit 720 floating-point
values of 32 bits each, 20 times per second, which totals ~0.5 Mbit/s. The odometry
and GNSS data are another 20 data points, also 32-bit floating points, which is
negligible compared to the video and LiDAR data.
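The figures above can be reproduced with the back-of-the-envelope calculation sketched below; it simply restates the arithmetic in the text and makes no further assumptions about the link.

```python
# Restating the link-budget arithmetic from the text (illustrative only).
camera_bitrate_kbps = 16_384                  # per camera, maximum setting
n_cameras = 4
cameras_total = camera_bitrate_kbps * n_cameras       # 65 536 Kb/s
cameras_halved = cameras_total // 2                   # 32 768 Kb/s, ~32 Mbit/s

lidar_points = 720                            # points per 360-degree scan
lidar_bits_per_point = 32                     # 32-bit floats
lidar_rate_hz = 20                            # scans per second
lidar_bitrate = lidar_points * lidar_bits_per_point * lidar_rate_hz  # bits/s

print(cameras_halved / 1024, "Mbit/s of video at halved camera bitrate")
print(lidar_bitrate / 1e6, "Mbit/s of LiDAR data (~0.5 Mbit/s)")
```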
The round-trip time for a byte of data over the communication link is set to a
maximum of 20 ms. The transmission needs to be stable in terms of latency spikes in
order not to reach the threshold for lost connection, which is specified to 200 ms.
Violations of the thresholds for the communication link in terms of lost connection and
packet loss have to be addressed: if the connection fails, the vehicle shall stop in order
to avoid accidents caused by incorrect control signals.
4.9 Safety Requirements
Safety requirements also need to be specified. However, autonomous construction
vehicles will not have the same safety requirements as road vehicles since the work site
is a closed area. The speeds are often lower, but safety and reliability still have to be
considered. To minimize risks if the controls, sensors or communication fail in some
way, the vehicle should have a speed limit that must not be exceeded in either
teleoperation or autonomous mode. This speed limit is here arbitrarily set to 30 km/h.
Furthermore, an auto-brake system is required in both modes so that the truck will
stop for obstacles. It should also be possible to override the emergency stop in
teleoperation mode by coming to a full stop and disabling it, for instance if the LiDAR
sensors are malfunctioning and making false detections. Emergency stop buttons inside
the truck and in the control center are required.
5 System Design
By dividing the system into smaller subsystems, the total solution becomes scalable and
flexible, as parts can be added or removed when the system develops or requirements
change. The full system consists of a vehicle and a control center for the operator, both
described in the upcoming sections. The framework used, Robot Operating System
(ROS), is described first, followed by the subsystems, including the cameras with the
stitching process, the additional sensors and the autonomy in cooperation with the
teleoperation. Lastly, the simulation set-up using the simulation tool Gazebo is
described.
5.1 System Architecture
Theproposedsystemconsistsoftwomainparts. Theuserinterface"ControlCenter"
and the autonomous vehicle "The Truck". The user interface reads input from the
operatorandrelaysittothevehicle. Thevehiclereturnssensordataandthestitched
video stream to the user interface which are displayed in order to give the operator
the best possible assessment of the vehicle state. The system is built up from smaller
subsystems called nodes that communicate with each other. The main parts of the
system can be seen in Figure 5.1 and are:
5.1.1 Vehicle
• Autonomous - The autonomous driver. Follows pre-recorded paths chosen
by the operator and sent to the truck. Uses sensors to determine its location
on the path and to avoid obstacles.
• Cameras - Four wide angle IP-cameras mounted on the vehicle with an over-
lapping field of view.
• Camera stitching - This node captures the streams from the cameras mounted
on the vehicle and processes them in order to create one large image as described
in section 5.3.2.1 - Image Stitching. The operator can then pan the image in
order to look around the vehicle.
• Current path - Stores the current path. It is used in two cases:
1. Autonomous mode - A path that is to be followed is sent by the user
interface from the Path server. The autonomous node will then follow
the loaded path.
2. Path recording - When recording a path it is saved into the current
path node. When the recording is finished, the path is sent back to the
Figure 5.1: System overview of control center and vehicle with communication
nodes
user interface and stored by the path server.
• Path recorder - Records when the vehicle is driven manually in order to be
able to drive the same path autonomously when the driver commands it.
• Sensors - All the sensors on board the vehicle. This includes odometry,
speedometry, RTK-GNSS, IMU, LiDAR. In addition to the sensor input some
signal processing is done in this node, such as merging all LiDARs into one
360◦ scan.
• Vehicle controls - The actual controls of the vehicle. Steering, gearbox,
handbrake, throttle, turn signals etc. These are controlled either by direct
user input in the user interface or by the autonomous node.
5.1.2 User Interface
• Controls input - Reads input from different control surfaces such as a steering
wheel or gamepads and translates the input to the appropriate data and passes
it on to the System coordinator.
• GUI - The GUI is used by the operator to interact with the vehicle in other
ways than driving it.
– Output - Autonomous status, position, mode, control and other infor-
mation useful to the operator is shown here. A map with all available
paths can also be shown. This can in the future be expanded with more
information such as fuel level, running hours etc.
– Input - The user can select options such as start path recording, choose
paths to drive autonomously, and select what is shown on the map and
in the video.
• Path server - Stores all recorded paths available for autonomous driving and
provides information to both Sensor visualization and GUI for presenta-
tion. Paths are sent to the System coordinator for autonomous drive and
from the vehicle newly recorded paths are received to be stored.
• Sensor visualization - Images are created to visualize the sensor data in a
human understandable way. For instance GNSS or other localization data is
used to position the vehicle on a map, and LiDAR data is used to indicate
obstacles. Paths from the Path server node are also drawn on the map to
indicate where an autonomous operation can be initiated or completed.
• System coordinator - The node that dictates if the autonomous node or the
operator is in control. It also handles the transition between autonomous and
manual control.
• Video combiner - Combines the images created in the Sensor visualization
node with the one from the Camera stitching node to create an augmented
video feed.
5.2 Framework - Robot Operating System
All the different subsystems have to communicate with each other in a safe and
reliable way with many different message types. This would be hard and time-consuming
to implement efficiently from scratch. The Robot Operating System1 (ROS) is an open
source framework for this that has been gaining popularity during the past few years.
It is a combination of communication, drivers, algorithms and other tools to aid the
creation of robots and vehicles. This leaves more time for the developers to develop new
functionality and features, while safety and performance concerns are taken care of by
the underlying system. Additional benefits are flexibility, scalability and ready-made
interfaces to other systems.
A typical ROS system is built up of many subsystems called nodes that send messages
to each other. Nodes are easily added or removed depending on what the application
demands. The nodes are written in either C++ or Python and a vast library of existing
nodes is available. However, since ROS is only a few years old and has evolved
significantly over that time, the available documentation is often incomplete and not
always accurate.
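As a minimal illustration of what a ROS node looks like, the sketch below publishes a single value on a topic at a fixed rate. The topic name and message type are chosen for the example only and are not the interfaces defined by this system.

```python
#!/usr/bin/env python
# Minimal ROS 1 node sketch: publishes a steering command at 10 Hz.
# The topic "/vehicle/steering_cmd" is an illustrative choice, not a
# topic defined by this system.
import rospy
from std_msgs.msg import Float32

def main():
    rospy.init_node("steering_cmd_publisher")
    pub = rospy.Publisher("/vehicle/steering_cmd", Float32, queue_size=10)
    rate = rospy.Rate(10)  # Hz
    while not rospy.is_shutdown():
        pub.publish(Float32(data=0.0))  # desired steering angle [rad]
        rate.sleep()

if __name__ == "__main__":
    main()
```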
5.3 Subsystems
This section describes the design choices and technical solutions of the subsystems
of the whole system. Since the vehicle is operated out of sight, the operator needs
to be able to track the vehicle in its surroundings. One way for the operator to
assess the vehicle’s placement is to use cameras mounted on the vehicle in order for
the operator to see the surroundings. Another approach is to use maps where the
1http://www.ros.org/about-ros/
vehicle location is presented with surrounding areas, obstacles and walls. These two
methods can be combined [42] to get a more accurate positioning of the vehicle.
However, too many cameras, maps and other inputs may lead to the operator losing
awareness of the surroundings [29] and reduce performance. Studies have shown that
using fewer screens but more accurate measurements gives better control of the
vehicle [25][43]. The operator may suffer from tunnel vision when operating and
concentrating on several screens simultaneously, which can instead lead to a loss of
awareness of the surroundings.
5.3.1 Cameras
To create the surround view around the vehicle, four wide-angle IP-cameras will be
mounted on the truck cab. They are placed so that the cameras overlap each other,
so that the images can be combined to one large image. This is visualized in Fig-
ure 5.2. The cameras use an Ethernet connection to transmit the data stream over
the Real Time Streaming Protocol (RTSP). This can then be fetched by a computer
for processing. The cameras were part of the pre-existing hardware inventory and were
therefore used. The actual camera used can be viewed in Figure 5.3. The cam-
eras can provide a resolution of either 1920 × 1080 or 1280 × 720 pixels in H.264
or MJPEG format. The bitrate can be chosen up to 16 384 Kb/s together with a
maximum frame rate of 25 frames per second.
Figure 5.2: Four cameras with a 120◦ FOV, mounted to capture a complete 360◦ view.
5.3.2 Image Processing
Image processing is done using OpenCV which is an open source library for image
analysis and manipulation. It has support for Nvidia CUDA [44] for processing
using the graphics processing unit (GPU). This is a major advantage when working
with large amounts of data that have to be processed quickly, such as images. The
Figure 5.3: IP camera used for surround view of the vehicle
GPU differs from the CPU in the way that it executes many calculations in parallel
with thousands of simpler cores rather than a few powerful ones as in a CPU. OpenCV
is also included in ROS (see 5.2 - Framework - Robot Operating System ) and can
therefore be used directly in the simulation or it can be used standalone to process
the streams.
The video streams from the IP cameras are processed and stitched together into
one single stream with a 360◦ coverage. Information that is crucial to the opera-
tor is then overlaid on the stitched image. One proposed solution is to use a head
mounted display (HMD) together with a spherical video feed. This can give the
operator a "virtual cockpit" where it is possible to look around by moving the head.
However this adds significantly more computations to the already demanding stitch-
ing process. The image must be warped to a spherical projection and displayed as
two images, one for each eye. The head tracking has to be processed and applied to
the image. This will introduce more latency in the video feed and/or lower the
frame rate [45]. Due to limitations in time and complexity, an HMD will not be
implemented. The solution that will be used is a setup with one or multiple monitors
where the video stream can be displayed together with a graphical user interface
(see 5.4.1 - Graphical User Interface ) with additional controls.
5.3.2.1 Image Stitching
A generic process to stitch images [46] is described below, followed by the special
case used in this implementation.
1. Feature detection and classification -Theimagesareanalyzedfordistinct
features and these features are saved for each image.
2. Feature matching - The features found in the images are compared to de-
termine which images are overlapping and where.
3. Image comparison - Using the features found and matched in the previous
steps, the homography matrices H for relating the overlapping images are
calculated. H relates one image to another so that the x and y coordinates of each
pixel in the transposed image (p'_x, p'_y) relate to the original (p_x, p_y) according
to Equation 5.1.
(p'_x, p'_y, 1)^T = H · (p_x, p_y, 1)^T,   where   H = [[f_x, s_α, h_x], [s_φ, f_y, h_y], [0, 0, 1]]    (5.1)
where f scales the image, h translates it and s_α, s_φ shear it, as can be seen in
Figure 5.4.
Figure 5.4: Illustration of the effects of the homography matrix components f, h, s_α and s_φ.
4. Image placement and transformation - With the matrix H from above,
overlapping images are transformed. They are then placed together so that
features overlap each other.
5. Blending - To achieve a smooth transition between the images, blending is
applied. A regular linear blend sets the destination pixel (D) to a weighted
mean of the overlapping source pixels (S1,S2) as seen in Equation 5.2
D_{x,y,c} = S1_{x,y,c} · α + S2_{x,y,c} · (1 − α),   α ∈ [0,1], ∀c, when x,y ∈ blend area.    (5.2)
where x and y are the position of the pixel and c is the color channel of the
image. The blend area is dictated by the overlapping areas and the desired
blend width. α varies from 0 to 1 in the desired area of the blend. A wider
seam will smooth out larger discrepancies such as exposure differences. If the
images are not exactly lined up or the homography estimation is not perfect,
there will be ghosting in the seams of the images. Ghosting is when traces of
an object appear semi-transparent in multiple locations of the combined
image.
One way to address this problem is to use multiband blending. The desired
blend area is passed through a number of band pass filters. Then the different
frequency ranges are blended separately in the same way as the linear blend.
The high frequency part of the blend area will be blended with a short seam,
and the low frequency area will be blended with a wider seam. This results in
a less distinguishable blending.
6. Projecting - The produced image is an image lying flat in a 2D plane. This
image can be projected using different mappings to suit the way the image
will be displayed. For this application a cylindrical or spherical projection
will be suitable to achieve the feeling of looking around in the surrounding
environment.
In this case the camera properties are known and their placement is static, so steps
1, 2, 3 and 4 only have to be done once. The homography matrices can be saved and
reused as long as the cameras do not move and are not exchanged for cameras with
other properties, which reduces computation.
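As an illustration of the per-frame work that remains once the homography is known, the OpenCV sketch below warps a second camera image with a stored matrix and applies a simple linear blend (Equation 5.2) over the seam. The file name, canvas size and blend width are assumptions for the example; the actual pipeline works on four cameras and uses GPU acceleration.

```python
import cv2
import numpy as np

# H is assumed to map camera 2 into camera 1's image plane and to have been
# estimated offline (steps 1-4 above); the file name is illustrative.
H = np.load("homography_cam2_to_cam1.npy")  # 3x3 matrix, computed once

def stitch_pair(img1, img2, blend_px=100):
    """Warp img2 with the stored homography and linearly blend the seam."""
    h, w = img1.shape[:2]
    canvas_w = w * 2                               # room for the warped image
    warped = cv2.warpPerspective(img2, H, (canvas_w, h))

    out = np.zeros((h, canvas_w, 3), dtype=img1.dtype)
    out[:, :w] = img1
    # Linear blend (Eq. 5.2) over the last blend_px columns of img1.
    alpha = np.linspace(1.0, 0.0, blend_px)[None, :, None]
    seam = slice(w - blend_px, w)
    out[:, seam] = (img1[:, seam] * alpha
                    + warped[:, seam] * (1.0 - alpha)).astype(img1.dtype)
    out[:, w:] = warped[:, w:]
    return out
```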
Performance Concerns
While manipulating an image in software the image is fully uncompressed and rep-
resented as a 3D matrix; W × H × C. W and H are the width and height of the
image and C is the number of channels of the image. The number of channels is
determined by the color space and is usually three (Red, Green, Blue or Hue,
Saturation and Value) for color images and one for gray-scale images. Each element
of the matrix represents the amount of each channel for each pixel. This is expressed
either as a floating point number or an integer depending on quality and memory
constraints. It is shown below that the amount of data that has to be processed
quickly becomes large when image size and color depth increases.
As described in section 5.3.1 - Cameras four cameras are used. These cameras
can output images with the resolution of up to 1920 × 1080 pixels. The images
from these cameras are represented with 3 channels of 32 bit floating point numbers
(4 Bytes). Capturing the compressed images at 25 FPS and unpacking them into
matrices in order for manipulation, the amount of data totals to around 2.5 GB/s
(Eq 5.3).
W · H · C · M_type · n_cameras · f = 1920 · 1080 · 3 · 4 · 4 · 25 ≈ 2.5 GB/s    (5.3)
Considering that the pixels then are to be manipulated, copied into one big image
and blended, the amount of data that has to be processed quickly becomes multiple
times the size of the initially captured images. Because the theoretical maximum
memory throughput2 of the computers used (DDR3 memory) is 12.8 GB/s, it is
apparent that the computer's performance can become a bottleneck, especially if it is
doing other computations in parallel with the stitching.
5.3.2.2 Information Overlay
When the operator is driving the vehicle the primary view is the stitched video
stream. Information that is important to the operator will then be overlaid onto the
video so it can be seen without looking away from the video stream. A map is shown
in the top right corner. In the lower left corner information about and distance to
the current chosen path is presented and in the lower right corner a speedometer is
displayed. This can be seen in Figure 5.5. The overlays are semi-transparent so it is
possible to see objects behind them. The process of blending an image onto another is
done by calculating a weighted average of the two overlapping pixels from the two
source images. The weight is called a mask and is a grey scale image. By performing
a threshold operation on the image to be overlaid the mask is created only where
there is image information. This part is set to a grey value allowing information
2http://www.crucial.com/usa/en/support-memory-speeds-compatability
from both images to be visible. The operator can customize the overlays and choose
what is shown.
Figure 5.5: 110◦ Stitched camera view from vehicle in simulation with map and
offset to chosen path.
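A minimal sketch of the mask-based blending described above is given below; the threshold level, the blend weight and the placement of the widget are assumptions for the example.

```python
import cv2
import numpy as np

def overlay(frame, widget, x, y, alpha=0.6):
    """Blend a semi-transparent widget (e.g. map or speedometer) onto the
    stitched video frame. A mask is built from the widget itself so only
    pixels that carry information are blended; alpha is an assumed weight."""
    h, w = widget.shape[:2]
    roi = frame[y:y + h, x:x + w]

    gray = cv2.cvtColor(widget, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)  # non-black pixels
    mask3 = cv2.merge([mask, mask, mask]).astype(np.float32) / 255.0

    blended = roi * (1 - mask3 * alpha) + widget * (mask3 * alpha)
    frame[y:y + h, x:x + w] = blended.astype(frame.dtype)
    return frame
```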
5.3.3 Maps
Using a map where the operator can view the vehicle from a top-down perspective
gives the operator an overview of the area and simplifies navigation. The alignment of
the map can be either fixed or rotating. If the map is fixed and the vehicle rotates,
humans tend to rotate the map in their minds [47] in order to position themselves.
Using a rotating map instead, where the vehicle is fixed with the front pointing
upwards has been proven [48] to be better for remote control and maneuvering.
The map can either be produced beforehand or be created as the vehicle travels.
A predefined map will be more accurate but if the surroundings are changing over
time there is a benefit of creating the maps while moving. One of the more popular
methods for creating these maps is SLAM [49] where the vehicle is able to both
create and at the same time keep track of itself in the map.
Because the area where the vehicle is going to operate is known, the map is created
beforehand. Then it is used as a background with the vehicle inserted into it.
Because of the high accuracy of the positioning system and the pre-produced map,
the vehicle's position is presented very precisely. Creation of the map together with
the vehicle and information overlays is done in OpenCV. Two maps are created in the
same node: one map is fixed with the vehicle moving in it, while the other rotates
around the vehicle, which is fixed pointing upwards. The different maps can be viewed
in Figure 5.6. This gives the operator the choice to change between these two maps
during operation, and the maps can be shown in different
environments such as the GUI or overlaid in the video. In addition to the vehicle
itself the LiDAR sensor data is drawn in the map and in Figure 5.7 it can be seen
how the sensors scan the environment in the simulation and how it is presented
to the operator. The LiDAR data provides useful information on how accurate the
(a) Fixed map with rotating vehicle with north upwards. (b) Rotating map with fixed vehicle pointing upwards.
Figure 5.6: Overview maps with surroundings and recorded paths.
positioning of the vehicle in the map is. But the primary purpose is to draw obstacles
that are not in the map, which could be other vehicles or other objects. Depending on
the distance, the color changes from green at a safe distance via yellow to red if it is
dangerously close. The stored paths are also drawn on the map.
This is both to aid planning the use of autonomous functions, and to help navigate
to a selected path.
(a) Map with obstacle detection. (b) Obstacle and laser scan from simulation.
Figure 5.7: LiDAR sensor data presentation.
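As an illustration of the rotating-map variant, the sketch below rotates the pre-made map around the vehicle position so the vehicle always points upwards and crops a window around it. The sign convention of the rotation, the view size and the pixel coordinates are assumptions here and depend on the coordinate conventions used.

```python
import cv2

def rotating_map_view(map_img, veh_x_px, veh_y_px, heading_deg, view=400):
    """Return a crop of the pre-made map rotated so the vehicle points upwards
    (the 'rotating map, fixed vehicle' variant). Assumes the vehicle is at
    least view/2 pixels from the map border."""
    # Rotate the whole map around the vehicle position; the sign of the
    # angle depends on the heading convention and is an assumption here.
    M = cv2.getRotationMatrix2D((float(veh_x_px), float(veh_y_px)),
                                -heading_deg, 1.0)
    rotated = cv2.warpAffine(map_img, M,
                             (map_img.shape[1], map_img.shape[0]))
    # Crop a window centred on the vehicle.
    half = view // 2
    return rotated[veh_y_px - half:veh_y_px + half,
                   veh_x_px - half:veh_x_px + half]
```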
5.3.4 Control Inputs
Different types of control inputs are implemented in the system to be able to evaluate
the performance implications of the different controls. An interface for a normal
steering wheel with throttle and brake pedals, made for computer games, is
implemented. Further, two different gamepad controllers are interfaced alongside a
traditional computer keyboard. In addition to the controls for steering,
acceleration and braking, commands for zooming in the map in the video stream
reads this time and compares it to its clock and using this offset calculates how far
away each satellite is. With this knowledge about multiple satellites the receiver can
calculate its position. The more satellites that the receiver can see, the more exact
is the calculated position. This is used to track the vehicle’s position in the world
frame in order to navigate and visualize this information on a map. The system
consists of a primary GNSS unit and a secondary antenna. With this setup both
position and direction can be measured. The system has support for Real Time
Kinematic (RTK) GNSS and network RTK. This is a system [52] that measures
the phase of the GNSS carrier wave instead of the actual data. This wave is then
compared to the phase at a known location. This technology allows positioning with
a few centimeters accuracy compared to meters with conventional GNSS. A major
limitation of the technology is that it only works close to the reference point. If
there is a network of known reference points with GNSS receivers over a large area
the phase of the carrier wave at a specific location can be calculated and sent to the
receiver. This is known as network RTK and can be used if the vehicle is to be
used in large areas, or different areas where there is not a possibility to install a new
reference point.
5.3.5.3 Inertial Measurement Unit
An Inertial Measurement Unit (IMU) is used to measure the orientation of the ve-
hicle in three dimensions using accelerometers, gyroscope and magnetometer. The
accelerometer measures change in velocity, the gyroscope measures the change in
angles (roll, pitch and yaw) and the magnetometer is an electronic compass measur-
ing the earth's magnetic field. Using these measurements the attitude of the vehicle
can be assessed. During tests of a teleoperated vehicle performed by the US Navy, the
most common incident was almost-roll-over accidents [45], where lack of attitude
perception was the biggest contributor to the incidents [53]. It has been shown [54]
that an operator tends to navigate more efficiently using a camera view that moves
with respect to the vehicle but stays level with respect to gravity, compared to a
camera fixed to the vehicle with a roll attitude indicator overlaid on the video feed.
Because a fixed camera configuration is used here, a dangerous attitude reading, such
as a large sideways tilt, will be displayed to the operator.
5.4 System Coordinator
The nodes in the system are coordinated by a coordinating node. It keeps track of
the states of the system and issues commands depending on the inputs it receives.
The main interaction with the operator is through the GUI.
ing while driving the vehicle. Below the functionality is described together with
synchronization between autonomous and teleoperation mode.
5.4.2.1 Navigation
When navigating in autonomous mode the vehicle follows pre-recorded paths. Using
the on-board sensors it scans the surroundings to estimate its position in the pre-
made map. If satellite positioning is available this is used as well. Available paths
are displayed in the map, and to start autonomous navigation one of these paths is
chosen in the GUI. The truck then needs to be driven manually to the beginning of
the path. Distance and angle offsets to guide the driver to the correct position and
alignment are presented to the operator as head-up information in the video feed.
When the truck is positioned correctly the autonomous navigation can be initiated.
While driving, if the truck senses an obstacle or faces other problems it will stop
and wait for the obstacle to disappear or for an operator to start manual control.
When driving manually, autonomous navigation can be resumed by the operator
stopping the vehicle on the current path and switching over to autonomous mode
again.
5.4.2.2 Synchronization
To prevent dangerous behaviour from the vehicle when switching between control
modes, some simple rules for the implementation have been set and are presented here.
Switching from manual teleoperated control to autonomous drive can only be done
when the vehicle is stopped on and aligned to the chosen path. Then autonomous
mode can be initiated. When the autonomous driver has confirmed that the position
is valid and that navigation from there is possible, control will be granted to the
autonomous functions. When a request for manual control is sent to the vehicle it
will stop before handing over the controls. This can be overridden if the truck is
on its way to collide with something it cannot see or if the autonomous functions
are failing in some other way. If the navigation is interrupted by manual control,
autonomous navigation can only be resumed if the vehicle is stopped on the current
path. When the truck has reached the end of the path used for navigation, it will
stop and wait for the operator to take further actions.
5.4.2.3 Paths
When a path is chosen, the operator needs to drive to a point of that path in
order to initiate autonomous functions. In addition to the map the parallel and
perpendicular distance offset to the closest point on the path is calculated and
presented. The angular offset between the vehicle’s current heading and the heading
required by the path is also displayed. The closest point is calculated as a straight
line regardless of walls and obstacles. This is intended to be used in addition to the
map for a more precise positioning of the vehicle. Presentation of this information
can be seen in Figure 5.5 and 5.8. The information is red until the vehicle is inside
the set threshold that is needed to initiate autonomous navigation, then it is set
to green. Only when all three parts (parallel, perpendicular and angular offset) are
inside the threshold can autonomous navigation be initiated.
The paths used for navigation are needed in several subsystems. Naturally the
autonomous path-follower needs the paths to navigate, but they also need to be drawn
into the map, presented in the GUI, and used to provide navigation assistance to the
operator driving to the path. When a path is recorded it is stored as a plain text file
consisting of points, or "bread-crumbs", where each point includes the position
together with the vehicle angle and the speed at that particular point. Instead of all
nodes that need this information knowing the location of the files and having to access
the file system, a path server loads all paths. Nodes that need the paths can then
request the information they need.
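A sketch of how such a path file could be loaded is given below; the column layout (x, y, heading, speed) and the whitespace separation are assumptions about the plain-text format described above.

```python
# Sketch of loading a recorded path ("bread-crumbs") from a plain text file.
from collections import namedtuple

PathPoint = namedtuple("PathPoint", "x y heading speed")

def load_path(filename):
    """Read one path point per line; blank lines are ignored."""
    points = []
    with open(filename) as f:
        for line in f:
            if not line.strip():
                continue
            x, y, heading, speed = map(float, line.split())
            points.append(PathPoint(x, y, heading, speed))
    return points
```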
5.5 Gazebo Simulation
Bundled with ROS is a simulation tool called Gazebo. Gazebo includes physics
engines, high quality graphics and integration with ROS. This makes it straight-
forward to test the system built for the real vehicle directly with the simulation
without major modifications or additional software.
A complete model of the vehicle with control interfaces and existing sensors is set
up to test and evaluate the features of the system before moving to a real vehicle.
The model is implemented to simulate the Volvo truck described in section 2.2 -
Evaluation vehicle with the same dimensions and properties. A screenshot of the model
in Gazebo can be viewed in Figure 5.9.
The physics engine in Gazebo does all the calculations, so the major work in building
the simulation is defining the model of the vehicle and the world. A model is built
using building blocks called links. These can have different properties, but the
most basic are visual, inertial and collision and more about this can be seen in
5.5.1 - Visual , 5.5.2 - Mass and Inertia and 5.5.3 - Collision. These links are then
fastened together with what are called joints. The joints can be of different types
depending on how the links should interact with each other which is elaborated on in
5.5.4 - Propulsion. The world is built in a similar fashion, but with multiple models
pre-defined in Gazebo, see 5.5.7 - World. When the model and world are built and
added, the inputs to the simulator are throttle, brake and steering. The simulator
outputs a visual 3D view of the vehicle in the world, poses for all links, and the
outputs from all sensors.
5.5.1 Visual
The basic building blocks when building a model are called links. A link in Gazebo
can be defined by either basic shapes or what is called meshes. These meshes are
created in Computer-Aided Design (CAD) software and exported to a shape built
up from many small polygons. This model is created from a CAD drawing of the
real truck, divided into three parts: the truck, the load bed and a wheel. The
wheel is then added eight times in different poses. For performance reasons all parts
have been greatly simplified to be drawn by only around 5% of the original polygons
creating the mesh. The visual part is used for the visual representation in Gazebo,
LiDAR reflections and what the modelled cameras can see. The visual part of the
truck can be seen in Figure 5.9.
Figure 5.9: The model of the Volvo FMX truck in Gazebo simulation.
5.5.2 Mass and Inertia
The mass and inertial model is made simple for both performance concerns and
because a very exact model is not required in this application. The real truck
weighs around 22 000 kg [55] and this weight has been distributed in three blocks
and four axles and eight wheels as can be seen in Figure 5.10. The wheels weigh 50
kg and axles 150 kg each. The chassis has been modeled to weigh 4 000 kg, the cab
and engine 6 000 and the bed 10 000 kg.
Figure 5.10: The inertial model of the truck.
5.5.3 Collision
The collision model dictates when the model is touching another physical object in
the simulation. As in the previous sections, for performance concerns the collision
model is a greatly simplified model of the truck. The collision model is created as
a few simple shapes created in a CAD software and exported as a mesh with very
few polygons. The collision model can be seen in Figure 5.11.
Figure 5.11: The collision model of the truck.
5.5.4 Propulsion
The real FMX truck can raise its second and fourth pair of wheels when they are
not needed to improve maneuverability and decrease fuel consumption. The truck
is modeled with these pairs raised, hence it has four-wheel drive. As seen above, the
model is built from links and joints. The joints connecting the links together can
be of different types. The most common is a fixed joint, which is a rigid connection
between the links. The wheels are connected to the axles with a joint called
continuous, which can rotate continuously around a specified axis. The joint can be
specified to have a maximum angular velocity and a maximum torque. The angular
velocity is set to represent 30 km/h linear movement of the truck as is
specified in 4 - System Requirements. The maximum torque is set to 2400 Nm which
is the maximum torque of the Volvo FMX D13 engine. Connected to the joint is
a simple PID controller and the desired value to the controller is controlled by the
throttle. There is no gearbox modeled, since the gearbox in the real truck is a fully
automatic gearbox, and such realism is not needed from the model.
5.5.5 Steering
The wheels used for steering are connected to the truck with joints called revolute
which are hinge joints that can be specified to have a certain range of motion around
an axis. A position PID controller is connected to the joint setting the steering angle
of each wheel. The steering is implemented using an Ackermann steering model which
is illustrated in Figure 5.12. The angles for each wheel are calculated by the following
equations:
R = L / sin(δ)    (5.4)

δ_in = tan⁻¹( L / (R − D/2) ),    δ_out = tan⁻¹( L / (R + D/2) )    (5.5)

where L is the wheelbase, D is the axle width, R is the turning radius and δ the
desired steering angle. δ_in and δ_out are the actual wheel angles for the inner and
outer wheel. For L << R, Equation 5.5 can be simplified as δ_in ≈ L/(R − D/2) and
δ_out ≈ L/(R + D/2).
The maximum value of δ is 30° to represent the same maximum steering as the real
truck. The controllers are tuned to be very responsive to the desired angle of the
wheel. The dynamics of the steering is then modeled together with the calculations
of the Ackermann angles.
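The Ackermann relations translate directly into a small helper function, sketched below with angles in radians and lengths in metres; the handling of the straight-ahead case and the assumption of a left turn (positive δ) are implementation choices for the example.

```python
import math

def ackermann_angles(delta, wheelbase, axle_width):
    """Inner and outer wheel angles from Equations 5.4-5.5 for a desired
    steering angle delta [rad]. Assumes a left turn (delta > 0); the
    opposite direction mirrors the result."""
    if abs(delta) < 1e-6:
        return 0.0, 0.0                                   # straight ahead
    R = wheelbase / math.sin(delta)                       # Eq. 5.4
    delta_in = math.atan(wheelbase / (R - axle_width / 2))   # Eq. 5.5
    delta_out = math.atan(wheelbase / (R + axle_width / 2))
    return delta_in, delta_out
```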
Figure 5.12: Illustration of Ackermann steering, CC: center of turning circle.
5.5.6 Sensors
Four LiDAR sensors are mounted on the truck, one in each corner as described in
section5.3.5.1 - Light Detection And Rangingtogeta360◦ viewofthesurroundings.
The modelled version uses a pre-existing ROS package and is set to have the same
properties as the actual LiDARs used.
The four cameras are mounted on the cab to get a full 360◦ view. The placement
of the cameras has been varied to find the best positioning, both in regard to covering
as much of the vehicle's surroundings as possible and to giving the operator a good
sense of position. The camera models in Gazebo are simple and cannot fully model
the used cameras. The basic properties are modeled, such as the resolution and
frame rate, 1920x1080 at 25 frames per second. But the warped image produced
from the fish-eye lenses is difficult to recreate, and a wide but straight image is
emitted. To achieve this warped image as in the real cameras the video feed is
processed in OpenCV. This produces a more realistic video feed at the expense of
image quality.
There is no modelled RTK-GNSS in the simulation; however, the position of the truck
can be measured directly in the Gazebo simulation. This is used as GNSS data even
though noise is not modeled. The Gazebo world uses a different coordinate system than
GNSS, where everything is measured in metres and an origin in the middle of the
world is used. A node that translates the GNSS data to metres with a configurable
origin will have to be used for tests in the real world.
5.5.7 World
The world used to test the system is a large asphalt field where a track has been
set up using a number of cones. The layout can be viewed in Figure 5.13 and is
designed to represent tunnels or partly narrow roads. The cones are tall enough for
the LiDAR sensors to recognize them.

Figure 5.13: Layout of the track used for testing

The course starts at the bottom left with two
curves where the truck will drive autonomously until it reaches an obstacle in its
path. At this point the track is a bit wider. It will then stop and the operator will
have to take over and drive around the obstacle manually. After the obstacle the
operator drives the truck back on the path to resume autonomous navigation. This
is supposed to simulate another truck at a meeting point and will test interruption
and resume of the autonomous functions together will manual control. The truck
will then drive autonomously two more turns until it reaches the end of the path.
The operator will then resume in manual control and reverse into a small space.
This is to test how much the surround vision and sensor data supports the operator,
after this maneuver the track is complete.
5.5.8 Performance
The model has been compared to data from a real FMX and behaves as expected.
Collisions work as expected when driving into obstacles. The truck accelerates to 30
km/h in about 6 seconds. A real FMX does this in about 5 - 8 seconds depending
on engine and load. The turning radius is 11 meters which is on par with the real
truck [55].
6 Results
It has been found that in this application the most crucial functionality for
teleoperation is when an obstacle is in the path of the autonomous truck. The
teleoperation functionality has also been used to define new autonomous functions for
repetitive tasks, which has proven to work well. This could be a task where one part is
simple and repetitive and another part is more challenging, for instance driving a long
road and then emptying the load in varying places. It has been shown to be effective
to define an autonomous task for the driving, and when the truck is at the unloading
place, where the vehicle is not capable of unloading autonomously, an operator can
handle it via teleoperation.
Teleoperation can also be used to recover an autonomous vehicle that has either been
damaged or lost track of its position in the map. However, if sensors are damaged it
can be harder to assess the state of the vehicle and determine if it is safe to operate
without causing more damage to the vehicle or the surroundings. If the truck has
lost its position in the map, it can be more difficult for the operator to drive it since
the aid of the map will be lost.
When using teleoperation the direct coupling to the controls is missing and the
somatic senses cannot be used while driving. Many industrial vehicles today have a
mechanical connection to the steering and pedal controls. Haptic feedback could be
implemented to address this problem. New machines and vehicles coming onto the market
have started to use steer-by-wire systems where the controls are sensors that have
artificial feedback from electric motors. Using this same feedback in a teleoperation
setting could solve this disconnection, though latency can be a problem.
6.1 Standards and System Layout
There are several standards associated with teleoperation and remote controlled
vehicles such as ASTM E2853-12 [56] which defines a test method to evaluate how
well a teleoperated robot can navigate in a maze, or ISO 15817 [57] which defines
safety requirements for OEM remote controlled earth moving machinery. Neither of
these nor any other standard found applies to this prototype. Standards regarding
communication or how autonomous industrial vehicles communicate could not be
found. What was found is that when not using proprietary solutions the Robot
Operating System (ROS) is the most popular solution in the industry when creating
autonomous vehicles.
Building the system in a modular fashion with a node for every function makes it
simple to exchange only the parts that are specific to a certain vehicle or application.
For instance, a node translating a desired yaw angle of the vehicle into a steering wheel
angle can easily be exchanged for a node calculating the two inputs to a set of tracks
on an excavator. This modular architecture also makes it easier to improve the system
by upgrading single nodes and makes it more stable since if one node crashes, it
can be restarted while the rest of the system keeps running. However, building the
system with many nodes can add significant performance overhead to the system
since the nodes have to be synchronized and communicate with each other with
different message types.
6.2 Autonomous synchronization
The vehicle has two modes of control, manual and autonomous, and can also be set in
a stopped mode. These states together with their transitions can be seen in Figure 6.1.
Initially the vehicle is in stopped mode and from there autonomous (transition a) or
manual control (transition b) can be set. When in manual control the operator has
full control over the vehicle. The auto-brake system from the autonomous driving
system is still active, so if the operator is about to collide with something the
vehicle will stop. This can be overridden if, for instance, a LiDAR sensor is broken
and giving false readings that make the truck stop. These states are Manual Safe
and Manual Unsafe in Figure 6.1, with the transitions h and i. Similarly there is
Autonomous Safe which is autonomous control with auto-brake for obstacles and
Autonomous Unsafe that does not brake automatically. This state is never used
and therefore forbidden. When stopped in the manual modes, stop mode can be
entered via transition c or j. If the truck is not stopped when stop mode is requested,
manual mode will be entered via transition e or i. To start autonomous navigation
Figure 6.1: A state diagram showing the modes of control and autonomous syn-
chronization. The states are Manual Safe, Manual Unsafe, Autonomous Safe,
Autonomous Unsafe and Stop. The transitions are described in 6.2 - Autonomous
synchronization
a path is chosen from the existing pre-recorded ones. The truck needs to be driven
to the start of the path and stopped in order to go to Autonomous Safe state via
transition a. If the truck is not aligned correctly it cannot enter autonomous mode
and it will stay in stopped mode via transition f.
A request for manual control can be sent while the vehicle is driving autonomously. The
vehicle will then come to a stop and then switch to manual control for a safe hand
over; this is transitions d and b. This can be overridden if the operator notices
an emergency and has to take control immediately to prevent an accident as seen in
transition g. To resume the autonomous navigation after manual control the truck
is driven onto the current path and stopped again (transition c), and a request can
be sent to the system to regain autonomous control (transition a). Similarly as
when starting an autonomous task the truck has to be aligned correctly. When the
autonomous task has ended the vehicle will stop, transition (d) to stopped mode
and wait for new commands.
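A compact sketch of the mode logic described above is given below. The transition guards (vehicle stopped, aligned with the path, emergency override) are reduced to booleans and the request strings are illustrative; the real system evaluates these conditions from sensor and path data.

```python
# Sketch of the control-mode state machine in section 6.2.
from enum import Enum, auto

class Mode(Enum):
    STOP = auto()
    MANUAL_SAFE = auto()      # manual control, auto-brake active
    MANUAL_UNSAFE = auto()    # manual control, auto-brake overridden
    AUTONOMOUS_SAFE = auto()  # autonomous control, auto-brake active

def next_mode(mode, request, stopped=False, aligned=False, override=False):
    """Return the next mode for a request, approximating transitions a-j."""
    if mode is Mode.STOP:
        if request == "manual":
            return Mode.MANUAL_SAFE                       # transition b
        if request == "autonomous" and stopped and aligned:
            return Mode.AUTONOMOUS_SAFE                   # transition a
    elif mode is Mode.MANUAL_SAFE:
        if request == "stop" and stopped:
            return Mode.STOP                              # transition c
        if request == "disable_autobrake" and stopped:
            return Mode.MANUAL_UNSAFE                     # transition h
    elif mode is Mode.MANUAL_UNSAFE:
        if request == "enable_autobrake":
            return Mode.MANUAL_SAFE                       # transition i
        if request == "stop" and stopped:
            return Mode.STOP                              # transition j
    elif mode is Mode.AUTONOMOUS_SAFE:
        if request == "manual" and override:
            return Mode.MANUAL_SAFE                       # transition g (emergency)
        if request in ("manual", "stop") and stopped:
            return Mode.STOP                              # transition d
    return mode                                           # no valid transition
```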
6.3 Evaluation
The evaluation is performed inside the simulation environment described in 5.5 -
Gazebo Simulation using the predefined course. Different support functions, two
types of controls, the impact of varying amounts of latency and frame rates are
tested by letting a number of test subjects drive the course. They were timed,
their behaviour was observed and afterwards they were interviewed. The system
requirements specified in chapter 4 - System Requirements are verified to ensure that
the system and simulation are suitable for this evaluation. The results can be seen in
Table 6.1.
As can be seen, most of the requirements are fulfilled, apart from the fact that neither
gearbox control nor handbrake is implemented in the simulation model. Also, indication
of the attitude of the vehicle has not been implemented, since the tests are performed
on a flat surface.
6.3.1 Driving Experience and Support Functions
Running the simulation using the predefined cone track with all the supporting
functions switched on has been shown to work well. The natural choice is to use the
stitched camera image most of the time while driving the vehicle. But when driving
through narrow corners and close to obstacles the support of maps and proximity
sensors helps to inform about the surroundings for precision driving. Turning off the
support functions and only using the camera feedback works but causes the operator
to slow down slightly in order to pan around in the 360◦ video to get an overview
of the vehicle placement.
Generally, users kept the video feed pointed straight ahead and only panned around
when reversing or in a tight passage when the map with the distance indication
was missing. The benefits of the stitched 360◦ video feed compared to a number of
fixed cameras in strategic places that can be toggled between is not obvious. The
Table 6.1: Requirements on the system for teleoperation, priority, verification and results

Criteria                            Value                  Variable   Priority   Verification   Fulfilled
Autonomous synchronization
  Manual takeover from autonomous                                     1          S              Yes
  Resume autonomous after manual                                      1          S              Yes
  Autonomous start                  From path start                   1          S              Yes
                                    Anywhere on path                  2          S              No
Autonomous tasks
  Record new paths in
  teleoperation mode                                                  2          S              Yes
Communication link
  Latency                           Max 20 ms              Yes        1          L              Yes
  Data capacity                     Min 17 Mbit/s          Yes        1          C              Yes
Orientation Map
  Fixed map and rotating vehicle                           On/off     2          S              Yes
  Rotating map and fixed vehicle                           On/off     2          S              Yes
  Representation of LiDAR data                             On/off     2          S              Yes
Sensor data presentation
  Speedometer                       Visible                           1          S              Yes
  Vehicle attitude                  Visible at danger                 3          S              No
  Distance to obstacles             Visible when close                2          S              Yes
  Proximity warning                 Visible when close     On/off     3          S              No
Teleoperation
  Speed limit                       30 km/h                           1          S & T          Yes
  Desired steering angle                                              1          S              Yes
  Desired acceleration                                                1          S              Yes
  Desired braking                                                     1          S              Yes
  Gearbox control                                                     2          S              No
  Parking brake control                                               2          S              No
  Control types                     Steering wheel                    1          S              Yes
                                    Gamepad, Joystick                 2          S              Yes, No
Video
  Latency                           max 500 ms             Yes        1          I              Yes
  Frame rate                        min 15 FPS             Yes        1          I              Yes
  Field of view                     360◦                              1          S              Yes
  Image quality                     Road sign, 15 metres   Yes        2          T              Yes

T = Live test, S = Verify in simulation, I = Implement meter, L = Measure with ping, C = Measure with iperf
advantage of this technology is probably much greater if combined with an HMD to
create a more virtual-reality-like cockpit.
When using only the map while driving, the operator tends to lower the speed around
the course. The rotating map appears to be more convenient since the steering inputs
will always be the same when turning. With the fixed map and the vehicle rotating,
the operator rotates the map mentally, and sometimes left becomes right and vice
versa. This result has also been found in earlier studies [48]. When using the fixed
map, corner cutting was more frequent, causing more cones to be hit than with the
rotating map, even at lower speeds.
The actual size of the vehicle was difficult to perceive using only the cameras, especially
the width of the vehicle when driving on narrow roads. Since the cameras are
mounted on the cab, the actual truck is not visible while driving. Therefore two
lines are introduced in the front camera image to point out the width of the vehicle.
If obstacles are outside of these lines, there will be no impact.

When using a gamepad for control input, the test results depend on the experience of
the driver. A driver who has played a lot of video games can skilfully control the
truck with the gamepad, while an inexperienced driver tends to use the joystick
inputs at full strength. Using a steering wheel, drivers tended to use the controls in
a more conservative manner, leading to more precise control. Also, using the wheel,
drivers did not expect the steering to react immediately, as was the case with the
gamepad. This is believed to be because moving the joystick from full left to full
right only takes a split second, whereas with the steering wheel it takes around a second,
and the actual wheels of the truck even longer.
The findings show that the driver adapts to different scenarios, and after some
practice the different support features tend to be of less use. The driving speed also
increases after a few laps around the course since the driver gets used to the
controls and starts to learn the course. For driving longer distances the camera view
is beneficial over just using the map, since the speed is higher. However, for just manoeuvring
around an obstacle and then continuing autonomous driving, a map with range detection
is sufficient to handle the task. Since the LiDAR sensors only measure in a 2D plane
and have a range of 20 metres, relying only on the predefined maps and sensors can
be dangerous. Small objects that do not reach up to the sensors cannot be seen,
for instance a fallen cone. When driving in a tunnel where the walls are not smooth,
the LiDAR sensors may detect an indentation and therefore sense that the tunnel is
wider than it actually is. Therefore the use of several different sensors and support
functions, letting the operator interpret and combine them, is safer.
6.3.2 Impact of Latency
Tests show that for latencies smaller than around 300 ms the drivers can compensate
for the latency and there is not much change in efficiency and control. As can be
seen in Figure 6.2, the effect is an 18 % increase in completion time around the course
when introducing 250 ms of latency. As the latency reaches above 300 ms, drivers
tend to control by issuing an input and then waiting for the effect before the next
input is issued, known as stop-and-wait behaviour. This can be seen as a jump
in Figure 6.2 between 250 and 500 ms. With 500 ms latency the completion time
increased by 47 %, and by 58 % at 750 ms latency. The degree of stop-and-wait
control increases with the amount of latency as well. During the tests the
vehicle was controllable up to 1000 ms of delay; with higher latency nobody could
complete the course. It was noticeable that driving in constant-curvature corners
was easier than on narrow straights, since it was difficult to keep the vehicle driving
in a straight line. The driving tended to be "snake-like" and the amplitude of the
oscillations increased with latency since the driver tends to overcompensate the steering
input.
Figure 6.2: Completion time increase in percent due to higher latency.
When latencies increased, the driver tended to drive more slowly through the course
since the difficulty increased. This led to few violations as the latency increased, up
to the point above 1000 ms where the course became undrivable. One of
the more surprising discoveries was that when the latency increased, the speedometer
would aid the driving, since the perception of the current speed was lost when latency
was introduced. By using the knowledge of the speed, the right amount of steering
and throttle/brake input could be applied to the vehicle to complete the course.

Because the stitched image is created using the computer in the vehicle and only the
part of the image the operator looks at is sent back, the latency affects the control
of this as well. This made it very hard to pan precisely in the image, and a majority
of the test subjects found it harder to control the camera angle than to control the
vehicle at large latencies.
The cameras capture images at 25 frames per second and, by lowering the frame
rate, the tests have shown that the controllability of the vehicle does not decrease
with frame rate as long as the frame rate stays over 10 FPS. However, during tests
with low frame rates drivers report getting more mentally exhausted and needing to
focus more to achieve the same results as with a higher frame rate. The distance
the vehicle travels between two frames at an acceptable frame rate is significantly
lower than the distance travelled before the user can observe it due to an acceptable
latency. This can be seen in Table 6.2: for instance, when the speed is constant at
30 km/h and the frame rate is as low as 10 FPS the distance is reasonably small
(below one metre), compared to a small latency of 250 ms where the truck is 2.08
metres ahead of the video stream.
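The relation behind these numbers is simply the vehicle speed multiplied by either the
time between frames or the latency. A minimal Python sketch, assuming a constant
speed of 30 km/h (the frame rates and latencies are those used in Table 6.2):

    # Distance travelled between video frames and during a given latency,
    # assuming a constant vehicle speed of 30 km/h.
    speed_ms = 30.0 / 3.6                       # ~8.33 m/s

    for fps in [25, 20, 15, 10, 5]:
        print(f"{fps:2d} FPS  -> {speed_ms / fps:.2f} m between frames")

    for latency_s in [0.250, 0.500, 0.750, 1.000]:
        print(f"{latency_s * 1000:4.0f} ms -> {speed_ms * latency_s:.2f} m behind the vehicle")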
The test subjects preferred driving with a lower frame rate compared to larger latencies.
If the communication link cannot transfer the required amount of data, lowering the
frame rate could therefore be one way to keep latency low and consequently keep
driveability higher.
Table 6.2: Traveling distance between video frames and at different latencies at
30 km/h.
FPS Distance [m] Latency [ms] Distance [m]
25 0.33 250 2.08
20 0.42 500 4.17
15 0.55 750 6.23
10 0.83 1000 8.33
5 1.67
6.4 Results Summary
The research questions stated in section 1.2 - Main Research Questions are answered
here with summarized answers referring to the rest of the paper.
How shall camera images, maps and sensor data be presented in order to
maximize the safety and efficiency of the operation? For this application
it was found that the 360° video was not utilized to its full potential, see 6.3.1 -
Driving Experience and Support Functions. A rotating map was also preferred to
a fixed map with a rotating vehicle. The LiDAR data drawn in the map, described in
section 5.3.3 - Maps and section 5.3.5.1 - Light Detection And Ranging, was found
to work well.
At what level does the operator control the vehicle? As if sitting inside, or
are more high-level commands (i.e. "Go to unloading location") issued?
How do delays in the communication channel affect the choice of control?
Because of the given implementation of the autonomous functions, more high-level
commands could not be tested. However, this is discussed in chapter 7 - Conclusion
& Future Work. It was found in this application that when driving manually, 300
ms was an acceptable latency. Above this the operation became less fluent,
see section 6.3.2 - Impact of Latency.
Are there existing standards for remote control of autonomous vehicles?
There exist standards relevant to teleoperation and autonomous control, mostly
about testing methods, which do not apply to this project. No standards for
communication were found, but one of the proposed use cases for the fifth generation
(5G) wireless systems is communication with and between autonomous vehicles, see
section 4.8.3 - 5th Generation Wireless Systems. More standards are discussed in
section 6.1 - Standards and System Layout.
How can the system be scalable to a variety of different sensors depend-
ing on application, and what are the requirements of the communication
link for different types of sensors? By using a modular design where differ-
ent functions can be added or removed depending on vehicle and application. In
this application ROS has been used (see section 5.2 - Framework - Robot Operating
7
Conclusion & Future Work
In this thesis a prototype system for teleoperation has been developed, implemented
and evaluated for a normally autonomous vehicle. Instead of the usual procedure
of first remote controlling the vehicle and gradually letting it perform autonomous
functions, teleoperation has been added afterwards. This has given us the opportunity
to design a system with manual takeover from autonomous control as the primary
use case. Since the autonomous functions were already present, the autonomous/manual
synchronization was built around this system and its limitations. Since all autonomous
functions are pre-recorded, it is simple to return to the current autonomous task
after a manual intervention, because the path is always known. In a system where
dynamic path planning is done there is room to create a more extensive manual
intervention system, for instance marking preferred areas to drive in or areas to avoid
or drive around. This opens up many possibilities where the truck can be manually
controlled in different ways, but not necessarily manually driven. It also makes
the synchronization between manual and autonomous mode more complex because,
unlike in this case, it is not clear at all times what actually controls the vehicle.
Another simplifying factor in this application is that the paths do not overlap each
other. Therefore it is always clear where in the path it is desired to resume. If
the system is implemented on, for instance, an autonomous excavator, a recorded
path of the bucket will most probably overlap itself many times. Using this resume
approach would then yield the problem of where in the path the user wants to resume
if the bucket is placed where multiple segments of the path meet.
The autonomous vehicle has extra sensors for navigation and obstacle detection such
as LiDARs and GNSS. In addition, cameras are added for a surround view of the
vehicle and stitched together into a full 360° view that the operator can pan in. On
top of the video stream, maps, the offset to the chosen autonomous path and the speed of the
vehicle are overlaid. In this particular application, the full camera surround
view has not been fully utilized since the truck is mostly driven forward. One forward-
angled and one reverse-angled camera would have been sufficient. However, this
may not be the case when operating, for example, an excavator or a forest harvester,
which is often stationary with the work area all around the vehicle. It would be
interesting to use a head-mounted display with the camera surround view, which we
believe would utilize it better. It would allow the driver to actually look around
and mimic sitting inside the vehicle. In such a case more cameras around the vehicle,
and not just on the cab, would be beneficial to get a better 360° view.
One of the major difficulties in remote control and teleoperation is latency. Both in
the initial literature study and in our evaluation it was found that 1000 ms seems
to be the upper limit for operating a vehicle safely. However, we believe that this
is very application specific. Depending on the speed of movement and precision of
the vehicle, as well as the operational space, latencies are of differing importance. A
large tanker ship on open sea or a flying surveillance drone can handle higher delay
times with preserved control than an excavator or a mining truck in a narrow tunnel.
If latencies are too high for manual driving, it would be interesting to evaluate small
commands of higher-level manual control such as "Reverse 20 metres".
It was also found that having small latencies rather than a high frame rate was preferred.
Lowering the frame rate and image quality would keep latencies low. An
option to set these, or to automatically analyse the connection and adjust accordingly,
would probably benefit a system like this. Further investigation of haptic
feedback in the controls would be interesting if it is applicable in this type of teleoperation.
This requires, though, that latencies are kept small for it to function and actually aid
the driver when in manual control.
The next major step with the proposed system is to test it on a real construction truck
to verify that the results from the simulation correspond to reality.
Modeling and control of a crushing circuit for platinum concentration
Time dynamic modeling and MPC control of a tertiary comminution circuit
MARCUS JOHANSSON
Department of Product and Production development
Chalmers University of Technology
Abstract
In the platinum-rich country of South Africa, the British and South African registered
company Anglo American operates the Mogalakwena mine, which is the world's
largest platinum mine. The blasted ore from the mine pit is processed through a
series of crushing and milling stages. This master's thesis work has aimed to time
dynamically model and control one of these stages, namely the tertiary crushing stage.
This circuit includes an HPGR crusher closed with screens. A time dynamic model
of the crusher and the circuit has been built, and the tertiary circuit has thereafter
been calibrated and validated in this work. A Simulink model of the process has been
built and used for testing the performance of the circuit, using both the current control
setup and a newly developed MPC controller utilizing the FORCES Pro solver in
MATLAB Simulink. The simulations indicate a potential upside in circuit performance
to be achieved either by a change of screen decks or by introducing a new supervisory
controller and increasing the allowed tonnages.
Keywords: HPGR, Comminution, time dynamic modeling, MPC, Process control
1
Introduction
Practically all metals used in processes and products today have once been mined
and refined; this takes place in an industry known as minerals processing.
Different metals are found in different ores and in various concentrations. In order to
harvest the precious metals, the blasted ore needs to be reduced in size and gangue
particles removed to increase the concentration of the valuable metal. The industrial
process chain for size reduction and concentration is typical of a flow-based process
industry, where each machine used has different capabilities and is constrained in
different ways. This process is costly and in many cases requires processing of large
tonnages.
Platinum group metals, PGMs for short, include the following metals: ruthenium,
rhodium, palladium, osmium, iridium, and platinum, which are all noble
metals. The largest known reserve of PGMs is located in the Bushveld Complex
in the Limpopo Province in South Africa [16]. The Bushveld Complex consists of
three main areas, the western, eastern and northern Bushveld, in which the ore is high
in platinum concentration. Going north from the town of Mokopane on the northern
Bushveld, the Platreef is located, a 10-400 m thick seam in the ground that holds
platinum group metals [16]. The ore body in this area where the platinum is found
is sulfur rich, and the platinum is said to be contained in the grain boundaries [16].

The concentrator where this work has been carried out lies on the Platreef belt
in the municipality of Mogalakwena. At this location the platinum-rich ore is found
close enough to the surface to make it possible to mine it in an open pit instead
of underground. The open pit mine, Mogalakwena, is fully owned by Anglo
American and is the largest open pit platinum mine in the world as well as the
flagship platinum operation for Anglo American [1]. The platinum concentration
at Mogalakwena is relatively high, which combined with the open pit improves
the profitability of mining here. The Mogalakwena complex has two concentrator
plants, the South plant and the North plant.
The master's thesis work presented here is focused on the processing of platinum ore to
refine platinum from the ore body. The background, objective and structure of the
work are presented in this chapter. A general view of the subject is presented in
the background chapter.
1.1 Context
In a process industry where large amounts of material have to go through the process
every hour, equipment of high standard and adequate size is required. The
Mogalakwena North Concentrator is the largest and one of the most important concentrators
for Anglo American Platinum, and the success of this top asset is very important for
the company [1]. The performance of the plant is top priority and downtime is
very costly. The plant is designed in a single-stream fashion, implying that there
are many sections of the plant through which every piece of ore has to pass. There
are advantages and disadvantages to this type of plant structure; one advantage is
the use of large-scale, efficient equipment, while a disadvantage is
the sensitivity to equipment failure. The latter has been addressed by large silos in
between each section of the plant, providing some buffer time for the neighboring
section in the case of a breakdown.

Considering the above, a way to test new circuit configurations and control
strategies and study the response over time without impacting production can
be a very useful tool. A tool of this sort was built by Asbjörnsson [2] for the
secondary crushing circuit, section 405 at Mogalakwena North, and its use
has been successful for Anglo American Platinum, especially on the control side.
1.2 Objectives/ Problem
Anglo American suggested that a model similar to the one previously built for
section 405 be developed for the next section, the HPGR circuit, shown in Figure
1.1. The model should represent the circuit in its current configuration and fulfill
the requirements listed below.

• Model prediction within ± 10 % of the logged plant data
• Inclusion of silos before and after the HPGR section
• Replication of the local control loops used today

Sub circuit 406 does not today have any advanced controller supervising its
operation. There is, however, a separate setpoint calculation for the screen bin PIDs
to make sure the screen bin is not in danger of becoming full, which would force the
HPGR product belt to stop. An initial exploration of applying model predictive control
to the simulation model was also requested.
1.3 Research questions
Apart from developing and calibrating the model, the following questions will be
answered in this thesis.
1. What type of model characteristics are required for time dynamic simulations?
Figure 1.1: Schematic view of section 406 at Mogalakwena North concentrator.
The red markers are the positions of mass flow sensors throughout the section. The
sub circuit input is the HPGR feed of 1450 tph (0-45 mm, screened at 40/52), the
screen oversize (10-55 mm) is recirculated to the HPGR bin, the undersize (-10 mm)
leaves the section, and the sub circuit output is 1100 tph.
2. How can large variations and uncertainties in incoming feed and machine wear
be handled in order to increase the robustness of control system performance?
3. How can model predictive control be applied to a crushing circuit simulation?
Research Question 1 is to be answered through a literature review and during the
development of the new model; confirmation of whether the new model works is given
by the validation. Research Question 2 is to be answered by the use of simulations with
the advanced controller and the development of new controllers. Research Question
3 will be answered through the experience gained by applying model predictive control
to the simulation model.
1.4 Research approach
The research approach used in this work is based on a clear definition of the
problem, followed by a literature review to establish a view of what has been
done previously in the field. When the current state of the research regarding the
problem has been established, the list of things which need to be developed is clearer
and the work can be structured in an efficient manner.

The work includes one major dependency: the calibrated model needs to be
available to test the controller. The controller can be developed before then; however,
the model needs to be calibrated before any valid conclusions can be drawn from
the use of the controller.
2
Background
A brief description of the process at Mogalakwena North is given in this chapter,
followed by an introduction to time dynamic modeling, and finally some background
on the suggested control strategies to be used in this work along with the
time dynamic model.
2.1 Crushing at Mogalakwena North
Minerals and metals are found by exploration of new areas, where rock core
samples are taken and analyzed. If minerals of value are found, extraction can start.
The process of extraction usually starts with blasting, regardless of whether the mine is
open pit or underground. The blasted ore body usually consists of a wide size range of
particles. To extract the metallic minerals, the ore has to be concentrated. Depending
on the concentration in the original ore, the concentration process varies
slightly. At Mogalakwena, and at platinum mines in general, the concentration of
platinum is very low. When a newly commissioned mine opened on the farm next to
the Mogalakwena mine complex, the platinum concentration was estimated to 1.889
grams of platinum per metric ton of ore [35] in 2015. Platinum is found finely
disseminated in the ore body in very fine grains and requires a large reduction in size,
applying fine grinding down to an average particle size (by weight) of about 7 µm [10].
Achieving this size reduction is a heavy task and requires large machines and much energy.
This process is referred to as comminution [37]. A typical plant is divided into dry and
wet processing; this thesis will only handle dry processing and therefore in this case
only comminution. The topics described in 2.1.1, 2.1.2 and 2.1.3 are referred
to as crushing. At Mogalakwena North the HPGR crusher is both a tertiary and
quaternary crusher and will be the focus of this work.
In Figure 2.1 the dry section is illustrated, with block a) featuring the primary crusher
referred to as section 401, block b) the secondary crushers of section 405 and block
c) the HPGR circuit called section 406. Each of the blocks will be described in more
detail in this section.
2.1.1 Primary crushing
Primary crushing is done with a crusher that has a large intake and can handle rock
particles of sizes up to a couple of meters. At MNC a large gyratory crusher is used
to complete the first reduction step in open circuit. An open circuit implies that
there is no circulation of material back to the crusher. The ore is thereafter
transported on conveyors to the stockpile. The stockpile is pictured in Figure 2.2.

Figure 2.1: Schematic view of the dry section at Mogalakwena North concentrator,
including the equipment and the weightometer sensors in red.
2.1.2 Secondary crushing
The secondary crushing of the ore is achieved with cone crushers. A cone crusher
can handle a large variety of feed sizes, in this case ranging from 360 mm and
down. At Mogalakwena North three cone crushers are used in a closed circuit with
screens. The first two crushers crush mainly fresh feed from the primary
crusher and the third crusher the circulated +55 mm material. The product from
the cone crushers is then screened and transported to the HPGR fresh feed silo. The
secondary crushing section is today controlled by a controller developed with the
help of the previously developed time dynamic simulation model by Asbjörnsson [2]
and validated by Brown and Steyn [27]. The secondary crusher lineup consists of
three Hydrocone H8000 crushers made by Sandvik. The crushers have a large capacity,
and the current operating point of the plant allows for the use of two crushers at
a time. This setup is beneficial for the plant and the crushers: if crushers one
and three are used, a wall is created between the crushers, separating the
circulating load from the fresh feed. The secondary circuit is today supplying the
HPGR section, box c) in Figure 2.1, with material screened on 55 by 55 mm screen
decks.
2.1.3 Tertiary crushing
The final sub circuit on the dry side of the process is the HPGR section, also referred
to as section 406. This section is modeled, simulated and controlled in this work.
The section houses a ThyssenKrupp Polycom 16/22 high pressure grinding rolls
crusher, or HPGR crusher for short. The product from the secondary crusher circuit
is stored in a silo and is fed to the HPGR bin, where it is combined with the oversize
from the screens. The HPGR bin is equipped with two variable speed drive feeders
which feed onto a variable speed belt. The HPGR crusher should be running
in choke-fed conditions, implying that there should always be material in the chute
above the crusher. The chute hangs in load cells, which measure the weight of
the chute. The chute is pictured in Figure 2.4. The HPGR has been upgraded from
the original commissioning of the plant, now having larger motors driving the rolls
and variable speed drives on the rollers. The original setup of the
HPGR circuit is described by Rule [29] in a paper written after the commissioning of
the plant in 2008. The product of the HPGR is transported to the tertiary screens,
which to date have apertures of 10 by 10 mm. The oversize is conveyed back to the HPGR
bin, and the passing material goes into two silos supplying material to the wet
process and the primary mills.

The HPGR crusher crushes the ore by passing it between two pre-tensioned rollers,
in this case pressurized with 160 bar hydraulic pressure with active pressure and
dampening control. Each side of the crusher has two plunger cylinders, and a
controller actively differentiates the pressure between the two sides to keep the
gap constant between the two rollers over the entire width of the rolls. The hydraulic
cylinders on the left side of the machine (seen from above) are shown in Figure 2.5.
The hydraulic system is protected by two nitrogen accumulators, one on each side
of the machine; these are installed to minimize the effect of high-pressure spikes,
which can appear in hydraulic systems. The hydraulic system is also equipped with
an active dampening controller to protect the machine in case of tramp metal or
overload.
2.2 Time dynamic modeling
Time dynamic modeling aims to describe how a technical system changes over time.
It ranges from the solution of the differential equation that describes the free fall of
an object to the state of an entire process plant. The applications of dynamic
modeling are many and are usually related to physics, chemistry, control, mathematics
or other technical fields where there is a need to describe time-dependent
and varying processes. In comminution the topic is introduced in [31], outlining
the basics of modeling comminution circuits, much in line with the approach by [2].
The first dynamic models and approaches to describing comminution over time were
made within milling, to mention a few: Liu [18], Salazar [30] and Rajamani [28],
of which the more recent ones utilized a Simulink environment for the simulation, as
suggested in [31] and by Asbjörnsson [2]. The modeling work done by Asbjörnsson
at Mogalakwena North on the secondary crushing circuit produced a number of common
components which can be used for the HPGR section as well; these include conveyors,
screens, and the bin structure. The remaining component, the HPGR crusher, has
been developed in this work. The state of the art in HPGR crusher modeling is
reported upon in Section 3.1.5.
2.2.1 The HPGR crusher
The development of the HPGR crusher can be related to the work done by Klaus
Schönert on the breakage modes of rock and the application of his results [32], [33].
Schönert concluded that single particle breakage is the most effective mode of breakage
and that the second most effective breakage mode, regarding energy, is confined bed
breakage. To achieve the bed breakage mode, a conventional roller crusher was fitted
with hydraulic cylinders to increase the pre-tensioning between the rollers.

The HPGR first established itself as a crusher in the cement industry in the 1990s,
later spreading into minerals processing applications [22]. The potential of the
crusher in the minerals processing industry has been highlighted by multiple authors,
including Rule [29], Ntsele [24] and Powell [25]. A schematic view of the crusher is
shown in Figure 2.6. The crushed ore is compressed to the extent that cakes form
in the product. These cakes are brittle and include micro-cracked rocks [37] that in
most cases will separate into very fine particles once shaken on a screen or by the fall
into the bin, in hard rock applications similar to MNC [22].
2.3 Control models
Large scale processes with machines that have different operational windows and
parameters need control to function correctly. Control is implemented for multiple
reasons, for example to stabilize and achieve a smooth operation, to protect the
equipment and to ensure that the process product maintains a certain quality. Stabilizing
control is implemented at MNC as a layer of single input single output (SISO) control
loops running on a programmable logic controller (PLC). This computer handles
the protective part of the control and allows for basic stable operation; discrete logic
and start-up sequences are also included in this layer.

Figure 2.6: A drawing of an HPGR crusher based on the FLSmidth design, by J.
Quist [26]

The common practice in the
industry is described by Tatjewski [36]. On top of this basic control layer, there is
a possibility to add more advanced controllers, typically supplying the set points, or
references, to the basic layer.

The types of controllers used for optimizing and balancing on a higher level than
the SISO loops are usually grouped into advanced controllers. One of the best
established and most powerful types of controller in process control is the model
predictive control scheme, or MPC for short. The predecessor to MPC was developed
in the petrochemical industry at the end of the 1970s. This type of controller is
called the dynamic matrix control (DMC) controller and uses step response models to
predict the future state of the process. Cutler and Ramaker [11] introduced this
scheme at the Shell refinery in Houston, Texas. The basic idea was to use the step
response models to predict the future of the process and choose the control signals or
set points optimally. The first approach was somewhat primitive and could not handle
constraints very well. Increases in computational power have successively increased
the capabilities of the scheme and allowed it to approach more advanced problems. DMC
evolved into generalized predictive control (GPC) and finally into MPC, which is the
common form today. MPC software is readily available for businesses to buy and
apply to their processes. A summary of MPC controller development and an outlook
into the future is given by Morari [21], outlining the future regarding more complex
models, constraint handling, and robustness. With more computational power, model
predictive control has the ability to solve control problems with very demanding time
requirements.
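To make the receding-horizon idea concrete, the Python sketch below applies an
unconstrained finite-horizon predictive controller to a simple first-order plant. It is
only an illustration of the principle; the plant model, horizon length and weights are
assumed for the example, and it is not the controller developed in this work.

    import numpy as np

    # Assumed first-order plant: x[k+1] = a*x[k] + b*u[k]
    a, b = 0.9, 0.1
    N = 10        # prediction horizon
    r = 1.0       # setpoint
    lam = 0.01    # penalty on control effort

    def mpc_step(x0):
        """Solve the unconstrained finite-horizon problem as a stacked least squares."""
        # Prediction over the horizon: x = F*x0 + G*u
        F = np.array([a ** (i + 1) for i in range(N)])
        G = np.zeros((N, N))
        for i in range(N):
            for j in range(i + 1):
                G[i, j] = a ** (i - j) * b
        # Minimize ||G u - (r - F x0)||^2 + lam * ||u||^2
        A = np.vstack([G, np.sqrt(lam) * np.eye(N)])
        y = np.concatenate([r - F * x0, np.zeros(N)])
        u = np.linalg.lstsq(A, y, rcond=None)[0]
        return u[0]                     # receding horizon: apply only the first move

    x = 0.0
    for _ in range(30):
        x = a * x + b * mpc_step(x)     # closed loop
    print(f"state after 30 steps: {x:.3f} (setpoint {r})")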
3
Methods
The following chapter is divided into modeling and control, where the modeling
techniques are discussed first, followed by the controller development.

The task of modeling section 406 can be divided into two different types of work:
modeling related and calibration or tuning related. The modeling work includes the
following:
• HPGR crusher model
• Silo models
• Bin models
• Conveyor models
The above have to be developed from the ground up or largely modified from previous
work by Asbjörnsson. The most important of the above is the HPGR model, as
described in Section 2.1.3.

The second task, the calibration of the model, needs to be done to make sure the
model corresponds to the process itself. Calibration includes balancing the mass
flow in and out, particle size prediction and bin levels within the circuit.
3.1 Circuit modeling
The first aim of the master's thesis project was, as described in Section 1.2, to model
the HPGR section at the Mogalakwena North Concentrator. The approach to how
this is done is described in this section, along with explanations of how the material
handling equipment and screens are modeled.
A set of requirements for the final model was also formulated by Anglo American
Platinum; these are listed in Section 1.2.
3.1.1 Prerequisites
In each sampling instant, at every point, the circuit has a certain mass flow which
has a set of properties and a particle size distribution. To track particle size, mass
flow and properties of the material, a data structure is needed to specify what
information should follow with each time step of the model. The structure used in
this work was introduced by Asbjörnsson [2]. This was done to facilitate
the connection of the model of the secondary crushers to the one being developed
in this work. The modeling work was therefore done in MATLAB Simulink for
compatibility reasons.
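The exact field layout of the Simulink bus is not reproduced here; purely as an
illustration, a material packet passed between blocks could be represented as in the
Python sketch below, where the sieve series and field names are assumptions made
for the example.

    from dataclasses import dataclass, field
    import numpy as np

    # Assumed sieve series [mm] used to discretize the particle size distribution.
    SIEVES_MM = np.array([0.1, 1, 5, 10, 20, 30, 45, 55])

    @dataclass
    class MaterialPacket:
        """Material state at one point in the circuit for one time step (illustrative)."""
        massflow_tph: float                              # instantaneous mass flow [tph]
        psd_cum: np.ndarray                              # cumulative % passing per sieve
        properties: dict = field(default_factory=dict)   # e.g. bulk density, moisture

    packet = MaterialPacket(
        massflow_tph=1450.0,
        psd_cum=np.array([5, 20, 45, 60, 75, 85, 95, 100], dtype=float),
        properties={"bulk_density_t_per_m3": 1.9},
    )
    print(packet.massflow_tph, len(SIEVES_MM), packet.psd_cum[-1])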
3.1.2 Conveyors and feeders
Section 406 has two different types of conveyors and one type of feeder. The
two conveyor types are fixed speed conveyors and variable speed conveyors. All
conveyors are fixed speed except 406CV002, which is the conveyor feeding into the
HPGR chute. This conveyor can speed up or slow down depending on the weight of the
chute to ensure the HPGR crusher stays choke fed. All feeders are equipped with
variable speed drives to adjust the output of the feeder. The conveyor models are
described by Asbjörnsson in [4] and [2].
A regular fixed speed conveyor introduces a delay in the process; this is modeled as
a pure delay, using standard Simulink blocks for delaying a signal. The time delay
can be expressed with Equation 3.1
t_{delay} = \frac{L_{conveyor}}{v_{conveyor}} \qquad (3.1)

where L_{conveyor} and v_{conveyor} are the length and speed of the conveyor.
The variable speed conveyor is modeled with a state space that keeps track of the
material on the conveyor as a function of conveyor length. This conveyor model
allows for stopping the conveyor without losing any mass, which would otherwise be
the case with the fixed speed conveyor model.
On section 406 there are six variable belt feeders, pulling material out of the bins
and silos on the section. These are controlled with PID loops supplying the feeder
with a percentage of its maximum belt speed. Since all feeders have weightometers
close to them, the material being fed to the process for a specific value
of the feeder control signal can be plotted. The output of the feeders is approximated
to be linear with belt speed, and a straight line was fitted for each of the feeders.
Figure 3.1 shows the relationship between the combined rate of feeders 406FE001
and 406FE002 and the mass flow recorded on weightometer 406WIT010B. The same
method was used for feeders 406FE003, 406FE004, 406FE005 and 406FE006 as
an initial measure. The feeder output model is based on Equation 3.2. Since the
feeder model includes two parameters to be tuned, the correct response has to be
obtained from more than a single operating point to make sure the rate and the
offset are in parity with the real process. Based on this reasoning, the feeder rates
were estimated from all training datasets and averaged. The offset term in the linear
equation was kept fixed to an initial guess and the rate used to calculate a rough
estimate. These averages were then used in the first iteration of the model calibration.
y = kx+m (3.2)
The output y of the feeder was formulated in the form of Equation 3.2. The feeder
model includes no time delay, even though there is some delay in the real feeder,
especially during start-ups and if the bin or silo has been empty.
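A fit of this form can be reproduced directly from logged feeder rate and weightometer
signals; the Python sketch below uses synthetic data as a stand-in for the SCADA
signals (the line fitted to the real data in Figure 3.1 is y = 14.4197x + 175.4223).

    import numpy as np

    # Synthetic stand-in for logged data: feeder rate [% of max belt speed]
    # and weightometer mass flow [tph]; in the thesis these come from the SCADA system.
    rate_pct = np.array([20, 40, 60, 80, 100, 120], dtype=float)
    massflow_tph = np.array([470, 760, 1040, 1330, 1620, 1900], dtype=float)

    # Least squares fit of y = k*x + m (Equation 3.2).
    k, m = np.polyfit(rate_pct, massflow_tph, deg=1)
    print(f"fitted feeder model: y = {k:.2f} * x + {m:.1f}")
    print(f"predicted mass flow at 75 %: {k * 75 + m:.0f} tph")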
Figure 3.1: Example of a feeder rate in % of maximum belt speed plotted against
feeder output (mass flow [TPH]) for an 8 h dataset, where the red line is the least
squares fit of a straight line to the data (y = 14.4197x + 175.4223).
A possible extension to this model would be to include a non-linear term saturating
the output, which can be seen in some instances as in Figure 3.1, and to implement a
check of whether there is enough material in the bin to utilize the entire feeder capacity.
The latter check was implemented on the inflow to the circuit, FE001 and FE002.
3.1.3 Silos and bins
The circuit 406 includes three different bins and silos. The modeling of the two smaller
bins has been based on Asbjörnsson's bin model presented in [3]. The modeling of each
of the three different material storage containers is described below.
3.1.3.1 Silo 406
The silo storing the secondary product is a 10 000 ton silo and, due to its size, it has
been modeled as a layered bin, as shown in Figure 3.2. The silo has been divided
into 100 layers; the material is mixed within each layer, resulting in one particle size
distribution, one set of properties and a total mass for each layer. The first material
to enter the bin is the first to exit. In other words, when the bottom layer has been
emptied the layer above will be used, and the material is successively moved downwards
in the zone structure as indicated in Figure 3.2.
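As an illustration of the first-in-first-out layer structure, the following Python sketch
(with simplified bookkeeping and assumed layer contents, not the actual Simulink
implementation) withdraws material from the bottom layer and lets the layer above
take over once a layer is emptied.

    from collections import deque

    # Each layer holds a mass [t] and, in this simplified sketch, a PSD tag;
    # the real model tracks a full particle size distribution and property set per layer.
    silo = deque()   # index 0 = bottom layer (first in, first out)

    def add_layer(mass_t, psd_tag):
        """New material enters at the top of the silo."""
        silo.append({"mass": mass_t, "psd": psd_tag})

    def withdraw(mass_t):
        """Withdraw material from the bottom layer(s); emptied layers are removed."""
        taken = []
        while mass_t > 0 and silo:
            bottom = silo[0]
            take = min(mass_t, bottom["mass"])
            bottom["mass"] -= take
            taken.append((take, bottom["psd"]))
            mass_t -= take
            if bottom["mass"] <= 0:
                silo.popleft()            # layer emptied, the layer above becomes bottom
        return taken

    add_layer(100, "coarse")
    add_layer(100, "fine")
    print(withdraw(150))                  # 100 t of 'coarse', then 50 t of 'fine'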
Figure 3.2: Illustration of the structure used for the two silos. Incoming material
(mass, PSD and properties) enters at the top, each zone stores its own level, mass,
PSD and properties, and material is withdrawn from the bottom.
3.1.3.2 HPGR feed bin and screen bin
The two smaller bins on section 406 are the HPGR feed bin and the screen bin;
these were modeled with a structure introduced by Asbjörnsson [3]. The two bins
are pictured in Figure 3.3, where they work with an active volume, illustrated by
the striped pattern. Both bins were assumed to consist of sections, each with its
own volume. The middle section receives the incoming material, and the two
outer sections are from where the material is withdrawn. Depending on the levels
in each section, material is transferred between the sections. The transfer, and when
it occurs, has been partly calibrated; however, it is a very difficult task and has been
secondary to the mass flow calibration.
The angle between each of the sections, noted α_{1,2}, is what determines the transfer;
if this angle is larger than the angle of repose of the bulk material, transfer between
the sections takes place. Equation 3.3 is the underlying calculation done to
determine the transfer, where δy is the difference in height between the outer section and
the middle section and δx is the distance between the center of the bin and the
center of the feeder. This distance is constant since the feeder has a fixed position
in the bin. The two outer sections have been modeled with a nonlinear shape at the
bottom; as illustrated in Figure 3.3, they have a cone at the bottom. This was a way
to be able to empty and refill the bins quickly, which can be observed in the process
data.
\alpha_{transfer} < \alpha_{1,2} = \tan^{-1}\left(\frac{\delta y}{\delta x}\right) \qquad (3.3)
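A Python sketch of the transfer condition in Equation 3.3 is given below; the angle of
repose used here is an assumed value, not a calibrated material parameter.

    import math

    def transfer_occurs(dy_m, dx_m, repose_angle_deg=38.0):
        """Return True if the inter-section angle exceeds the (assumed) angle of repose.

        dy_m: height difference between the outer and the middle section [m]
        dx_m: horizontal distance between the bin centre and the feeder centre [m]
        """
        angle_deg = math.degrees(math.atan2(dy_m, dx_m))
        return angle_deg > repose_angle_deg

    print(transfer_occurs(2.0, 1.5))   # ~53 degrees -> material is transferred
    print(transfer_occurs(0.5, 1.5))   # ~18 degrees -> no transfer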
In each bin, there are level sensors which measure the distance from the sensor to
the material level using an echo. These sensors are calibrated and have to be re-
calibrated over time. The readings are noisy and very sensitive to how the sensor
is positioned and aimed. The active volume was estimated using process data but
appeared to vary depending on the data set and not in an explainable manner.

Figure 3.3: Illustration of the bin structure used for the HPGR and the screen
bin. The striped areas illustrate the active volumes; observe that the illustration is
not to scale.
Apart from the section structure, there is a global first in, first out structure on both
bins. This structure works as described for the silo in Section 3.1.3.1.
3.1.4 Screens
The model of the two screens 406SC001 and 406SC002 is the same model
as used when the modeling was done for section 405 by Asbjörnsson [2]. The only
difference is that the aperture of the screens has been set to the size used today,
which is a 10 by 10 mm mesh of a polymer material. The screen model originates
partly from the work of Staffhammar [34] but has been adapted for use in time
dynamic simulations by Asbjörnsson.
3.1.5 High Pressure grinding rolls crusher
The heart of section 406 is the HPGR crusher. This crusher has been the subject
of study by many; a summary of the work done on HPGR modeling is given by
McIvor [20]. The models to date have been focused on particle size prediction and
throughput. Comminution modeling has in general been focused on steady state
simulations and therefore models of the HPGR do not include dynamic components,
such as a varying gap and roller speed. The closest to a dynamic response is the one
that can be observed in DEM simulations by Barrios [7] and Quist [26], where they
both have utilized the possibility in the DEM software to feed the forces from
the particles to a model describing the hydraulic system. The results are very high
fidelity model responses; however, the DEM calculations are too slow for process
simulations. The insights from DEM are nevertheless very fruitful for the modeling exercise of a
comminution machine.
The most influential models of HPGRs in the literature are based on the Austin
roller mill model, first developed for coal [5]. Morrell and Daniel [12], [23] and
Benzer [14], [8] have later developed this model with a focus on particle size and capacity.
The models are more descriptive than predictive, and in a process simulation
predictive and fast models are what is required.
3.1.5.1 The model structure
The HPGR model used for the circuit modeling in this work is based on a new
approach, combining mechanistic crusher modeling based on Evertsson's [15] cone
crusher model and Johansson's [17] jaw crusher model.
In HPGR modeling, when targeting the dynamics, multiple approaches
have utilized a spring-damper system to model the response from the hydraulic
system [6], [26].
The process model developed for this purpose aims to capture the dynamics in
roller speed changes, pressure, and incoming feed size changes. The model structure
used can be seen in Figure 3.4.
Figure 3.4: The model structure used in the HPGR block in the simulation model.
3.1.5.2 Crusher dynamics
The position of the roller is essential to estimate the throughput of the crusher. To
determine the floating roller's position, a free body diagram is completed, and the
force balance in the horizontal direction can be stated. In Figure 3.5 a) the free body
diagram is drawn, where F_h is the force from the hydraulic system.

Figure 3.5: a) forces in the x-direction acting on the floating roller, b) a symmetric
pressure distribution resulting in a distributed load on the floating roller, seen from
above.

The hydraulic
system is modeled to have a stiffness and a dampening effect. These forces are also
noted in the figure. The system is stiff and requires a sampling time
much smaller than the one used in the overall process model. The force component
from the stiffness is reset in each global sampling instance, implying that ∆x is set
to zero in each step of the process model's global iteration, and the velocity at the final
step is used as the initial condition in the next step. F_roller is the force from the
material due to the compression. The force balance is shown in Equation 3.4; the
equation describes the time varying motion of the floating roller in terms of
position, velocity, and acceleration. Equation 3.4 was converted into a state space
system for use in the model. The sampling frequency of the roller equation was set to
400 Hz in the discrete implementation of the model.
m\ddot{x} = F_h - \rho\dot{x} - k\Delta x - F_{roller} \qquad (3.4)
The component F_h is supplied externally via the hydraulic pressure, while F_{roller} is
estimated based on a discretization of the compression cycle and of the length of
the roller. The hypothesis is that at an angle α, noted in Figure 3.6, and below it,
the boundary condition between the roller and the material bed is assumed to be
no slip. The angle α is sectioned into smaller elements α'. The breakage is assumed
to be based on pure compression; at the position where the material first experiences
the no-slip condition the distance between the rollers is B, and the total
compression ratio can be expressed as a function of the operating gap. The relation
is shown in Equation 3.5.
C_{ratio} = \frac{B - gap}{B} \qquad (3.5)
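A minimal Python sketch of how the force balance in Equation 3.4 can be stepped at
400 Hz between two global process-model samples is shown below. The mass, stiffness,
damping and force values are placeholders for illustration, not the calibrated
parameters, and a semi-implicit Euler step is used instead of the state space form in
the thesis.

    def roller_step(v0, F_h, F_roller, dt_global=10.0, f_s=400.0,
                    m=45e3, k=5e8, rho=1e6):
        """Integrate m*x_dd = F_h - rho*x_d - k*dx - F_roller over one global step.

        v0        : roller velocity at the start of the global step [m/s]
        F_h       : hydraulic force [N] (held constant over the step)
        F_roller  : force from the material bed [N] (held constant here)
        dt_global : assumed global process-model sampling time [s]
        f_s       : internal sampling frequency of the roller equation [Hz]
        m, k, rho : placeholder mass [kg], stiffness [N/m] and damping [Ns/m]

        dx is reset to zero at the start of each global step, as described above;
        the final velocity is returned as the initial condition for the next step.
        """
        dt = 1.0 / f_s
        dx, v = 0.0, v0
        for _ in range(int(dt_global * f_s)):
            acc = (F_h - rho * v - k * dx - F_roller) / m   # Equation 3.4
            v += acc * dt                                   # semi-implicit Euler step
            dx += v * dt
        return dx, v

    dx, v = roller_step(v0=0.0, F_h=8.0e6, F_roller=7.5e6)
    print(f"gap change over one global step: {dx * 1000:.2f} mm, final velocity {v:.4f} m/s")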
Figure 3.6: a) The zone structure of the HPGR model, b) the hydraulic cylinder
setup and the introduced spring-damper component. p_h and D_p are the hydraulic
pressure and the plunger cylinder diameter respectively.
The throughput of the crusher is calculated as the mass of each zone times the
number of zones passing through the crusher per unit time. The mass of a zone is
modeled as the volume of the first zone times the bulk density of the material. If
the mass of each zone is saved during the process of compression, and assuming no
material exits at the sides, the total mass flow over time can be calculated with Equation
3.6.
\dot{m} = \sum_{j=1}^{n_{zones}} m_{zone1,j}(gap, \alpha') \qquad (3.6)
From Equation 3.6 it should be noted that m_{zone1,j} is a function of the gap and the
angle α', while the number of zones per unit time, n_{zones}, is a function of roller speed.
The roller speed can be changed stepwise with the global simulation time but stays
constant within the smaller steps inside the HPGR crusher module. This implies that
a re-sampling of the zone structure is done at each global sampling instance.
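As a rough Python sketch of Equation 3.6, the throughput then follows from the zone
mass and the number of zones passing per unit time, the latter being proportional to
the roller speed. The geometry and material values below are assumptions for
illustration only.

    def hpgr_throughput_tph(gap_m, roller_width_m, zone_length_m,
                            roller_speed_ms, bulk_density_t_m3):
        """Rough throughput estimate: zone mass times zones passing per second.

        The first zone is approximated as a box of gap x width x zone length;
        all values are illustrative assumptions, not the calibrated model.
        """
        zone_mass_t = gap_m * roller_width_m * zone_length_m * bulk_density_t_m3
        zones_per_s = roller_speed_ms / zone_length_m
        return zone_mass_t * zones_per_s * 3600.0    # t/s -> t/h

    print(f"{hpgr_throughput_tph(0.065, 2.2, 0.05, 1.6, 1.9):.0f} tph")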
To determine the force from the material on the roller, the feed material at MNC
was sampled and compression tests with a piston and die in a hydraulic compression
rig were done. The compression rig records the compression ratio along with the force
during the test, and the results were fitted to an exponential function. The test
was done for three different widths of particle size distribution and two different
maximum particle sizes. An example of the output from a piston and die test
is shown in Figure 3.7.
From the test data a double exponential function can be fitted; the fitting was done
using MATLAB to fit a function that minimizes the error between the function and
estimated to 45 tons. Quist has shown the force distribution along the roller using
DEM [26]. For this work, a second order polynomial was used to scale the force
along the roller to obtain the maximum force in the middle and a lower force at the edges.
The total response from the compression was tuned to correspond to a gap similar
to what the crusher uses in operation. Figure 3.5 b) shows the principle of the load
on the roller, where the polynomial was used to shape the pressure distribution.
This loading condition is noted in the literature to vary depending on the HPGR
crusher geometry, specifically whether side plates are used. At MNC there are
plates on each side of the rollers inhibiting material from escaping.
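The piston-and-die response can be fitted with a double exponential in a least squares
sense; the Python sketch below does this on synthetic data, since the actual test data
and fitted coefficients are not reproduced here.

    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp(c, a1, b1, a2, b2):
        """Force as a double exponential of the compression ratio c (illustrative form)."""
        return a1 * np.exp(b1 * c) + a2 * np.exp(b2 * c)

    # Synthetic stand-in for a piston-and-die test: compression ratio vs force [kN].
    c_data = np.linspace(0.0, 0.35, 20)
    f_data = 5.0 * np.exp(9.0 * c_data) + 1.0 * np.exp(3.0 * c_data)
    f_data += np.random.default_rng(0).normal(0.0, 2.0, c_data.size)   # measurement noise

    params, _ = curve_fit(double_exp, c_data, f_data, p0=[1.0, 8.0, 1.0, 2.0], maxfev=20000)
    print("fitted coefficients:", np.round(params, 2))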
3.1.5.3 Particle size prediction
The particle size reduction in the crusher is modeled with a fixed reduction and only
responds to changes in feed size distribution. The reasoning behind this choice was
that with the methods used in form conditioned crushing, for example in cone crushers,
the particle size distribution behaves very differently compared to in an HPGR. The
cone crusher uses a compression ratio based measure as input to the particle size
prediction [15]. The method was tested with a model calibrated for an aggregates
material, but it did not correspond well enough to be used in this work. Other
methods used in the literature are population balance models, which also include
many parameters and require plant surveying. One population balance model by
Dundar [14] includes data for a platinum ore. The choice of proceeding with the
new model was basically due to simplicity and the fact that the surveys of the plant
from 2011 included three different tests with the HPGR, making it difficult to be
conclusive on how to formulate a module that predicts based on more inputs than the
feed.
The reduction used in the crusher in this model is presented in Figure 3.8. The
reduction step consists of a vector of values added to the cumulative particle size
distribution curve. This action is combined with logic to prevent the distribution from
growing larger than 100 % as well as from obtaining a negative slope. It should be noted
that this model will only work for a narrow range of operation of the Mogalakwena
North HPGR crusher and does not aim to describe any other HPGR's crushing
performance.
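A Python sketch of such a reduction step on a cumulative distribution is shown below;
the reduction vector here is an assumed example, and the clipping and monotonicity
logic corresponds to the safeguards described above.

    import numpy as np

    def apply_reduction(psd_cum, reduction):
        """Add a fixed reduction vector to a cumulative % passing curve.

        psd_cum   : cumulative % passing of the feed, one value per sieve (ascending sizes)
        reduction : assumed vector of percentage points to add per sieve
        """
        product = psd_cum + reduction
        product = np.clip(product, 0.0, 100.0)       # never more than 100 % passing
        product = np.maximum.accumulate(product)     # enforce a non-negative slope
        return product

    feed = np.array([5, 15, 30, 50, 70, 85, 100], dtype=float)
    reduction = np.array([20, 25, 30, 25, 20, 10, 0], dtype=float)
    print(apply_reduction(feed, reduction))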
3.1.6 Model assembly
When all the components of the model are available and tested to be in an error free
state, they can be assembled in Simulink. The main bus, consisting of the data
structure described in Section 3.1.1, is connected between each component, and the
input signals are read from the MATLAB workspace. Logging of signals was done
both by storing them to the workspace and via graph windows in the model, to
allow for visual monitoring while running the model. Parameters such as conveyor
belt speeds, inflow feed size, and screen deck apertures were also assigned to the
model in this process. Initial testing and debugging were part of the process as
well.
Figure 3.8: A reduction step of a feed and the resulting product for the reduction
used by the crusher.
3.1.7 Model calibration and validation
The final model has been calibrated against process data retrieved from the SCADA
system at the site. This process was very time consuming; even if the circuit is not
complex, the effort required to achieve good enough correspondence for many simulations
with multiple datasets is large. In this section the work and methods used to
arrive at a calibrated model are described.
The validation of the model is based on the work by Steyn and Brown [27]; in
summary, the weightometer readings are compared with the model prediction over
a number of different datasets. The performance measure of the model used was
a normalized root mean squared error (NRMSE) value, and the same measure has
been used in this work. For each model run of 8 hours, Equation 3.9 was used to
calculate the normalized error measure between plant data and model prediction.
For the calibration of section 406, three different datasets were used and a fourth set
was used for validation. The validation set was picked at random and never used for
training of the model.
R_{NRMSE} = \frac{\sqrt{\frac{1}{n}\sum(y - \hat{y})^2}}{\bar{y}} \qquad (3.9)
where y is the measurement from the actual plant, ŷ is the model prediction, and
n is the number of samples in the eight-hour simulations. This value has been 2881
for all simulations, since the SCADA system samples the process every 10 seconds.
ȳ is the average value of the plant data for the time period. Calibration of measures
other than weightometers was done to some extent, focusing on the level readings in
the two smaller bins, the HPGR bin and the screen bin.
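Equation 3.9 translates directly into a few lines of code; the Python sketch below uses
synthetic signals in place of the plant data and model prediction.

    import numpy as np

    def nrmse(y_plant, y_model):
        """Normalized root mean squared error, Equation 3.9."""
        y_plant = np.asarray(y_plant, dtype=float)
        y_model = np.asarray(y_model, dtype=float)
        rmse = np.sqrt(np.mean((y_plant - y_model) ** 2))
        return rmse / np.mean(y_plant)

    # Synthetic stand-in for an 8 h comparison (2881 samples at 10 s intervals).
    t = np.arange(2881) * 10.0
    plant = 1450.0 + 50.0 * np.sin(t / 600.0)
    model = plant + np.random.default_rng(1).normal(0.0, 30.0, t.size)
    print(f"NRMSE = {nrmse(plant, model):.3f}")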
To be able to simulate the process using real process data, the SCADA signals were
loaded into the model via the MATLAB workspace. It was possible to automate
this process, which helped to speed up the initialization of the simulations. The
calibration used three different datasets, and after all had been simulated the results
were compiled into a report and the methods below were used to improve the result
for the next iteration of simulations.
The following steps were used when approaching the task of model calibration
with process data. The calibration is an iterative process and the method is usually
developed slightly during the completion of the task; the list below resembles the
method used towards the end of the task.
1. Identify three sets of data that the model can capture.
2. Balance the mass flow over time, making sure each feeder outputs the right amount
of mass.
3. If there is more than one feeder per weight sensor, make sure the feeders operate equally.
(a) If not, determine the ratio between the feeders.
4. Estimate the relative bin size or utilized bin size from process data. However,
this is just an indicator and depends on the operating point.
5. From all the data sets, try to find conclusive trends and reasons to tune
parameters.
6. Adjust bin sizes. The trends in the level sensor data from the real plant should
be visible.
7. Evaluate performance:
(a) If calibrated, proceed to validation; else go back to 1 and iterate.
3.2 Controller development
The controller development is split into two parts: firstly, replicating all the local
SISO loops acting on the circuit and making sure they are stable; secondly, developing
and implementing an MPC controller in Simulink. The two parts are
discussed separately; however, for the MPC controller to be tested, the SISO control
layer needs to be in place.
3.2.1 SISO control layer
The current control configuration for the 406 section consists of PID loops and a
setpoint selector for the screen bin. The PID loops are standard form PI controllers,
where the setpoints are supplied either from another loop or are fixed. The setpoint
selector for the screen bin provides setpoints to the PID controllers controlling
the screen feeders. The setpoint selector has not been modeled in this work; in short,
it makes sure the screen bin never becomes full in the case of a stop
of any of the belts 406CV004, 406CV005, 406CV006 and 406CV007. This is
to protect the crusher product belt from having to stop while in use and loaded.
Stopping a fully loaded belt may result in having to empty the entire belt manually
before being able to start it again. The effect of not including this controller in the
simulation model is discussed in Section 5.2. In Figure 3.9 the control loops are
illustrated.
Figure 3.9: Schematic view of the current control setup used at section 406, consisting
of PID loops and an advanced controller on the screen feeders.
The structure is set up with the following goals: to make sure the HPGR is choke
fed, that no bins are overfilled and that material is always available in the HPGR bin.
There is a range of slow and fast control loops on section 406. The PI control loops
in use are listed below.
• Feeders from the silo
• Setpoint for the silo feeders
• Feeders from the HPGR bin
• Belt speed of the HPGR chute belt
• Roller speed of the HPGR
• Screen feeders
The feeders that withdraw material from the HPGR bin do so by maintaining
a fixed level on the variable speed conveyor 406CV002. There is a radar sensor
above the conveyor that measures the height of the material bed on the conveyor.
To be able to do this in the simulation, a model describing the filling of the conveyor
had to be developed. The derivation follows below.
Every second, material is withdrawn from the bin and placed on the conveyor. Since the conveyor speed is updated once per second, the speed is assumed to be constant during each second. If the mass placed onto the conveyor is divided by the bulk density and by the distance the conveyor has travelled in one second, the cross-sectional area of the conveyor bed is obtained. The conveyor is supported by five rolls which form an arc; the radius of the arc has been estimated to 1.52 m, and it is assumed that the conveyor bed fills the corresponding circle segment and forms a triangle with a 30° angle on top, as illustrated in Figure 3.10. The area of the circle segment can be calculated with Equations 3.10, 3.11 and 3.12 from Björk [9].
A = \frac{1}{2}\bigl(br - s(r-h)\bigr) \qquad (3.10)

s = 2\sqrt{h(2r-h)} \qquad (3.11)

\sin\alpha = \frac{s}{2r} \qquad (3.12)
The notation is the same as in Figure 3.10. If Equations 3.10, 3.11 and 3.12 are combined and the area of the triangle is added, Equation 3.13 is obtained. This equation is nonlinear, and an iterative method was used to solve for the height h: the height was ramped until the resulting area exceeded the reference area calculated from the mass and the conveyor speed. The plant uses a set-point of 330 mm, while the model predicts a set-point of 380 mm at the operating point, which might indicate that the 30° angle is too large. However, the model was kept as described here.
A = \frac{1}{2}\left(2\sin^{-1}\!\left(\frac{\sqrt{h(2r-h)}}{r}\right)r^2 - 2\sqrt{h(2r-h)}\,(r-h)\right) + \frac{s^2}{2}\tan(30°) \qquad (3.13)
[Figure 3.10 content: cross-section geometry of the conveyor bed, with roll-arc radius r, chord s, half-angle α, arc length b, segment height h and the 30° triangle on top.]
Figure 3.10: The split of the geometries defining the bed. The left part shows the approximated shape of the conveyor profile and the right part shows the two geometries separated. The distance b is the arc length of the circle segment.
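A minimal MATLAB sketch of the iterative solution of Equation 3.13 is given below. The function name, step size and termination guard are illustrative assumptions; the geometry follows the reconstruction of Equation 3.13 above.

function h = bed_height(A_ref, r)
% Ramp the bed height h until the cross-sectional area of Eq. 3.13
% (circle segment plus a 30 degree triangle on top) exceeds the
% reference area A_ref computed from mass flow, bulk density and
% conveyor speed.  Step size and guard are assumptions for illustration.
h  = 0;
dh = 1e-4;                                   % [m] ramp step
A  = 0;
while A < A_ref && h < r
    h = h + dh;
    s = 2*sqrt(h*(2*r - h));                 % chord length, Eq. 3.11
    A_seg = 0.5*(2*asin(sqrt(h*(2*r - h))/r)*r^2 - s*(r - h));  % circle segment
    A_tri = s^2/2*tand(30);                  % triangle on top, as in Eq. 3.13
    A = A_seg + A_tri;
end
end

It would be called with the estimated roll-arc radius, for example h = bed_height(A_ref, 1.52).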
The PI parameters have been changed slightly from those initially obtained from the SCADA system, which represent the values used in the PLC where the control loops are implemented. Table 3.1 lists the parameters K_p and T_i used in the simulation together with those used by the real plant. The standard PI controller block in MATLAB Simulink was used in the simulation. The parameters used in the model were slightly adjusted so that the model could handle start-up sequences without any added logic. The tuning was done iteratively by simulating the model and monitoring the outputs and control signals.
Table 3.1: The parameters of the PID-loops on section 406, both those used in the model and those used on the actual plant.

Controller              Model K_p   Model T_i   Plant K_p   Plant T_i
Silo feeders            0.2         14          0.2         14
Silo feeder SP          1.5         300         1.5         300
HPGR bin feeders        0.2         150         0.2         300
HPGR feed conveyor      0.2         50          0.7         150
HPGR roller speed       0.2         220         0.65        120
HPGR screen feeder 1    -2          40          -2          40
HPGR screen feeder 2    -2          40          -1.8        40
The controllers in the PLC also include a deadband specification, typically around 5% of the set-point. The controllers used in the simulation model do not include a deadband.
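For illustration, a minimal sketch of a discrete standard-form PI controller with such a deadband is shown below. This is not the Simulink block used in the model; the discrete implementation, the anti-windup scheme and the argument names are assumptions, while the gain structure matches Table 3.1 and the roughly 5% deadband matches the PLC description above.

function [u, I] = pi_standard(r, y, I, Kp, Ti, Ts, u_min, u_max, db_frac)
% Discrete standard-form PI controller, u = Kp*(e + (1/Ti)*integral(e)),
% with a deadband on the error expressed as a fraction of the setpoint.
e = r - y;
if abs(e) < db_frac*abs(r)
    e = 0;                        % inside the deadband: no change in control action
end
I = I + (Kp*Ts/Ti)*e;             % integral state
u = Kp*e + I;
% Simple output clamping with back-calculation anti-windup
if u > u_max
    I = I - (u - u_max);  u = u_max;
elseif u < u_min
    I = I - (u - u_min);  u = u_min;
end
end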
Interlocks were only implemented around the HPGR feeding arrangement, preventing the chute from becoming overfull and allowing the level to be recovered if it was lost during operation. One major difference between the model and the actual plant is that the advanced controller used to regulate the level in the screen bin
has not been implemented. The screen bin is instead controlled to maintain a 50%
level with a PI-controller.
3.2.2 MPC development
The first step in developing a new controller is to investigate whether there are enough degrees of freedom in the circuit to reach all set-points for the controlled variables. Reaching all set-points is only possible if the number of manipulated variables (MVs) is equal to or greater than the number of controlled variables (CVs). The manipulated and controlled variables are listed in Table 3.2.
Table 3.2: The considered CVs and MVs.

CV                    MV
BIN001 level          FE001/2
BIN002 level          FE003/4
HPGR chute weight     FE005/6
CV002 level           CV002 conveyor speed
-                     HPGR roller speed
The actuators listed in Table 3.2 are available for control. The HPGR chute weight is essential to keep the crusher choke fed. The level on CV002 can be controlled with feeders FE003 and FE004, while the speed of conveyor CV002 regulates how quickly material arrives in the crusher chute. The feeders and the conveyor speed are coupled, and both are required to keep the chute full. Whether the level on the belt needs to be controlled at all could be investigated, but since this objective is included in the current set-up it was kept.
Removing two CV’s and two MV’s from Table 3.2 the number of MV’s is still larger
than the number of CV’s, hence there is room for an additional control objective.
After confirming that there are enough degrees of freedom in the system to maintain
all wished set-points, the controller can be developed. An MPC controller consists
of a process model of suitable form, in this case when using the solver software
FORCES Pro [13], a state space model, additionally a cost function and if needed
constrains. The cost function includes the set-points and possible minimization or
maximization objectives. The software solves a quadratic program (QP-problem).
On the form is shown in Equation 3.14. The solver also allows for adding limits
on upper and lower bounds on state variables and inputs, as well as inequalities on
states and inputs.
\begin{aligned}
\text{minimize} \quad & x_N^{T} P x_N + \sum_{i=0}^{N-1}\left( x_i^{T} Q x_i + u_i^{T} R u_i + f_x^{T} x_i + f_u^{T} u_i \right) \\
\text{subject to} \quad & x_0 = x \\
& x_{i+1} = A x_i + B u_i \\
& \underline{x} \le x_i \le \overline{x} \\
& \underline{u} \le u_i \le \overline{u}
\end{aligned} \qquad (3.14)
FORCES Pro is a fast numerical solver for embedded controllers that, once generated, …
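To make the structure of Equation 3.14 concrete, a minimal MATLAB sketch is given below that solves the same kind of QP with the generic solver quadprog, by stacking all states and inputs into one decision vector. The toy system, horizon, weights and bounds are placeholders, not the 67-state plant model or the FORCES Pro generated solver.

% Solve the QP of Equation 3.14 for a toy system by stacking
% z = [x_0; ...; x_N; u_0; ...; u_{N-1}] and calling quadprog.
A  = [1 0.1; 0 1];   B = [0; 0.1];        % toy double integrator (placeholder)
nx = 2; nu = 1; N = 20;
Q  = eye(nx); R = 0.1; P = Q;             % weights (placeholders)
fx = zeros(nx,1); fu = zeros(nu,1);       % linear cost terms (zero in this toy example)
x0 = [1; 0];

nz = (N+1)*nx + N*nu;
H  = blkdiag(kron(eye(N),Q), P, kron(eye(N),R));   % quadratic cost on x_0..x_N, u_0..u_{N-1}
f  = [repmat(fx,N+1,1); repmat(fu,N,1)];

% Equality constraints: x_0 = x0 and x_{i+1} = A x_i + B u_i
Aeq = zeros((N+1)*nx, nz);  beq = zeros((N+1)*nx, 1);
Aeq(1:nx, 1:nx) = eye(nx);  beq(1:nx) = x0;
for i = 1:N
    rows = i*nx + (1:nx);
    Aeq(rows, (i-1)*nx + (1:nx))            = -A;      % -A x_{i-1}
    Aeq(rows, i*nx + (1:nx))                = eye(nx); %  x_i
    Aeq(rows, (N+1)*nx + (i-1)*nu + (1:nu)) = -B;      % -B u_{i-1}
end

% Bounds on states and inputs
lb = [repmat([-10; -10], N+1, 1); repmat(-1, N, 1)];
ub = [repmat([ 10;  10], N+1, 1); repmat( 1, N, 1)];

% quadprog minimizes 0.5*z'*H*z + f'*z, so 2*H reproduces the cost of Eq. 3.14
z  = quadprog(2*H, f, [], [], Aeq, beq, lb, ub);
u0 = z((N+1)*nx + (1:nu));   % first input, the one applied to the process

This stacked ("sparse") formulation keeps the dynamics as equality constraints; a code-generating solver such as FORCES Pro exploits the same structure but produces tailored solver code instead of calling a general-purpose routine.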
The process model used in the controller was developed with the following assump-
tions:
• Only tracking mass flow in the controller
• All conveyors represent a fixed delay
• Mass split at the screens is constant during one simulation
• The bins are pure integrators with a fixed capacity
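A minimal sketch of how a model can be assembled under these assumptions is given below: each conveyor becomes a chain of delay states (one per sample of transport time), each bin a pure integrator, and the screen split a constant gain. The dimensions, delay and split ratio are illustrative placeholders, not the actual 67-state model.

% Sketch: discrete-time mass-flow model at a 10 s sample time.
Ts    = 10;              % [s] sample time
n_d   = 6;               % conveyor transport delay in samples (placeholder)
split = 0.6;             % oversize split of the screen at the operating point (placeholder)

% States: x = [mass slices on the conveyor (n_d); mass in the bin (1)], all in tonnes
A_conv = diag(ones(n_d-1, 1), -1);     % shift register: x_{i+1}(k+1) = x_i(k)
A = blkdiag(A_conv, 1);                % the bin integrates its own content
A(n_d+1, n_d) = split;                 % oversize leaving the conveyor fills the bin

% Inputs: u1 = mass flow placed on the conveyor [t/s], u2 = flow drawn from the bin [t/s]
B = zeros(n_d+1, 2);
B(1, 1)     =  Ts;                     % tonnes placed on the conveyor during one sample
B(n_d+1, 2) = -Ts;                     % tonnes withdrawn from the bin during one sample
C = eye(n_d+1);                        % full state access (see the observer discussion below)
sysd = ss(A, B, C, zeros(n_d+1, 2), Ts);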
The controller uses a prediction and control horizon of 70 steps, where each step is 10 seconds long, resulting in predictions 11 minutes and 40 seconds into the future. No further investigation was made into how short the horizon could be while still achieving good results; in general, the prediction horizon should be comparable to the settling time of the system, but no experiments were conducted to find this value. The controller on section 405 uses a 13-minute prediction horizon, and it was therefore decided to test the controller with a 70-step horizon as a first approach. The FORCES controller uses equally long prediction and control horizons by default, and this was kept for this work.
The model was built using the process layout, the lengths of the conveyors, the estimated capacities of the bins and the split ratio of the screens at the current operating point. The process layout with the notation used in the state space model is shown in Figure 3.11. Only one version of this controller was tested; the objective was chosen to include the two bin levels and to maximize the product flow on the product belt. Only initial tuning of the controller was done to arrive at a stable and appropriate controller behavior.
The state space model has 67 states and a sampling time of 10 seconds. The controller has therefore been placed in a triggered subsystem in the simulation model, which runs every 10 seconds and carries out the optimization. The three optimization variables u_1, u_2 and u_3 are supplied to PID-controllers as set-points. According to the MPC scheme, only the first set of control signals is applied to the process. Supplying set-points from an advanced controller to PIDs is a common approach when using advanced controllers [36]. The developed state space model is not observable in its pure form and needs an observer to work properly.
In this case, the bins are equipped with level sensors, and weightometers are installed on all conveyors except CV005 and CV006. The weightometers are sampled every 10 seconds and stored in a shifting buffer; since the conveyors move at constant speed the transport delay is constant, and the number of stored readings corresponds to the number of states used to model the specific conveyor. This approach is also possible for the real process, with the exception of the belts that do not have weightometers installed. The result is full state access, so no additional observer is needed for the controller to work properly.
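A minimal sketch of the shifting-buffer idea is shown below; the function name and buffer handling are illustrative assumptions. The principle is simply that, at constant belt speed, each old weightometer sample maps onto one position along the belt and thus onto one conveyor state.

function x_conveyor = conveyor_states(wt_tph, n_d, Ts)
% Shifting buffer: each 10 s weightometer sample corresponds to one
% conveyor state, since the belt moves at constant speed.  The newest
% mass slice is shifted in at the feed end and the oldest falls off at
% the discharge end.
persistent buf
if isempty(buf) || numel(buf) ~= n_d
    buf = zeros(n_d, 1);
end
buf = [wt_tph*Ts/3600; buf(1:end-1)];   % [t] mass slice per sample
x_conveyor = buf;                        % full state of this conveyor, no observer needed
end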
It is assumed that the first input calculated by the controller acts on the process at time t+1, where t is the current time. The initial condition can therefore be stated as the autonomous response from the previous controller calculation and the current states.
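As a concrete reading of this, a short MATLAB sketch is given below; A, B, x_now and u_prev are placeholders for the model matrices, the currently measured state and the input applied during the previous step.

A = [1 0.1; 0 1];  B = [0; 0.1];      % placeholder model matrices
x_now  = [0.5; 0];                    % measured state at time t (placeholder)
u_prev = 0.2;                         % input applied during the previous step (placeholder)
% The optimizer starts from the state predicted for t+1: the autonomous
% response of the current state plus the effect of the previous input.
x0_for_qp = A*x_now + B*u_prev;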