5
Methodology
5.1 Initial studies and planning
As mentioned in Section 2.1, the basis of this project is the model developed by Subiaco [5], and a thorough review of that report was therefore essential to understand what kind of problems needed to be tackled in this thesis. Furthermore, a comprehensive understanding of the steam network at the Preem refinery and a basic understanding of the refinery process were necessary, see Section 2.2.
In order to work with and understand the original model, a thorough study of the structure of Aspen Utilities Planner was conducted. This study of the program, combined with the thesis report of Subiaco [5], provided knowledge about how the original model was built and the ideas behind its construction.
A more general literature review of examples where similar models were investigated and implemented was also conducted, as were the practical aspects that are of importance when investigating a real process. Both of these topics were described in Section 1.5.
5.2 Verification of parameters
Validation of the model was achieved by identifying key variables in the system and re-checking the values and constraints that Subiaco [5] calculated. The results and variables that were investigated are presented in Section 6.1, together with a comparison with the values used by Subiaco for the same variables.
5.3 Data collection
The data collection started by gathering the data tags for each piece of equipment related to the steam network. This was carried out in collaboration with Preem's staff. Meters for flow, temperature and pressure are spread around the plant. They measure up to three times a second, and the data are sent directly to the control room and stored at different temporal resolutions. A process diagram showing the location of all data collection points within green circles is presented in Figure 5.1. The producers and consumers for each header are lumped together and are represented by a single producer and a single consumer.
Table 5.1: Basic information for all scenarios.

Scenario | Operational situation | Averaging time | Creator/creators | Time span/dates
0 | Free | - | Subiaco | -
1 | Stable | Instant | Subiaco | 13/9-2015 (3.10 AM)
2 | Stable | Instant | Subiaco | 14/7-2015 (2.50 AM)
3 | HRSG:s and 230 area down | Instant | Subiaco | 16/4-2015 (3.10 PM)
4 | SG2101 and ICR down | Instant | Subiaco | 12/1-2016
5 | Stable | Instant | Subiaco | 13/9-2015
6 | Stable and high utilization | 1 week | Gunnarsson and Kobjaroenkun | (2-8)/1-2018
7 | Stable and high utilization | 1 week | Gunnarsson and Kobjaroenkun | (22-29)/12-2017
8 | FCC unit down | 1 week | Gunnarsson and Kobjaroenkun | (1-4)/4-2017
9 | ICR and HPU down | 1 week | Gunnarsson and Kobjaroenkun | (16-22)/5-2016
10 | Stable and high utilization | 1 day | Gunnarsson and Kobjaroenkun | 3/1-2018
11 | Stable and high utilization | 1 day | Gunnarsson and Kobjaroenkun | 23/12-2017
12 | FCC unit down | 1 day | Gunnarsson and Kobjaroenkun | 2/4-2017
13 | ICR and HPU down | 1 day | Gunnarsson and Kobjaroenkun | 17/5-2016
14 | VDU, ICR, HPU and FCC down | 1 day | Gunnarsson and Kobjaroenkun | 10/3-2018
15 | VDU, ICR, HPU and FCC down | 1 day | Gunnarsson and Kobjaroenkun | 16/3-2018
16 | - | Latest values | Gunnarsson and Kobjaroenkun | -
There were different reasons why some scenarios were not used in the validation process. Scenario 0 was not used because it serves to test changes in the system through manual input from the user. Scenario 1 was not used since it was considered sufficient to pick one stable operating condition case from Subiaco. Scenario 4 was not used due to unsteady-state operating conditions. The remaining scenarios created by Gunnarsson and Kobjaroenkun (Scenarios 7, 9, 11 and 13-16) were not used since it was considered that they would not provide new results compared to the scenarios that were used. However, insights from the results from the other scenarios were further strengthened by analysis of Scenarios 1, 7 and 11, see Section 6.4.1.5.
5.5 Tuning of data
5.5.1 Steam mass balances over headers
In order to make the model as accurate as possible, mass balances over the VHP, MP
and LP headers were set up. From the discussions with Preem staff and supervisors at
Chalmers, it was decided that an error less than 10% of the incoming steam flow to the
header would be acceptable, see Equation 5.1.
\[
\mathrm{Error}_{\mathrm{meas}} = \frac{|m_{\mathrm{tot,in,meas}} - m_{\mathrm{tot,out,meas}}|}{m_{\mathrm{tot,in,meas}}} < 10\% \tag{5.1}
\]
Equation 5.1, indicating the measurement error, was used to assess the deviations over a whole header. A measure of the model error is given by Equation 5.2, which was used to check the error for a specific flow or unit, primarily for the let-down valve flows.
\[
\mathrm{Error}_{\mathrm{mod}} = \frac{|m_{\mathrm{meas}} - m_{\mathrm{modeloutput}}|}{m_{\mathrm{tot,in,header,meas}}} < 10\% \tag{5.2}
\]
It is assumed that, at each header level, there are steam flows that either leave the system or are let down through let-down valves or turbines to the following header level, and that not all of them are measured. The unmeasured steam flows can, together with possible measurement errors, be aggregated into a parameter representing the missing and erroneous measurements, which is set to the difference between the measured incoming and outgoing steam. In the model, these unknown steam flows were lumped together and represented by an additional steam consumer block for each header, which has a constant value independent of scenario. The mass balance calculations were based on the measured steam flows from Preem and estimated steam flows through operational turbines, and the results are presented in Section 6.1.5.
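As a minimal sketch of these acceptance checks, the functions below implement Equations 5.1 and 5.2 directly; the flow values in the example are illustrative placeholders, not measured Preem data.

```python
# Minimal sketch of the mass-balance checks in Equations 5.1 and 5.2.

def measurement_error(m_in_meas: float, m_out_meas: float) -> float:
    """Relative header imbalance, Equation 5.1."""
    return abs(m_in_meas - m_out_meas) / m_in_meas

def model_error(m_meas: float, m_model: float, m_in_header_meas: float) -> float:
    """Relative deviation for a single flow, Equation 5.2,
    normalized by the total measured inflow to the header."""
    return abs(m_meas - m_model) / m_in_header_meas

# Example (placeholder data): a header with 150 t/h measured in, 141 t/h out.
if measurement_error(150.0, 141.0) < 0.10:
    print("Header mass balance accepted (< 10 % error)")
```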
5.5.2 Comparison with the validation results from the original model
It was decided to compare the validation results for Scenarios 2 and 3 between the original and new model versions. The reason for this was to observe how the changes in the model and in the steam flow data affect the validation. The comparison is based on the latest model version from Subiaco [5], which is taken as the starting point of the model in this project, since the results reported by Subiaco [5] could not be reproduced with the original model version. The results from these comparisons can be seen in Section 6.4.2.
6
Validation of model
The validation of the model is divided into different parts: the first part consists of checking important parameter values, verifying flows and checking the reliability of measurement sensors. The second part is to validate the model against operational data sets from the refinery, so-called "scenarios".
6.1 Verification of model parameters and process flows
This section describes the updates and corrections of model parameters that have been implemented in the new version of the model:
• Variables such as efficiencies.
• Constraints for steam producers such as the boilers.
• Power demand of pumps and compressors.
• Operational possibilities of pumps and compressors.
• Verification of steam demands at steam headers for process steam consumers, valves
and other non-measured steam use.
6.1.1 The feedwater temperature
In the original model, the temperatures of the feedwater flows to the boilers were set to ambient temperature. The enthalpy increase for the water was therefore too high, thus overestimating the amount of fuel needed for the boilers. This was corrected to 115 ◦C after discussion with Preem staff and the supervisor at Chalmers. This change gave a more accurate fuel consumption when comparing the model value to the measurement value. Table 6.1 shows the effect of changing the feedwater temperature from 25 to 115 ◦C for one of the scenarios.
Table 6.1: Effect of feedwater temperature on fuel consumption.

Variable | Before | After | Measurement
Feedwater temperature [◦C] | 25 | 115 | -
Total fuel consumption [Sm3/h] | 22261 | 21844 | 21625
Figure 6.2: Boiler efficiency against LHV value for SG3201 boiler.
Constraints regarding the boilers were investigated and the maximum and minimum production for each boiler were identified; they are presented in Table 6.2. Although the production is rarely as high as 90 t/h, this value can be reached according to Preem staff. The lower limit is of more importance since the boilers more often operate close to their respective minimum load. The difference in minimum load between the original and the updated model is important since the refinery staff want to have two boilers operational at all times, as it is a severe operational risk to only use one. At the same time, overproduction of steam is not desirable, and looking at Table 6.2 there will be a large difference in production for any combination of boilers operated together.
Table 6.2: Load constraints on the steam boilers after modification, Subiaco values in parenthesis.

Process unit | Maximum load [t/h] | Minimum load [t/h]
SG3201 | 90 (50) | 12 (20)
SG3202 | 90 (50) | 12 (20)
SG3203 | 90 (50) | 24 (20)
In addition, in the original model from Subiaco [5], a correction factor denoted "Performance Factor" in Aspen Utilities Planner was used for the SG3202 boiler and set to 0.74 for validation purposes. The performance factor acts as an additional boiler efficiency, which should already be included in the calculated boiler efficiency, and its definition remained unclear. Therefore, in this work, the performance factor was set to 1 for all boilers and is no longer used as a tuning parameter.
6.1.3 Pumps and Compressors
The power output required from a turbine driving a pump or compressor in turbine mode was assumed to be equal to the power requirement of the motor when the pump or compressor is in motor mode. The current drawn by a motor unit is measured at the refinery and the power can be calculated using Equation 6.2.
\[
P = \sqrt{3} \cdot U \cdot I \cdot \cos(\varphi) \cdot \varepsilon \tag{6.2}
\]
where P is the power demand, U is the voltage, I is the current, cos(ϕ) is the power factor, which for most pumps and compressors could be obtained from manufacturer data and otherwise was estimated in collaboration with Preem staff, and ε is the motor efficiency. The losses in a turbine are accounted for by the isentropic efficiency; Aspen Utilities Planner has no explicit isentropic efficiency, but the enthalpy levels used in the model are the real ones, which means that the losses are already included. The losses in a motor are accounted for by the motor efficiency (ε), which can be entered into Aspen Utilities Planner. A comparison between the power demand values obtained using Equation 6.2 and the values used by Subiaco showed some deviant values, but at least 75% of the pumps and compressors were within the 10% limit. Units that deviated significantly have been corrected. The values for the power demand of pumps and compressors in Subiaco's model appear to be the maximum loads based on manufacturer data from Preem.
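A minimal sketch of the power calculation in Equation 6.2; the motor data in the example are illustrative assumptions, not refinery values.

```python
import math

def motor_power_kw(voltage_v: float, current_a: float,
                   power_factor: float, motor_efficiency: float) -> float:
    """Electrical power demand of a three-phase motor, Equation 6.2:
    P = sqrt(3) * U * I * cos(phi) * epsilon (returned in kW)."""
    return math.sqrt(3) * voltage_v * current_a * power_factor * motor_efficiency / 1000.0

# Hypothetical example: a 400 V motor drawing 120 A,
# with cos(phi) = 0.85 and 95 % motor efficiency.
print(f"{motor_power_kw(400, 120, 0.85, 0.95):.1f} kW")  # ~67.1 kW
```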
The configuration of parallel pumps and their possible operations is of importance. In some cases, there are three pumps for one task (A, B and C), where two of them are driven by turbines and the third by a motor. This setup is used for pumps and compressors that are essential for refinery operation, such as boiler feedwater pumps. Only one of the three pumps is in operation at a time; when a turbine is set to not be in operation, the solver interprets this as the motor being in operation. In cases where there is more than one turbine, this causes errors, since electricity and/or steam demand that should be excluded will be included. This affects the results and can make the solution infeasible. The problem has been solved by setting the power demand of the extra turbine to zero; in this way there is no effect if the turbine is considered to not be in operation. Similarly, for turbines that are only operational during start-up and shut-down, the power demand was set to zero.
A by-pass flow over all turbines has been added in the new model version. For safety reasons, each turbine is equipped with a by-pass, which was not included in the original model. The by-pass is needed to make the turbine spin even if the operational mode is motor. The amount of by-pass steam is small for each turbine and documentation is inadequate, but by using information from the new VGO project and making an estimation based on the power demand of the pumps, the amount of steam by-passed for each turbine was estimated.
6.1.4 Let-down valves
The constraints for the let-down valves between the headers were set to more realistic
values based on the manufacturing information, but also by plotting the flow as a func-
tion of the valve opening and thus obtaining an equation that could be used to verify
the maximum and minimum flows of the valve. An example of the impact from faulty
measurements at the let-down valves and how the equation for steam flow as a function
of valve opening was used to check the accuracy of the steam flow can be seen in Section
6.4.
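A sketch of this check, assuming a simple linear relation between valve opening and flow; the sample points below are hypothetical, whereas the real fit was made against Preem historian data.

```python
import numpy as np

# Hypothetical historian samples of valve opening [%] and steam flow [t/h].
opening = np.array([5, 10, 20, 35, 50, 70, 90], dtype=float)
flow    = np.array([8, 14, 27, 46, 63, 86, 108], dtype=float)

# Least-squares linear fit: flow ~ a * opening + b
a, b = np.polyfit(opening, flow, deg=1)

def expected_flow(valve_opening_pct: float) -> float:
    """Flow predicted from the valve opening using the fitted line."""
    return a * valve_opening_pct + b

# A measured flow far from the fitted line (e.g. 0 t/h at 21 % opening)
# flags an unreliable meter, cf. Figure 6.3.
print(f"Expected flow at 21 % opening: {expected_flow(21):.1f} t/h")
```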
6.1.5 Correction of steam demands at headers
The values of the steam demands at the different headers (heat exchangers, strippers, etc.) obtained by Subiaco have been checked and some discrepancies were detected. The FCC unit obviously consumes steam from the VHP steam header, but in the model this steam demand was included twice. The steam tracing at the MP header for heating of pipes and tanks was included and entered as steam consumption in the original model version. The steam tracing for tanks was judged to be modelled correctly. However, a discrepancy was identified for the steam tracing of the pipes: the steam condensate from the pipe tracing is recycled back to the water system and should therefore not be added to the consumption of make-up water, whereas steam tracing of the tanks does consume make-up water. This was incorrectly modelled in the original model version and has now been corrected.
Another error in the original model was that the deaerator was considered to consume approximately 12 t/h of LP steam, although the steam consumed in the deaerator is determined by the vapour/liquid equilibrium in a condensate vessel. This production of steam from equilibrium was not accounted for as a steam producer in the model, and thus the consumption of this steam should not be included in the steam consumption of the model either. As the LP steam enters the deaerator, it is condensed and used as feedwater to the steam producers. The whole process can therefore be considered an internal circulation of steam and condensate, not a pure consumption.
After correction of inconsistencies in the modelling of some steam consumers, the mass balances for the steam headers were evaluated. This showed that for the four scenarios mentioned in Section 6.4, there was often an excess of VHP steam, indicating an unknown consumer at this level. The MP level generally showed a deficit of steam, but adding an unknown producer of steam was considered unrealistic, and consequently the difference between production and consumption was assumed to depend on the quality of the measurements. At the LP level there was an excess of steam, which was also attributed to an unknown consumer. For the HP steam header, mass balances were only calculated for the first three scenarios, since this header has more free variables than the other headers and is also connected to a smaller number of units. As the mass balances for these three scenarios were well within the 10% limit defined in Section 5.5.1, it was decided to accept the model for this header without adding any additional parameters representing unknown steam flows. The extension of the model in the form of consumers, inflows and outflows can be seen in Table 6.3, and the new flowsheet can be seen in Figure 6.4.
Consumption of steam is considered to leave the system, while outflow and inflow are steam flows between two headers, so the outflow from the VHP header is equal to the inflow to the MP header. The values shown in Table 6.3 were obtained by trial and error to make sure the error according to Equation 5.1 became less than 10%. The combination of values shown in Table 6.3 is not a unique solution that keeps the system deviation within the 10% limit. There could be other combinations that result in a balance within the boundary, but not all combinations were tested. However, of the combinations that were tested, this is the solution that gives the overall best results; by trial and error, and by seeking a combination that fits the most scenarios, the values in Table 6.3 were selected. These steam demands and steam flows were not included in the original model by Subiaco. Subiaco assumed that all undefined outflow from the system leaves from the LP level; this flow was retained as it was in the original model, since that parameter influences the make-up water balance. Insertion of these steam parameters made the model match more scenarios better, and the values can be regarded as tuning parameters for the model. The parameters were inserted at the VHP, MP and LP headers: the VHP and LP headers were given unknown consumptions, and the unknown inflows and outflows were added between VHP-MP, MP-LP and LP-deaerator, see Table 6.3. The total inflow of steam to each header level is presented in Table 6.4.
Table 6.3: Additions to the model in the form of a constant flow to miscellaneous unspecified steam consumers.

Header | Consumption (steam leaves system) [t/h] | Outflow (to next header) [t/h] | Inflow (from previous header) [t/h]
VHP | 10 | 1 | 0
MP | 0 | 5 | 1
LP | 10 | 3 | 5
Table 6.4: Total inflow of steam at each header in t/h for Scenarios 2, 3, 10 and 12.

Header | Scenario 2 | Scenario 3 | Scenario 10 | Scenario 12
VHP | 153.4 | 104.8 | 144.2 | 134.6
MP | 186.1 | 92.2 | 198 | 168.3
LP | 221.7 | 174.7 | 199.3 | 205.5
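The trial-and-error tuning described above can be illustrated with a small search over candidate consumer flows; the balance figures and candidates below are placeholders, not the actual refinery numbers.

```python
# Sketch of the tuning behind Table 6.3: find a constant unknown-consumer
# flow that keeps the Equation 5.1 error below 10 % across scenarios.
header_balances = {
    2: (153.4, 140.0),  # scenario -> (measured inflow, outflow) [t/h], placeholders
    3: (104.8, 96.0),
}

candidates = [0.0, 5.0, 10.0, 15.0]  # candidate constant consumer flows [t/h]

def worst_error(extra_consumer: float) -> float:
    """Largest relative header imbalance over all scenarios."""
    return max(abs(m_in - (m_out + extra_consumer)) / m_in
               for m_in, m_out in header_balances.values())

best = min(candidates, key=worst_error)
print(f"Best constant consumer: {best} t/h, worst error: {worst_error(best):.1%}")
```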
Not every header for every scenario is within the 10% error; there are a few scenarios where the error is around 15%. The reasons for this large deviation are the operational status of the refinery and the reliability of the valve measurements. When parts of the refinery are shut down, the fixed values from Table 6.3 deviate more from their true values. This is because flowmeters can get saturated with condensate and the measurement devices can be by-passed; hence the values for the let-down valves become unreliable. By plotting the steam flow through the valve together with the valve opening percentage, the reliability of the valve can be determined. An example of the reliability of the let-down valve between the VHP and MP headers can be seen in Figure 6.3, which is from Scenario 12. The red line represents the opening percentage of the valve while the orange line corresponds to the flow in t/h. It is clear that at the end of the time span, the valve is around 21% open but the flow is 0 t/h, despite a pressure difference of 28.4 bar. The staff at Preem also stated that specifically the valve between the VHP and MP headers has a minimum setting of 4% in valve opening, which corresponds to approximately 7 t/h. This constraint has been added in the new version of the model, but it is considered a weak constraint, which means that the solver can override it in order to solve fundamental equations, for example mass balances. The lower limit value of 7 t/h has been used in the mass balance calculations when measurement values have been < 7 t/h. When using the model for optimization
6.1.6 Conversion factor
In the original model version it was discovered that the wrong conversion factor between standard cubic metres (Sm3) and normal cubic metres (Nm3) had been used. From the discussion with Preem staff, the definitions of Sm3 and Nm3 at Preem are 15 ◦C, 1.01325 bar and 0 ◦C, 1.01325 bar, respectively. In Equation 6.3, T1 is then 288.15 K and T2 is 273.15 K, which at equal pressure gives a conversion factor of 1.0549 Sm3/Nm3.

\[
\frac{V_1}{V_2} = \frac{P_2 \times T_1}{P_1 \times T_2} \tag{6.3}
\]
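The factor follows directly from the two reference temperatures at equal pressure, as in this short check:

```python
# Conversion factor between Nm3 (0 degC reference) and Sm3 (15 degC reference)
# at equal pressure, from Equation 6.3 with P1 = P2:
T_SM3 = 288.15  # K, standard cubic metre reference temperature
T_NM3 = 273.15  # K, normal cubic metre reference temperature

factor = T_SM3 / T_NM3
print(f"{factor:.4f}")  # 1.0549
```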
6.1.7 Investigation on LHV of fuel gas and LNG
One improvement was to give the LHVs of LNG and fuel gas specific values for each scenario, which improved the precision and accuracy of the model. Based on an investigation conducted two years ago, shown in Figure 6.5, it is reasonable to set the LHV of LNG (orange values) to a constant value of approximately 45 MJ/kg. On the other hand, the LHV of the mixed fuel gas (green values) varies within the range 34-46 MJ/kg, which has a significant effect on the duty of the boilers. This variation evidently originates from the LHV of the refinery gas, which is not measured. The first modification to the fuel gas system model was therefore to make the LHV of the fuel gas a variable whose value is imported for every scenario.
Figure 6.5: The LHVs of the fuel gas and LNG where the orange and green represent
LNG and fuel gas mix, respectively.
Further investigation of the fuel gas system revealed that it was not adequate to model the supplied refinery gas and LNG flows with a fixed LNG share by volume. The purpose of the optimization is to find opportunities to run the whole steam network at minimum operating cost, including the possibility of operating with a reduced use of LNG; coupling fuel gas and LNG by a constant ratio could not yield such results. The fuel gas system model was therefore improved so that the amount of refinery gas supplied to the steam boilers and other consumers is a constant value and only the flow of LNG changes. These values can be retrieved from Preem process data, and by setting the amount of refinery gas to a fixed value the model became more realistic. Moreover, this change also gave better results for the total utility cost calculated in optimization mode, since in the original model the cost for LNG was calculated based on the flow of fuel gas. The result after the improvements in the fuel gas system is shown in Section 6.3.2.
Regarding the LHV of the fuel gas, it was discovered during the data collection that the LHV became unrealistic for some of the scenarios. The measured LHV of the fuel gas is on a volume basis and seems to be quite stable, but when it is converted to a mass basis, the value starts to deviate. These deviations were observed when the density of the fuel gas became low, approximately lower than 1 kg/m3. The LHV then became higher than the LHV of pure LNG, which was considered unrealistic since the major component of the fuel gas is the refinery gas, which has a lower LHV than LNG. It was assumed that the density value is not always reliable, and a method to tackle this problem was introduced. For Scenarios 8 and 12 in Sections 6.4.1.4 and 7.4.3, the calculated LHVs of the fuel gas were 48.2 and 50.3 MJ/kg, but they were changed to 35 and 38.7 MJ/kg, respectively. The method was to use the fraction of LNG in the fuel gas as a validated value. The calculation was done according to Figure 6.6.
Figure 6.6: Iterative LHV procedure.
Simulations with an unrealistically high LHV of the fuel gas resulted in too little LNG use in the model, or a negative flow of LNG, which makes the model neither accurate nor reliable. Decreasing the LHV of the fuel gas input to the model decreased the use of LNG, which leads to an iterative process. The limitation is that the difference between the LNG fraction from the model and from the measurement must be within a 5% error. This method should be implemented when the extracted density of fuel gas from the Preem system has a value below 1 kg/m3.
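The procedure in Figure 6.6 can be sketched as a simple iteration. In the sketch below, run_model is a hypothetical stand-in for a full Aspen Utilities Planner simulation returning the modelled LNG fraction, not a real API call, and the step size is an assumption.

```python
# Sketch of the iterative LHV correction in Figure 6.6.

def adjust_lhv(lhv_guess, x_lng_measured, run_model,
               step=0.5, tol=0.05, max_iter=50):
    """Adjust the fuel-gas LHV input until the modelled LNG fraction
    agrees with the measured one within a 5 % relative error.
    `run_model(lhv)` is a placeholder for the steam-system simulation."""
    lhv = lhv_guess
    for _ in range(max_iter):
        x_lng_model = run_model(lhv)
        if abs(x_lng_model - x_lng_measured) / x_lng_measured <= tol:
            return lhv
        # Too high an LHV gives too little (or negative) LNG in the model,
        # so decrease the LHV when the modelled fraction is too low.
        lhv -= step if x_lng_model < x_lng_measured else -step
    raise RuntimeError("LHV adjustment did not converge")
```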
6.2 Custom script
In Aspen Utilities Planner, there is an opportunity to write custom scripts to control unit behaviour in detail. Custom scripts were used in the first version of the model for the let-down valves between some headers. The existing script was written in Visual Basic as a hierarchy of if-else conditions. The if-else conditions control specific valves when there is an excess or an insufficient amount of steam flow at a particular header. For example, when the excess of VHP steam is larger than the allowable flow between the VHP and MP headers, the rest of the flow is distributed to the HP header, which is a local header, instead.
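This hierarchical logic can be illustrated with a minimal sketch (written in Python for readability; the actual script is in Visual Basic, and the valve capacity below is a hypothetical placeholder).

```python
# Sketch of the hierarchical if-else distribution of excess VHP steam.
VHP_MP_VALVE_CAPACITY = 50.0  # t/h, hypothetical maximum let-down flow

def distribute_vhp_excess(excess_vhp: float):
    """Return (flow to MP via the let-down valve, flow to the local HP header)."""
    if excess_vhp <= VHP_MP_VALVE_CAPACITY:
        # Everything fits through the VHP-MP let-down valve.
        return excess_vhp, 0.0
    else:
        # The remainder is let down to the local HP header instead.
        return VHP_MP_VALVE_CAPACITY, excess_vhp - VHP_MP_VALVE_CAPACITY
```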
However, some equations control the let-down valves by setting a constant value for the steam flow, for example for the valve 81PC241 that connects the VHP header with the HP header. Attempts were made to make the flow through this valve dependent on the steam production and consumption at the HP header. However, as the flow variable was designed as a free variable, the number of degrees of freedom in the model became larger, thus causing the system to be underspecified. This could be solved by setting a free parameter as fixed within Aspen Utilities Planner, but that resulted in unrealistic values in other parts of the model. According to the staff at Preem, the valve 81PC241 is mainly opened during start-up of the 810 area, which means that the system will not be at steady state, and it was therefore decided to keep this variable fixed by the script.
An improvement that was successfully implemented in the script is the handling of LP steam venting to the atmosphere. The script is activated only when an optimization mode run results in a negative value for the LP vent steam flow. The water mass balance in the model for each header was not coupled to the steam production at the VHP steam header, which means that in optimization mode the LP steam venting valve can go to negative values to satisfy the mass balance at the LP steam header. The newly added equations allow the script to couple the LP steam header to the boilers and ensure positive steam flows also for the venting to the atmosphere. This is further discussed in Section 7.4.
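A minimal sketch of this correction (the actual implementation is a Visual Basic script inside Aspen Utilities Planner):

```python
# Sketch of the LP vent correction described above.

def correct_lp_vent(lp_vent_flow: float, boiler_load: float):
    """If the optimizer returns a negative LP vent flow, clamp it to zero
    and add the steam deficit to the boiler production instead."""
    if lp_vent_flow < 0:
        boiler_load += -lp_vent_flow  # shift the deficit to a boiler
        lp_vent_flow = 0.0
    return lp_vent_flow, boiler_load
```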
6.3 Modification of fuel gas system
The fuel gas modification was performed by verifying the assumptions in the original
model from Subiaco [5] and re-modelling the relationship between refinery gas and LNG
supply.
6.3.1 Fuel gas system re-modeling
The fuel gas system had to be re-modelled since it was originally modelled using Equation 4.1, assuming a fixed share of LNG for a given scenario to provide the heat from the fuel header to the boilers. In scenario mode, this way of modelling should be adequate to obtain correct results. In optimization mode, however, this method is not sufficient to capture the fact that reduced fuel use will primarily lead to a reduction of LNG import and thereby reduce the share of LNG in the fuel gas mix.

The solution to this problem is to model the refinery gas and LNG separately, setting the volumetric flow of the refinery gas as a fixed input value and leaving the flow of LNG free. The simulation will then calculate the amount of LNG flow needed to fulfil the boilers' duties. Setting the flow of the refinery gas to a fixed value also requires a fixed molecular weight and LHV at the refinery gas supplier block; instead, the molecular weight and LHV of the mixed fuel gas in the model need to be free variables. The molecular weight and LHV are measured at the fuel gas header in the refinery after mixing with LNG, but these values are assumed to be close enough to the values of the refinery gas before mixing with LNG and are used as input for the refinery gas supply. This assumption is justified by the fact that the proportion of LNG to refinery gas is small, normally no more than 15% by volume, which has an insignificant effect on the LHV of the fuel gas after mixing. With this assumption, there will be a small and negligible difference between the LHV of the fuel gas from measurements and from the model results. Table 6.5 shows the comparison of the LHV value of the mixed fuel gas from measurement and simulation for Scenarios 2 and 3.
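A rough check of this assumption, using the LHV levels reported in Section 6.1.7 and, for simplicity, treating the share as a mass fraction (a volume-to-mass conversion would require gas densities):

```python
# Rough check of the "small LNG share" assumption.
lhv_refinery_gas = 38.0  # MJ/kg, assumed typical fuel gas level (Section 6.1.7)
lhv_lng = 45.0           # MJ/kg, constant LNG value from Figure 6.5

for x_lng in (0.0, 0.05, 0.15):  # assumed LNG mass fractions
    lhv_mix = (1 - x_lng) * lhv_refinery_gas + x_lng * lhv_lng
    print(f"x_LNG = {x_lng:.0%}: LHV_mix = {lhv_mix:.1f} MJ/kg")
# Even at 15 %, the mixed LHV shifts by only about 1 MJ/kg (~3 %).
```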
Table 6.6: Verification of model outputs for the fuel gas system against measurement values for Scenarios 2 and 3.

Variable | Scenario 2 Model | Scenario 2 Measured | Scenario 3 Model | Scenario 3 Measured
LHV of fuel gas [MJ/kg] | 38.1 | 38 | 37.2 | 36.6
Percentage of LNG [%] | 2.8 | 5 | 10.9 | 10
Fuel gas flow to the boilers [Nm3/h] | 3276 | 4308 | 3034 | 2803
Total fuel gas flow [Nm3/h] | 44588 | 45620 | 23043 | 22812
A comparison against the original model cannot be made, since the many changes applied in the new model make a comparison between the results from the original and new models not applicable. From Table 6.6, it can be seen that the new model gives an accurate value for the LHV of the fuel gas and the total fuel gas flow; Scenario 3 also shows rather good agreement for the LNG share and the fuel gas flow to the boilers. In Scenario 2, the deviation between the modelled and measured fuel gas flow to the boilers is more pronounced. However, the deviation can be seen as less important when compared to the total fuel gas flows. This deviation is suspected to come from the molecular weight of the refinery gas put into the model. The molecular weight of the refinery gas is not measured, and the assumption of a molecular weight equal to 35 kg/kmol, from Subiaco [5], has been used. The effect of changing the molecular weight of the refinery gas for Scenarios 2 and 3 is further analyzed in Sections 6.4.1.1 and 6.4.1.2.

The molecular weight of the refinery gas could be calculated from the mixed fuel gas composition together with the composition and measured flow of the imported LNG. However, historical data for the LNG composition could not be accessed; only live data was available, and accessing historical composition data for the imported LNG could require approval from the company that sells the LNG to Preem. Information regarding the pressure and temperature at the specific point of interest, i.e. the inlet to the steam boilers, would be needed to get as accurate a value as possible. The LHV of the refinery gas can be calculated once its composition is known. Thus, with the information mentioned, the molecular weight and LHV of the refinery gas could be calculated for previous operational situations. For live data the calculations could be performed as mentioned.
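Assuming ideal mixing on a molar (volume) basis, the back-calculation would look like the following sketch; all numbers are illustrative assumptions, not measured values.

```python
# Sketch of back-calculating the refinery gas molecular weight from the
# measured mixed fuel gas, assuming ideal mixing on a molar (volume) basis:
#   MW_mix = x_LNG * MW_LNG + (1 - x_LNG) * MW_rg

def refinery_gas_mw(mw_mix: float, x_lng: float, mw_lng: float) -> float:
    """Solve the mixing rule for the refinery gas molecular weight."""
    return (mw_mix - x_lng * mw_lng) / (1 - x_lng)

# Hypothetical example: mixed fuel gas of 33 kg/kmol with 10 % LNG
# (mostly methane, assumed ~17 kg/kmol).
print(f"{refinery_gas_mw(33.0, 0.10, 17.0):.1f} kg/kmol")  # ~34.8
```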
It is not adequate to verify the modified model with only two scenarios. Two new scenarios, Scenarios 6 and 10, were therefore created and used to verify the new model. Scenarios 6 and 10 represent normal operating conditions at the beginning of January 2018. The data for Scenario 6 was collected as a one-week average, while the data for Scenario 10 was collected as a one-day average.
Table 6.7: Verification of the model outputs for the fuel gas system against measurement values for Scenarios 6 and 10.

Variable | Scenario 6 Model | Scenario 6 Measured | Scenario 10 Model | Scenario 10 Measured
LHV of fuel gas [MJ/kg] | 38.9 | 38.6 | 39 | 38.8
Percentage of LNG [%] | 6.9 | 8.3 | 7.5 | 8.82
Fuel gas flow to the boilers [Nm3/h] | 1613 | 2358 | 1772 | 2483
Total fuel gas flow [Nm3/h] | 49305 | 50050 | 49052 | 49763
Table 6.7 shows the comparison between the results obtained from the new model and the measured values for Scenarios 6 and 10. Similarly to Table 6.6, the model gave accurate results compared to the measured values for all variables except the fuel gas flow to the boilers. This deviation was expected to originate from the initial molecular weight of refinery gas put into the model. Scenarios 6 and 10 represent the same situation, and since both scenarios gave roughly the same relative difference in fuel gas flow to the boilers, the molecular weight of the fuel gas at this specific time may not have been 35 kg/kmol. The effects of the LHV and molecular weight of the fuel gas were studied further and are discussed in Section 6.4.1.3.
6.4 Validation against operational data
In this validation the values generated by the model are compared with the operational values extracted from the refinery. The scenarios chosen for validation were Scenarios 2, 3, 6/10 and 8/12; more detailed information about settings and values for the different scenarios can be found in Section 5.4.
• Scenario 2: Occurred during the summer period and was chosen due to the high air temperature.
• Scenario 3: Chosen due to the shut-down of both HRSG:s and of the NHTU/Reformer unit.
• Scenarios 6 and 10: High utilization of the refinery and stable operation.
• Scenarios 8 and 12: The FCC unit was shut down.
6.4.1 Results after model changes
The validation is mainly performed using the Excel result interface created by Subiaco. The 10% validation limit is checked for the difference between measured and model output values, and also for the mass balance difference over each header based on measurements.
6.4.1.1 Scenario 2
The results from Scenario 2 are presented in Table 6.8, and Table 6.9 presents the results regarding the mass balances and the 10% validation limit. In Table 6.9, the first column "Error [%]" is calculated using Equation 5.1 for the three headers and Equation 5.2 for the three valves. The second column "Error [t/h]" is the absolute difference: for the headers between inflow and outflow, and for the valves between the measured value and the model output value. This layout is used for the remaining scenario validations.
Table 6.8: Validation results for Scenario 2.
It is clear that most of the results are well within the limits, but there are larger deviations at the MP header. These deviations are assumed to originate from steam tracing, which is mainly taken from the MP level, and are expected since there are a number of unspecified steam consumers at this header.
Table 6.9: Difference between in- and outflow at the headers (rows 1-3) and difference between output and measured values for let-down valves (rows 4-6), calculated by Equations 5.1 and 5.2, and mass flows for Scenario 2.
Parameter Error [%] Error [t/h]
VHP header 0.5 0.9
MP header 5.7 10.7
LP header 1.4 3.1
VHP-MP valve 0.6 0.9
MP-LP valve 3.6 6.7
LP venting 1.6 3.5
Table 6.8 shows the use of fuel gas by all measured consumers. The difference between the measured and output values is small. However, a comparison of the measured flow to the boilers and the model output value is also interesting and indicates how sensitive the fuel gas system is to the LHV and molecular weight of the fuel gas. Table 6.10 shows the change in fuel gas flow to the boilers when changing the molecular weight or the LHV of the refinery gas. The values of LHV and molecular weight from Table 6.10 used for Table 6.8 are those in the second row. As can be seen, small changes in these two variables affect the flow significantly; however, the total consumption of fuel gas remains relatively unchanged due to the size difference between the flows.
Table 6.10: Comparison of values for total flow of fuel gas to the boilers when changing molecular weight and LHV of the refinery gas for Scenario 2.

Case | Flow of fuel gas to boilers [Nm3/h]
Measured value | 4308
MW=35 kg/kmol, LHV=38 MJ/kg | 3276
MW=30 kg/kmol, LHV=38 MJ/kg | 3819
MW=35 kg/kmol, LHV=37 MJ/kg | 3365
6.4.1.2 Scenario 3
The validation of Scenario 3 can be seen in Table 6.11. The output results deviate more than for Scenario 2; the reasons for this are considered to be the shut-down of different area units and the fact that the data for Scenario 3 was extracted shortly after a change in refinery operating conditions and therefore might not represent steady state. The time point used in Table 6.11 is the same as the one Subiaco used.
Table 6.11: Validation results for Scenario 3.
Table 6.12 shows the sensitivity of the fuel gas flow to the boilers with respect to the molecular weight and LHV of the refinery gas; the row "Measured value" presents the value corresponding to the results in Table 6.11. The same pattern as in Table 6.10 can be observed: changes in the molecular weight and LHV of the refinery gas affect the flow of the fuel gas. This implies that caution should be taken when extracting data for the LHV and calculating the molecular weight. It was assumed that one of the reasons why this scenario overestimates the fuel gas consumption of the boilers is that the measured value is too low. When looking closer at the measured values, it was discovered that one of the measured fuel gas values is unrealistically low compared to the steam production. By studying fuel gas consumption at similar loads, it can be concluded that the total consumption of fuel gas by the boilers should be approximately 850 Nm3/h higher than the measured value in Table 6.12. With a corrected measurement, the model's underestimation of the fuel gas consumption is thus more similar to that in Scenario 2.
Table 6.12: Comparison of values for total flow of fuel gas to the boilers when changing molecular weight and LHV for Scenario 3.

Case | Flow of fuel gas to boilers [Nm3/h]
Measured value | 2803
MW=35 kg/kmol, LHV=36.6 MJ/kg | 3034
MW=30 kg/kmol, LHV=36.6 MJ/kg | 3505
MW=35 kg/kmol, LHV=36 MJ/kg | 3085
Figures 6.8, 6.9 and 6.10 show the flow and valve opening percentage for the VHP-MP, MP-LP and LP-vent valves. From these figures, it can be deduced that the system had just made a transition, since the steam flows and valve openings of the MP-LP let-down valve and LP-vent valve clearly changed. The VHP-MP valve can also be considered unreliable since, during that day, it shows no flow although the valve is around 18% open. In Figure 6.8, the brown line represents the valve opening; the orange line, which is not visible since its value is negative and never exceeds zero, is the steam flow.
Figure 6.8: VHP-MP valve for Scenario 3, the brown line is valve opening [%].
In Figure 6.9, the brown line is the valve opening and the blue one is the steam flow. It can also be noticed that the data seems to have been extracted directly after a change of operation. It could be argued that data from before the operational change should be used to ensure the system was in steady state.
Figure 6.9: MP-LP valve for Scenario 3, the brown line is valve opening [%] and the
blue line is steam flow [t/h].
Figure 6.10: LP vent valve for Scenario 3, the pink line is valve opening [%] and the green line is steam flow [t/h].
In Figure 6.10, it can be seen that around the time the data was collected (yellow line in Figures 6.8, 6.9 and 6.10) the LP vent spikes downward, and that just before the data was collected the values were more stable. The green line represents the steam flow and the pink line the valve opening in percentage.

By using data from when the system was in steady state and the default value for the VHP-MP let-down valve described in Section 5.5.1, the result of the manual mass balance can be seen in Table 6.13. Results using the measured values used by Subiaco, when the system had just changed, can be seen in Table 6.14.
Table 6.13: Difference between in- and outflow at the headers (rows 1-3) and difference between output and measured values for let-down valves (rows 4-6) for Scenario 3, with values from before the operational change.
Parameter Error [%] Error [t/h]
VHP 8.8 9.2
MP 6.6 6.3
LP 6.9 13.1
VHP-MP valve 8.9 9.4
MP-LP valve 3.5 3.4
LP venting 4.8 9.1
Table 6.14: Difference between in- and outflow at the headers and let-down valves in percentage and mass flow for Scenario 3, with values from after the operational change.
Parameter Error [%] Error [t/h]
VHP 4.9 5.2
MP 13.4 12.3
LP 3.3 5.8
VHP-MP valve 5.1 5.4
MP-LP valve 7.2 6.6
LP venting 6.6 11.8
Overall, the deviations are smaller when the system is in steady state, but the comparison also shows that the model provides results that are generally acceptable, given that most of the results are within the 10% validation limit. The largest deviations are at the MP header, which is reasonable since there is a high number of undefined consumers and producers at the MP header. In this scenario, parts of the refinery were also shut down for maintenance. Steam is used as cleaning medium during such periods, with some flow meters being by-passed; consequently, an unknown amount of steam will be used but not measured. This is a source of error for all scenarios with larger shut-downs: the cleaning can have a duration of up to three days according to Preem staff, and since all units are not always cleaned at the same time, there can be long periods with unknown steam consumption.
6.4.1.3 Scenarios 6 and 10
Validation results for Scenario 10 can be seen in Table 6.15, and for Scenario 6 in Table 6.16. Scenario 6 covers the same time period as Scenario 10, but the values are averaged over a week instead of over one day.
Table 6.15: Validation results for Scenario 10, with averaging time of one day.
It can be seen that the model results are closer to the measured values for Scenario 6. The reason for this is assumed to be the averaging of the data values: averaging over a week should be more reliable than using a day average, according to Preem staff. The flow through the LP vent valve is similar for both scenarios; however, Scenario 6 is more accurate regarding the steam flow through the VHP-MP and MP-LP let-down valves. This probably originates from the operational setting of the pumps and compressors: some units that might have been averaged to motor mode in Scenario 10 have been averaged to turbine mode in Scenario 6, which is overall a more accurate setting.
Table 6.16: Validation results for Scenario 6, with averaging time of one week.
Tables 6.17 and 6.18 show the results from the mass balances in Scenarios 10 and 6. All values are well within the error limit, which strengthens the suggestion that the model performs within acceptable limits for steady-state operation. The effect of averaging time can also be seen in these tables, as Scenario 6 generally has lower errors than Scenario 10. The variable that stands out in both cases is the LP vent valve flow. As mentioned earlier, this is not surprising, since at the LP header there are a number of unknown steam flows that cannot be measured. Furthermore, the value obtained from the Preem system is not a measurement but a calculation by their process program, which means that there can be doubts about the reliability of this value as well.
Table 6.17: Difference between in- and outflow at the headers (rows 1-3) and let-down valves (rows 4-6) in percentage and mass flow for Scenario 10.
Parameter Error [%] Error [t/h]
VHP 5.9 8.6
MP 3.6 7.2
LP 5.6 11.1
VHP-MP valve 6 8.7
MP-LP valve 7.4 14.6
LP venting 4.7 9.4
Table 6.18: Difference between in- and outflow at the headers (rows 1-3) and let-down valves (rows 4-6) in percentage and mass flow for Scenario 6.
Parameter Error [%] Error [t/h]
VHP 0.6 0.8
MP 4.5 8.5
LP 6 13.2
VHP-MP valve 0.9 1.3
MP-LP valve 1.7 3.3
LP venting 5.8 12.8
A sensitivity analysis for the molecular weight and LHV of the refinery gas was carried out for Scenarios 6 and 10, as was done for the other scenarios, and the results can be seen in Tables 6.19 and 6.20. The row "Measured value" in both tables represents the molecular weight and LHV used to obtain the results in Tables 6.15 and 6.16. The difference in measured value for the fuel gas system arises because the boilers produce more steam in Scenario 10 than in Scenario 6. The results from Tables 6.10 and 6.12, together with the results from Tables 6.19 and 6.20, imply that the fuel gas system is sensitive to changes in the molecular weight and LHV of the refinery gas. Since the pattern was obvious, this comparison was omitted in Section 6.4.1.4.
Table 6.19: Comparison of values for total flow of fuel gas to the boilers when changing molecular weight and LHV for Scenario 10.

Case | Flow of fuel gas to boilers [Nm3/h]
Measured value | 2483
MW=35 kg/kmol, LHV=38.8 MJ/kg | 1772
MW=30 kg/kmol, LHV=38.8 MJ/kg | 2052
MW=35 kg/kmol, LHV=38 MJ/kg | 1807
Table 6.20: Comparison of values for total flow of fuel gas to the boilers when changing molecular weight and LHV for Scenario 6.

Case | Flow of fuel gas to boilers [Nm3/h]
Measured value | 2360
MW=35 kg/kmol, LHV=38.6 MJ/kg | 1613
MW=30 kg/kmol, LHV=38.6 MJ/kg | 1870
MW=35 kg/kmol, LHV=38 MJ/kg | 1639
while NMV stands for "new model version". The common result for both scenarios is that both the new and the original model perform well regarding the feed and make-up water. It can also be said that the original model version performs better overall for Scenario 3 than the new model version. However, this is largely due to the large changes in steam tracing consumption that Subiaco used as a tuning parameter to fit the model to each scenario. The accuracy of the prediction of fuel gas consumption by the boilers is discussed at the end of this section.
Table 6.25: Comparison between validation results from original and new model versions for Scenario 2.
Table 6.26: Comparison between validation results from original and new model versions for Scenario 3.
The results displayed in Tables 6.25 and 6.26 indicate that the modifications implemented in the new model have improved the ability to predict the outcome of different scenarios without manually changing data and assumptions between the scenarios. A model that generically fits more scenarios is more reliable and also easier to use, since the possibility of mistakes when analyzing new operational scenarios is smaller.
A further comparison of the mass balances presented in Tables 6.27 and 6.28 shows, based on the 10% validation limit, that for the stable operational situation (Scenario 2) the deviations are of the same magnitude for the new and original models, while for Scenario 3, with operational disturbances, the new model version has greater errors than the original model. This indicates that the new model is well adapted for operational scenarios with high utilization of refinery capacity, while for scenarios with operational disturbances, such as shut-downs of different areas, the new model is less reliable. However, the new model has been tested against more scenarios than the original model, the data and the modelling are more consistent between scenarios, and the model is better adapted to handle new operational cases.
Table 6.27: Difference between in- and outflow at the headers (rows 1-3) and let-down valves (rows 4-6) in percentage and mass flow for Scenario 2, original model values in parenthesis.
Parameter Error [%] Error [t/h]
VHP 0.5 (1.8) 0.9 (2.9)
MP 5.7 (2.5) 10.7 (4.7)
LP 1.4 (1.4) 3.1 (3.1)
VHP-MP valve 0.6 (1.8) 0.9 (2.9)
MP-LP valve 3.6 (0.9) 6.7 (1.6)
LP venting 1.6 (0.7) 3.5 (1.5)
Table 6.28: Difference between in- and outflow at the headers (rows 1-3) and let-down valves (rows 4-6) in percentage and mass flow for Scenario 3, original model values in parenthesis. For the new model version, the result from after the operational change is displayed.
Parameter Error [%] Error [t/h]
VHP 4.9 (2.4) 5.2 (2.6)
MP 13.4 (0.8) 12.3 (0.8)
LP 3.3 (3.6) 5.8 (6.1)
VHP-MP valve 5.1 (3.2) 5.42 (3.3)
MP-LP valve 7.2 (4.8) 6.6 (4.6)
LP venting 6.6 (0.1) 11.8 (0.2)
The original model version was validated against four different scenarios created by Subiaco; the new model has been thoroughly validated against two of the scenarios created by Subiaco and against four other scenarios created by Gunnarsson and Kobjaroenkun. Seven more scenarios were created by Gunnarsson and Kobjaroenkun, and the new model was tested against these as well, although not as thoroughly as described in Section 6.4. This indicates, as mentioned earlier, that the new model version is adapted to more operational situations but performs best when the refinery operates at high utilization of its capacity.
A comparison of the fuel gas consumption of the boilers between the new and original models is difficult since, as described in Section 6.1.6, the original model used the wrong conversion factor and model formulation. The new model version predicts the composition of the fuel gas well. There are some deviations between the new model
7
Optimization mode
production. However, these results are based on unrealistic energy prices and serve only to illustrate the extremes of the solution price.
7.2 Optimization mode running from Excel interface
The Aspen Utilities Planner Excel Add-in allows the user to run the simulation and view results from within Microsoft Excel. In previous work, an Excel workbook for steam model simulation was created, which works properly in scenario mode. However, enabling optimization mode to run through the Excel interface was also desired.

Connecting the Aspen Utilities Planner interface to the Excel interface for optimization mode was achieved, and the first run in optimization mode through Excel was performed with the same data and constraints as used in Aspen Utilities Planner; the results are identical to Table 7.1.

It can be concluded that the Excel workbook and Aspen Utilities Planner are now properly interlinked and that the model can be run in optimization mode from Excel. The results obtained from both interfaces are exactly the same for identical data input and constraints. Solving the model through the Excel interface has a number of advantages. It is more convenient and easier to use Excel, since the user can design a workbook representing the current operating conditions of the steam network and link it to Aspen Utilities Planner. Another advantage is that the data editor in the Excel interface allows the user to change constraints freely without changing the original constraints in Aspen. Consequently, the user can always test new constraints and easily revert to the original ones.
7.3 Scenarios in optimization mode
When testing the optimization mode of Aspen Utilities Planner and the Excel interface, different scenarios were used: the same scenarios as in the validation in Section 6.4, except that Scenarios 3 and 6 were excluded. Scenario 3 was excluded due to the poor validation results, the unsteady-state operation of the refinery at that moment, and the poor measurement value for the fuel gas consumption of one of the boilers. When using the optimization function with Scenario 6, it was discovered that the same problem as for Scenario 10 existed: the boilers in operation have loads below the limits that the optimization function uses. Due to the similarity to Scenario 10, it was decided to omit this scenario. It was decided not to add new scenarios to the optimization, since the remaining scenarios still represented both stable operation and partly shut-down operation of the refinery.
7.4 Optimization results
In order to make the most accurate comparison between actual operation and the result from the optimization, the prices of electricity and LNG at the specific time point represented by each scenario are used. The electricity prices, as spot market prices excluding taxes and fees, were retrieved from the NORDPOOL website [19], and the LNG prices were obtained using Eurostat [20]; conversion to SEK/GJ was done using exchange rates from Forex [21]. The LNG price for Scenarios 6 and 10 was not available, so it was estimated to 10 €/GJ from the scenarios where price data was available. The resulting prices can be seen in Table 7.2.
Table 7.2: Prices for electricity and LNG at the specific time for these scenarios.
Scenario Electricity [SEK/MWh] LNG [SEK/GJ]
2 84.9 102.9
10 318.1 104
8/12 276.9 98.7
7.4.1 Scenario 2
Results from using the optimization function for Scenario 2 are presented in Table 7.3, which compares the values of certain variables.
Table 7.3: Results from scenario simulation and optimization together with measured
values from the refinery, for Scenario 2.
Variable Scenario mode Optimized mode Measured Values
Cost Electricity [SEK/h] 404.4 426.5 -
Cost LNG [SEK/h] 5162 161.8 -
Total cost [SEK/h] 5566 588.4 -
LP venting [t/h] 25.5 0 29
VHP-MP valve [t/h] 23.9 8.9 23
MP-LP valve [t/h] 15.1 6.1 21.8
Total boiler production [t/h] 62 39.4 62
Table 7.3 shows that optimizing the operation is estimated to decrease the total cost for the utility system by 4978 SEK/h. It must be noted that the results from the optimization may not be fully optimal: when the model was run in optimization mode, the LP venting showed a negative value of around -0.5 t/h, and the script meant to handle this problem, described in Section 6.2, was activated. This script is triggered only when the optimization results show a negative flow through the LP venting valve, and it sets the LP venting flow to zero by adding the deficit amount of steam to one of the boilers.

As expected when the solver significantly decreases the steam production, the net change for pumps and compressors at the VHP header is 395.7 kW switching from turbine mode to motor mode; in total, 15 out of 52 units switched mode. The combination of operational modes for the pumps and compressors suggested by the solver can be seen in Figure 7.1.
When running the optimization solver more than once on the same scenario after convergence, it was discovered that different solutions were obtained. Figure 7.2 shows the pump and compressor settings for an alternative solution to Scenario 2. The net change in power demand is the same, 395.7 kW changing from turbine mode to motor mode at the VHP header, and the economic difference is negligible.

There are two possible reasons for obtaining different solutions with very similar operating costs: either the difference in values between the two solutions falls within the error tolerance set in the solver, or the solver got stuck in a local minimum. Decreasing the tolerance level did not have any effect, and verifying whether the solver got stuck in a local minimum was not possible. One indication is that there are different ways of adjusting the operational settings of the pumps and compressors to achieve the same (or very nearly the same) reduction of utility cost. Even if this means that it is not possible to identify one optimal way of operating the system, the utility cost suggested by the solver can be seen as a target that can be achieved by making changes in the operational settings of pumps and compressors. However, a number of simulations were needed to achieve convergence. This was concluded to be because the solver got stuck in local minima, the indication being that after each simulation a further change in the total utility cost was observed. After convergence, the optimizer cannot decrease the use of LNG further, since it is already approaching zero, as can be seen in Table 7.3.
Figure 7.1: Changes of operational mode for pumps and compressors after optimization,
for Scenario 2.
Figure 7.2: Changes of operational mode for pumps and compressors after the second
optimization for Scenario 2.
It should also be noted that, although the net change in power demand is from turbine drive to motor, some units are switched in the other direction. The reason for this is the fixed power load of the majority of the turbines in the system, which means that changes in steam flows occur in discrete intervals. Reaching a certain steam balance therefore requires a mix of turbines in operation whose summed fixed loads come as close as possible to the desired total steam flow.
7.4.2 Scenario 10
The optimization results for Scenario 10 can be seen in Table 7.4. The results show that the total utility cost was lower in the scenario simulation than in the optimization. However, the steam production from the boilers calculated in scenario mode is below the minimum load that constrains the optimization. If the minimum load constraints are relaxed to the production values calculated in scenario mode, the optimizer provides a lower utility cost, as seen in the last column of Table 7.4.
Table 7.4: Comparison of values before and after running optimization to measured
values from Preem refinery, for Scenario 10.

Variable Scenario mode Optimization mode Measured values Adjusted minimum loads on boilers
Electricity cost [SEK/h] 1552 1215 - 1257
LNG cost [SEK/h] 15358 15878 - 15358
Total cost [SEK/h] 16910 17094 - 16615
LP venting [t/h] 23.4 25.1 14.5 22.5
VHP-MP valve [t/h] 15.5 10.6 6.8 10.2
MP-LP valve [t/h] 17.2 6.4 3.1 7.1
Total boiler production [t/h] 33.7 36 33.7 33.7
Hence, using the original load constraints for the boilers, the solver cannot find a solution
that provides a lower utility cost than the one from the scenario simulation; applying a
lower bound equal to the operating value in scenario mode for each boiler, the optimizer
does find a lower cost. This shows that small adjustments of the constraints can lead to
small changes in loads that have a crucial effect on the marginal fuel consumption and
thereby the costs. According to Preem staff, it is not impossible to operate the boilers at
loads lower than the minimum load given in Table 6.2, but the general limits should be
those of Table 6.2.
In Figure 7.3, the changes in the operational mode for pumps and compressors after
optimization can be seen. For the units connected to the VHP header, the net change
in operational power demand is 593.3 kW from motor mode to turbine mode. This is as
expected, since steam was in excess in this scenario and the optimal solution utilizes the
steam better. In Figure 7.4, the changes in operational mode for pumps and compressors
can be seen when using the adjusted lower minimum load limit for the boilers. In this
case, the net change in operational power demand is 484.3 kW from motor to turbine
mode. The differences between the two cases are small, and the solver optimizes power
demands in a similar way as in the optimization case described in Section 7.4.1. The
total number of units that changed operational mode was 13 out of 52 in both cases.
Figure 7.4: Changes of operational mode for pumps and compressors after optimization,
for Scenario 10, with lower minimum limit on the steam boilers.
7.4.3 Scenario 8 and 12
Table 7.5 shows the results from the optimization of Scenario 12. The utility cost decreases
by 1051.6 SEK/h, and the unused steam that flows through the let-down valves also
decreases compared with the solution from the scenario simulation. However, the steam
vented to the atmosphere is as high as in Scenario 10. This is because there is no further
value in achieving steam savings once the boilers are operating at their minimum load
capacity. In Scenario 2, the availability of refinery gas in relation to the process steam
demand is lower compared to the other scenarios, and therefore the steam flow through
the LP vent is low, while for Scenarios 10 and 12, the LP vent flow is relatively high.
Table 7.5: Comparison of values before and after running optimization to measured
values from Preem refinery, for Scenario 12.
Variable Scenario mode Optimization mode Measured Values
Cost Electricity [SEK/h] 1365 1243 -
Cost LNG [SEK/h] 21610 20762 -
Total Cost [SEK/h] 22975 22005 -
LP venting [t/h] 29.3 24.3 0.5
VHP-MP valve [t/h] 18.8 9.9 3.2
MP-LP valve [t/h] 13.3 7.5 7.6
Total boiler production [t/h] 40 36 40
In Figure 7.5, the changes in operational mode for pumps and compressors can be seen.
The net change for units connected to the VHP header is 349 kW of power switched from
motor to turbine drive, which is expected since the LNG price is low and the electricity
price is high. Only a few pump settings change, but the units that are involved have
high power demands.
Figure 7.5: Changes of operational mode for pumps and compressors after optimization,
for Scenario 12.
In Table 7.6, the results for Scenario 8 can be seen. The results are, as expected, similar
to those for Scenario 12; the small differences that exist are assumed to come from the
different input values.
Table 7.6: Comparison of values before and after running optimization to measured
values from Preem refinery, for Scenario 8.
Variable Scenario mode Optimization mode Measured Values
Cost Electricity [SEK/h] 1244 1240 -
Cost LNG [SEK/h] 12347 10244 -
Total Cost [SEK/h] 13591 11484 -
LP venting [t/h] 32.6 22.1 1
VHP-MP valve [t/h] 18.2 10.2 3.9
MP-LP valve [t/h] 12.8 7.8 6.9
Total boiler production [t/h] 45.1 36 45.1
In Figure 7.6, the changes in the operational mode for pumps and compressors can be seen.
8
Using the model as a decision support tool
This chapter discusses how to use the new model from the Excel interface, as well as
which aspects to investigate further when deviating results are obtained.
8.1 Scenario mode
The purpose of using the simulation model in scenario mode is to see how well the model
reflects the real situation at the refinery at a specific time. If the results from the scenario
mode simulation are not accurate, the model cannot be used in optimization mode.
The first important step when using the simulation tool is to understand which variables
to investigate and how to prioritize them when the model is not accurate. Thus, after the
simulation has finished, the user should go to the 'Validation' spreadsheet, which compares
calculated values of selected free variables with measurement values obtained from Preem's
system. Table 8.1 shows a typical data validation sheet from the Excel interface.
Table 8.1: Validation table for checking accuracy.
Values within the blue and red rectangles are the values for the steam system and the fuel
gas system, respectively. In the blue rectangle, the first two rows are the total feedwater
and freshwater make-up to the steam system. The next three rows are the steam flows from the
venting valves for the LP and VHP level headers; these flows leave the system. The rest
of the rows within the blue rectangle are the steam flows through valves and turbines
between the headers. Overall, if these water and steam flow values from the model and
the measurements are within the limits of Equation 5.2 and have an absolute error of less
than 5 t/h, it can be concluded that the model reflects reality for the steam system in
the simulated scenario.
On the other hand, the red rectangle contains values from the fuel gas system, namely the
total fuel gas used and the LNG percentage in the fuel gas. It should be noted that if there
is a deviation between measured and model values for the total fuel gas used, this deviation
comes from the fuel flow to the boilers only, since the rest of the flow is set to a fixed value
representing the other fuel gas consumers. The mismatch of this value corresponds to the
error in LNG percentage as well. This error is expected to come from the molecular weight
of the fuel gas in the model, which is set to 35 kg/kmole and is not measured at the
refinery. One should carefully re-check the molecular weight of the fuel gas for the specific
scenario before using the model and, if possible, the LHV should also be checked. However,
since the fuel gas system and the steam system are modelled separately, the error from
one part does not affect the accuracy of the other in scenario mode.
However, in optimization mode, an error in the fuel balance could affect the optimal
solution if the LNG share is close to 0%. In such cases, the optimal solution is likely to
involve reducing the fuel flow to the boilers until the LNG share equals zero. The point
at which this occurs depends strongly on the modelled fuel gas balances and is thereby
affected by errors in the fuel gas model.
Variables that usually deviate from the measured values are the LP venting, the steam
flow through the VHP-MP vent and the steam flow through the MP-LP vent. There are
several possible sources for these deviations. Firstly, the deviation can come from the
internal production and consumption of steam at the overhead header, so one should
compare the steam flows for these units to the valve openings. Since the valve opening
percentage is more reliable, if the steam flow value seems unreasonable, a regression
equation for the flow based on the valve opening should instead be used to predict the
steam flow through the valve; a sketch of such a regression is given below. Secondly, the
deviation can come from averaging the usage of the pumps and compressors, which is
further explained below. Deviations in the fuel gas system are usually connected to the
production of the boilers and to the LHV and molecular weight of the refinery gas. Some
of these investigations require access to the refinery database, which is not always
possible. It is therefore important to collect as much data as possible and to have data
for checking these variables available.
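As an illustration of such a regression, a simple linear least-squares fit of flow against
valve opening could look as follows. This is a sketch only; the data values, the linear
form and the variable names are assumptions, and any fit validated against plant data
could be substituted:

import numpy as np

# Historical samples where both signals are trusted (hypothetical data).
valve_opening = np.array([10., 25., 40., 60., 80.])   # [%]
steam_flow    = np.array([1.2, 3.0, 4.9, 7.4, 9.8])   # [t/h]

# Least-squares linear fit: flow ~ a * opening + b.
a, b = np.polyfit(valve_opening, steam_flow, 1)

def predict_flow(opening_percent):
    """Predict steam flow [t/h] from valve opening [%]."""
    return a * opening_percent + b

print(predict_flow(50.0))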
When extracting data, the averaging must be considered. As can be seen from
Scenarios 6 and 10, there is a difference between using a daily or a weekly average, and
other periods can of course also be used. When averaging, there are a number of variables
to pay extra attention to. Cross-referencing steam flows against pump and compressor
settings is important. Steam production can peak, and for a short while a high pump
power can be required. The peak in steam production can affect the average quite strongly, while a
short period of pump operation will have only a small effect on the average. Thus,
deviating results can originate from the averaging, especially from averaging the
operational settings of pumps and compressors, as the sketch below illustrates.
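The sketch below illustrates the effect with hypothetical 10-minute data: a short
production peak barely moves a weekly average but is clearly visible in the daily average,
while a pump that only ran during the peak almost disappears in both. All numbers are
illustrative, not Preem data:

import pandas as pd

# Hypothetical 10-minute steam production data over one week [t/h].
idx = pd.date_range("2018-01-02", periods=7 * 144, freq="10min")
flow = pd.Series(30.0, index=idx)
flow.iloc[100:106] = 60.0  # a one-hour production peak

daily = flow.resample("D").mean()   # daily averages: peak day stands out
weekly = flow.mean()                # weekly average: peak is nearly invisible
print(daily.round(2).to_dict(), round(weekly, 2))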
8.2 Optimization mode
Prior to optimization, the user needs to check all the constraints in the 'Demand', 'Avail-
ability' and 'Energy Cost Summary' spreadsheets. For example, the demands and supplies
of steam for each unit need to be specified for the scenario to be optimized. To reduce the
risk of error when entering all the constraints, the prepared spreadsheets built by
Gunnarsson and Kobjaroenkun are set to update automatically when changing the
scenario. The only spreadsheet that the user needs to change is 'Energy cost summary',
which contains the electricity price and LNG price used by the solver. Table 8.2 shows the
Excel sheet where the user needs to correct the prices for each scenario before optimization.
Table 8.2: Energy prices in Energy Cost Summary.
According to Table 8.2, the two red marks are the cells containing the prices, with the
electricity price in [kr/MWh] and the LNG price in [kr/GJ]. The optimizer approaches
the optimal result by evaluating the operating costs and then proposes possible operating
conditions for the boilers, pumps and compressors according to the constraints.
Furthermore, when using the optimization function it is important to keep the load
constraints of the boilers in mind. If the actual boiler operation has a load lower than the
total minimum load applied in the constraints, then it is possible that the solver cannot
find a solution that provides a lower utility cost. If such a case occurs, the user needs to
reduce the minimum load of the operated boilers to the actual operating value obtained
from Preem's system by editing the 'Availability' spreadsheet. It can also be necessary to
run the optimization function more than twice, since the solver usually converges either
when the total minimum steam production at the boilers is reached or when the import
of LNG approaches zero; if neither of these conditions is reached at the first simulation,
more simulations are needed for convergence, as sketched below.
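The re-run procedure can be sketched as follows. The function and variable names are
hypothetical; the actual runs are started manually from the Excel interface:

def optimize_until_converged(run_optimization, max_runs=5, tol=1.0):
    """Re-run the solver until the utility cost stops improving, as described
    above. `run_optimization` is a hypothetical callable returning
    (total_cost_SEK_per_h, lng_flow) from one solver pass."""
    prev_cost = float("inf")
    for run in range(max_runs):
        cost, lng_flow = run_optimization()
        if prev_cost - cost < tol or lng_flow <= 0.0:
            break  # converged: no further gain, or LNG import already zero
        prev_cost = cost
    return cost, lng_flow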
Additionally, it should be ensured that the solver can only adjust the settings of pumps
and compressors that are considered possible to switch. This can be edited within the
'Availability' spreadsheet. Having too many units set as "Available" can produce results
in which an unrealistic number of units are switched; thus, the user should set the units
whose effects are to be investigated as "Available" and the remaining units as either "Must
Be On" or "Not Available", depending on the operational mode.
It is also important to keep in mind what kind of operational scenario is investigated; if
the refinery is partly shut down, the power demands of the pumps may not be accurate.
A shut-down can decrease the power demand of pumps and compressors to 75% of
maximum capacity, thus affecting the steam flows. These power demands also matter
because the solver may change a number of units only to gain a small net change in
power; if the power demands are not correct, the changes suggested by the solver will
not be accurate. The remark about changed power demands applies also when using the
model in scenario mode.
It should be noted that it is not always possible to obtain realistic values, especially for
the LP-vent valve. After optimization, the steam flow through the LP-vent valve can
become negative. Controlling the steam flow through the LP-vent valve is therefore
important: if this value becomes negative, the script described in Section 6.2 should be
activated. The negative flow of steam is then added to the steam production at the
boilers, and the steam let to the atmosphere becomes positive and close to zero. The
resulting solution is not optimal but still provides a lower utility cost than the actual
operational situation.
When investigating the results from the optimization solver, a closer look at the steam
flows through the let-down valves is recommended. Low steam flows through these
valves indicate that steam is utilized efficiently and that overproduction is small. If there
are large flows of steam through the let-down valves, a closer investigation of the pumps
and compressors at the header in question is appropriate, as well as a check of the boilers.
9
Summarizing discussion
In this chapter, a summary of the strengths and limitations of the new version of the
steam system model for Preemraff Lysekil is presented, together with some suggestions
for further developments that could improve the model.
9.1 Improvements, strengths and limitations
The improvement of model parameters, including process steam flows, has focused on
parameters that are of significant importance for the steam system and on the mass
balances. The goal was to construct a model that stays within the desired error limit for
different operational situations. Furthermore, the use of the model and the extraction of
its results through the Excel interface have been eased significantly.
The main improvements to steam system variables and process steam flows are presented
in Chapter 6. These changes have made the model more representative of real operating
conditions and constraints; for example, the amount of refinery gas flow cannot be reduced
further. Whereas previously the marginal change in fuel gas consumption had to be
translated to a change in LNG consumption outside of the model, this is now internalized
in the main steam system model.
A change in the feed water temperature in the model has a large impact on the fuel gas
system and the boilers. This change, together with the adjustments of the fuel gas system,
removed the need for tuning constants other than the efficiency. For example, the
performance factor that was used as a tuning parameter in the original model has been
removed and set to its default value. Decreasing the number of fixed variables and
replacing them with confirmed system conditions is also considered an improvement that
makes it easier to understand and interpret the model parameters.
The changes connected to pumps and compressors are more of a tuning character. The
addition of the by-pass flow concerns quite a small flow of steam compared to the
production of steam at each header, but it is a confirmed flow that had not been accounted
for and can be seen as marginal fine tuning. Larger effects from the changes to the pumps
and compressors come from the removal of power demands connected to pumps that are
usually not in operation or that have more than two operational alternatives, as described
in Section 6.1.3. This change concerns pumps with large power demands, such as
PT-3202B (640 kW) and PT-2307B (363 kW). Including these power demands could have
been acceptable in the model if they were set to "Not Available", in which case they would
not affect the steam system.
However, in that case, they would imply an electricity consumption instead; this would
not affect the optimal solution, but its cost value would be incorrect due to the wrongly
calculated electricity costs.
Process steam flows that were calculated incorrectly in the original model have now been
corrected or given updated values according to Preem staff. The steam consumption
decreased when these corrections were implemented, but there were no more known steam
demands; thus, as described in Section 5.5.1, an unspecified consumption of steam was
added to the model, together with additional undefined flows of steam between the
headers. This is not an ideal approach, but no further measurements of steam flows are
available, and it is known that there is steam consumption that is not measured. In the
new model version, steam is thus not attributed to the wrong consumer, and the
unmeasured steam consumption more clearly works as a tuning parameter.
All work with the model can be handled from the Excel interface. This decreases the risk
of handling errors, since Excel is more familiar to Preem staff. Import of data for running
simulations is also done through Excel, and the interface is built so that it is easy and
convenient to copy and paste the required data between the sheets. There are a number
of steps to keep in mind, but it is still more effective and user friendly than working from
both Aspen Utilities Planner and Excel at the same time, or from Aspen Utilities Planner
alone.
The validation results show that the model performs well during stable operational
situations, i.e. when no parts of the refinery are shut down and there are no major
transitions between different operating modes. The tables in Section 6.4, showing the
errors at the headers and the let-down valves, support this, as the trend is that the errors
increase for the scenarios with areas shut down. The decrease in performance is assumed
to be connected to the degassing of process equipment during shut-down periods: during
this process the steam flow meters are by-passed, and it is difficult to estimate how much
steam is consumed by each area unit.
The results from the optimization function show that the optimizer works as expected:
the utility cost decreases compared to scenario mode. However, the solution from an
optimization depends strongly on a few important constraints in the model, especially
the minimum load of the steam boilers. Consequently, it is important to remember that
if the operational situation shows that the boilers, for example, produce less steam than
the minimum load for the specific boiler configuration, then the constraints should be
changed so that the economic comparison is made on the same premise. For the scenarios
investigated in Section 7.4, the optimization model had many pumps and compressors in
"Available" mode, which is why the solver changes so many of them. In practice, more
units should probably be set as "Must Be On", so that the solver only works with a
handful of pumps and compressors; the achievable decrease in utility cost will then
probably be smaller. However, it would be more realistic to change the operational mode
of only 3-4 units instead of the roughly 15 suggested in some cases in Section 7.4.
9.2 Further developments
Future work on this model should focus on the operational situations when parts of the
refinery are shut down, since for these situations the model results deviate the most from
measured values. However, this is also when the data is less accurate, since the refinery
decreases production. Furthermore, during shut-down scenarios, the power demands of
the pumps and compressors should be investigated. It is possible that they operate at
lower capacity rates, while the model currently assumes close to full load also during
shut-down scenarios.
Another development would be to specify uncertain steam consumers, such as steam
tracing, and to estimate leakages and small steam flows between the headers. This would
decrease the values for unknown steam consumers and for the steam flows between the
headers, which would make the model more reliable.
Apart from the developments of the steam system, it would be valuable to further
investigate the fuel gas system. In the model, only three boilers and a lumped group of
other fuel consumers are connected to the fuel gas system, and since these are modelled
by fixing the amount of steam generated, further modifications of the fuel gas system
would not influence the accuracy of the steam system as a whole. Due to a limitation of
the program, the density of the gas cannot be entered directly when modelling the fuel
gas system; instead, the molecular weight and LHV of the fuel gas are needed. Currently,
the molecular weight of the fuel gas fed to the boilers is not measured, and the value in
the model is 35 kg/kmole according to Subiaco [5]. A small change in the molecular
weight of the fuel gas strongly affects both the fuel gas flows to the boilers and the
proportion of LNG in the fuel gas. Thus, the accuracy of the fuel gas system can be
improved by determining the molecular weight of the fuel gas carefully. Obtaining the
correct molecular weight of the refinery gas requires the composition of the imported
LNG and the pressure and temperature of both the LNG and the refinery gas. With
these properties, densities and conversion factors for the flows can be found; since the
flow of mixed LNG and refinery gas is measured, the molecular weight of the refinery gas
can then be found by subtracting the imported LNG and calculating the mass flows of
the different components in the refinery gas.
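As a sketch of this calculation, the molecular weight of a gas mixture follows from its
composition as a mole-fraction-weighted average. The composition below is purely
illustrative, not measured Preem data:

# Molecular weights of typical refinery-gas components [kg/kmol].
MW = {"H2": 2.016, "CH4": 16.04, "C2H6": 30.07, "C3H8": 44.10, "N2": 28.01}

def mixture_molecular_weight(mole_fractions):
    """Mole-fraction-weighted molecular weight of a gas mixture [kg/kmol].

    `mole_fractions` maps component -> mole fraction (must sum to 1)."""
    assert abs(sum(mole_fractions.values()) - 1.0) < 1e-6
    return sum(MW[c] * x for c, x in mole_fractions.items())

print(mixture_molecular_weight({"H2": 0.30, "CH4": 0.45, "C2H6": 0.15,
                                "C3H8": 0.05, "N2": 0.05}))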
Another factor that greatly affects the fuel gas flows is the LHV of the fuel gas itself. In
the model, only the mass-basis LHV can be used, but in reality the LHV of the fuel gas
is measured on a volume basis and is converted using the density of the fuel gas. However,
the measured density of the fuel gas can sometimes go down to 0.5 kg/m3 according to
the tag value, which is considered unreasonably low. If such an unrealistically low density
is used to convert the LHV to a mass basis, the mass-basis LHV becomes unrealistically
high and causes a large deviation in the fuel gas flow to the boilers. If such a situation
occurs, one should further investigate what the actual LHV is at that specific moment.
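A minimal sketch of the unit conversion and its sensitivity, assuming illustrative
numbers only:

def lhv_mass_basis(lhv_volume_mj_per_m3, density_kg_per_m3):
    """Convert a volume-basis LHV [MJ/m3] to mass basis [MJ/kg]."""
    return lhv_volume_mj_per_m3 / density_kg_per_m3

# Illustrative: a plausible density versus the suspiciously low tag value.
print(lhv_mass_basis(40.0, 0.85))  # ~47 MJ/kg, a reasonable magnitude
print(lhv_mass_basis(40.0, 0.5))   # 80 MJ/kg, unrealistically high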
10
Conclusion
In this master's thesis, a model of the steam utility system of the Preem refinery in Lysekil
has been further improved and developed. Model assumptions, parameters and functions
concerning the equipment in the steam network, steam consumption and production, and
the fuel gas system have been investigated. Furthermore, the model has been validated
against new data scenarios extracted from Preem's database, and the Excel user interface
has been extensively developed.
The validation results for the latest model version show that the steam model and the
fuel gas system have become more reliable during stable, full-production operation of the
refinery. The model can be solved in optimization mode, for which the results provide a
lowered utility cost for the tested operational scenarios. The improved Excel user interface
can be used to run the model in both scenario and optimization mode. Moreover, current
operating conditions can conveniently be imported to the interface and simulated. The
model user guide provides descriptions of how to import data, run the model and interpret
the results.
The model can be used to predict how changes in the LNG and electricity prices influence
the operation of the steam system, i.e. how the steam system, including the operational
settings of pumps and compressors, could be operated during, for example, periods of
high electricity prices. Another use of the model is to investigate operational changes
without testing them in reality. The optimization function can be used to study small
changes in the system, changing only one or two pumps, but also situations when several
changes between motor and turbine mode are needed. In a research context, the model
can be used to study how increases and/or decreases in steam production or consumption
affect the utility cost; such changes can result from, for example, retrofits of heat
exchanger networks or an expansion of the refinery.
A
Running optimization mode through Excel interface
This appendix briefly explains how to perform an optimization mode simulation through
the Excel interface. With the add-ins function in Microsoft Excel, 'Utilities340' allows
the simulation to be run in both scenario mode and optimization mode through Excel.
The following steps briefly describe how to open the Excel file with a connection to Aspen
Utilities Planner:
1. Open the Microsoft Excel file named STEAM.MODEL_LYSEKIL_Final
2. Go to the installation drive for Aspen Utilities Planner and open utilities340.xla
to enable the macro.
Default location: ProgramFiles\AspenTech\Aspen Utilities Planner V8.8\bin
3. Click on Aspen Utilities in the ADD-INS menu bar, then select 'Open Aspen Utilities'
and choose the Aspen Utilities Planner file STEAM.MODEL_LYSEKIL_Final
4. Select Show Aspen Utilities if the user wants to see the Aspen Utilities Planner interface.
At this stage, the Excel file with a connection to the Aspen Utilities Planner interface is
ready to be used for scenario mode simulation. The next steps describe how optimization
mode can be run in this model:
1. Click on 'Aspen Utilities' and, in the list, choose 'Editors' under 'Optimization', as
can be seen in Figure A.1.
Figure A.1: Retrieving constraints from Aspen Utilities Planner to Excel
2. The program will ask if the user wants to create the new data sheet containing
Predictive Longitudinal Control of Heavy-Duty Vehicles Using a Novel Genetic
Algorithm and Road Topography Data
FREDRIK HOXELL
Department of Applied Mechanics
Chalmers University of Technology
Abstract
Fuel costs account for approximately one third of the total costs of haulage contractors.
This makes it very lucrative, from both the contractors' and hence Scania's perspective,
to reduce the vehicles' fuel consumption. With the limited power-to-mass ratio of
heavy-duty vehicles, anticipatory control is crucial for fuel- and time-efficient
manoeuvring. Solutions addressing this problem are already in production, but with
ever-increasing system complexity the usefulness of conventional mathematical methods
is suffering. As an alternative approach, this thesis investigates the applicability of a
real-time genetic algorithm (GA) to the domain of longitudinal control of heavy-duty
vehicles for fuel-saving adaption to road topography data. Since GAs are known to be
computationally heavy, an algorithm that is as lightweight as possible is developed,
aimed at optimising the engine torque by model predictive control. The final algorithm
uses a vehicle prediction model based on fuel-consumption data, including a gear
prediction model. Validated through simulation, this novel approach displays a clear
improvement over a similar MPC controller utilising a QP solver and a cost function
similar to that of the GA.
Keywords: Adaptive, Look-ahead, Cruise Control, Genetic Algorithm, Quadratic
Programming, Heavy-Duty Vehicles, Model Predictive Control
1
Introduction
Scania has a central role in the development of safer and more sustainable commercial
transports. Today Scania offers driver assistance solutions such as Advanced Emergency
Braking and Look-Ahead Cruise Control (LACC), while research is conducted in areas
such as platooning and autonomous driving in traffic jams.
The conducted research indicates that new technological solutions have the potential
to lower fuel consumption by 15% (e.g. platooning) and, for autonomous driving in
traffic jams, this figure could be as high as 18% [1, 2]. In addition to improved fuel
economy and thus reduced environmental impact, vehicles capable of switching into a
mode of autonomous driving could increase the efficiency of the driver and reduce the
risk of human errors.
It will be some time before fully automated vehicles reach the market, and currently
there is a continuous transition in which vehicles are augmented step- or functionality-
wise as subsystems are being automated and, in many cases, made more interconnected.
One such system is cruise control, which for many years has been a widely implemented
driver assistance system that aims to keep a constant cruise speed. This system,
however, is challenged by the more recent adaptive cruise control (ACC). In cars, this
generally means adapting to the speed of the vehicle ahead while keeping a safe distance
[3]. For heavy-duty vehicles, on the other hand, the limited motor power and potentially
heavy load markedly limit the speed and acceleration of the vehicle, making it highly
desirable to add the ability to plan ahead in time and use road gradient information to
utilise gravity and predict demanding ascents, streamlining the conversion between
potential and kinetic energy. This becomes even more pronounced in the case of
platooning of heterogeneous vehicles [4].
To this end, previous work has been conducted in the field of LACC (e.g. [5, 6]). In
both papers, the proposed method is dynamic programming for solving the optimisation
problem with respect to time and fuel consumption. In [5] it is shown that the developed
algorithm is able to run on an embedded system rated at 200 MHz and with 32 Mb of
RAM. However, none of these methods are implemented in Scania vehicles. Instead,
Scania Active Prediction (see [7]) is the system that is currently offered to customers; a
look-ahead cruise control that is based on other methods. This system has proved to
improve the fuel efficiency of heavy-duty vehicles (HDVs), thus potentially implying that
there may be even more to gain by increasing its level of adaptivity and using control
signals with different characteristics.
1.1 Purpose
Due to the effects of the limited power-to-mass ratio of HDVs on their dynamics, the
fuel efficiency can be improved by optimising the engine control with respect to fuel
consumption using information about the upcoming road topography, typically 1-10
kilometres into the future.
The arising optimal control problem has been solved with a range of techniques, but in
vehicular applications most traditional methods fail due to their need for processing
power and memory, which in general cannot be met by electronic control units (ECUs)
currently in production. Furthermore, these methods' reliance on mathematical
stringency often requires simple models and/or approximations to be made.
Whereas mathematical optimisation techniques, and especially dynamic programming,
have been applied, there has been an upsurge in the application of evolutionary
algorithms [8]. Research has been conducted within the field of evolutionary algorithms
(EAs) for path planning ([9, 10]), but little or no research has been aimed at
investigating the applicability of these algorithms to longitudinal control when restricted
by efficiency and time constraints.
The main purpose of the thesis is to enter this previously unexplored field by
investigating whether genetic algorithms (GAs) can successfully be applied to a control
problem of this nature. The problem may on a higher level be described as adapting the
driving style to the road topography so that fuel consumption is minimised without
compromising the time efficiency. Although the investigated solution is applied to a
problem that is already addressed in production software, the ultimate purpose is not
simply to replace the existing solutions, but to investigate what potential lies in the
application of genetic algorithms to longitudinal HDV control.
1.2 Specification of the purpose
The main objective of this thesis is to propose a genetic algorithm based controller
for on-line fuel consumption optimisation via engine control in the HDV industry.
An attempt is made to bring inspiration from genetic algorithms and soft computing
into the field of on-line optimal control.
The relevant parameters describing the vehicle states are known to the algorithm, as is
the vehicle model required to predict the vehicle's longitudinal dynamics and fuel
consumption. The objective of the algorithm (also referred to as the solver) is to
optimise the engine torque output with respect to time and fuel consumption, subject to
a set of constraints and reference values used to ensure, among other things, driver
comfort and compliance with speed limits.
1.2.1 Delimitations
For future automated trucks to offer at least the same fuel efficiency as that of
experienced drivers, the cruise control system must be able to adapt to the driving
profiles of the vehicles within some distance of the ego-vehicle, both in the case of
platooning and in normal driving mode. However, this adaption to surrounding traffic
does not fall within the scope of the project. The algorithm will thus not take potential
fuel savings associated with trailing other vehicles into account. It is therefore assumed
that the vehicle travels on a highway or rural road with non-dense traffic, implying that
interference from surrounding vehicles is at a minimum. Furthermore, the algorithm
takes into account neither the curvature of the road nor lane changes or overtakings.
To fully optimise the speed profile of the vehicle with respect to efficiency and time,
there is an imminent need to gain control over, among others, the gearbox, engine and
brakes. This is hampered by the current architecture of the communication and control
systems of Scania vehicles. Therefore, the planner to be developed is restricted to
controlling the engine only.
1.3 Method
In the initial phase of the thesis, an in-depth literature study was conducted. Previous
work within the field of look-ahead control and the closely related field of trajectory
planning was studied to identify the strengths and weaknesses of various approaches.
This study was supplemented by discussions with professionals within the area, after
which the general direction of the project and solution could be decided.
As for the main part of the project, a simulation and evaluation environment was
developed along with the control algorithm. The modules were made as independent of
each other as possible to facilitate porting the algorithm to different environments1. The
purpose of the simulation module was to serve as a rapid-prototyping environment
during the algorithm development.
The development of the algorithm and the framework was divided into cycles. Each
cycle delivered working software but, more significantly, the various modules evolved as
more Scania-internal data was made available in the later cycles.
As the algorithm approached its final form, it was tuned and tested in a more extensive
simulation environment including both theoretical formulae and, in part, vehicle data
collected from measurements. In its final form, the algorithm was also evaluated using
this framework.
1e.g. Simulink models or StateFlow charts
1.4 Report outline
1. (Introduction)
2. Background and previous work - In this section some of the ideas from
previous studies, upon which parts of this project are based, are presented.
This thesis being a novel approach, a range of studies and applications are
presented in an attempt to convey the core ideas of stochastic optimisation
methods and what they can add to the field of (classical) optimisation.
3. Heavy-duty vehicle prediction model - As the controller to be developed
relies on state predictions of heavy-duty vehicles, this chapter is aimed at
developing the required prediction models. The longitudinal dynamics of
heavy-duty vehicles are addressed and presented along with motivated
approximations.
4. Model predictive control - The core principles of the controller are
presented with reference to the extensively employed method of model
predictive control. A simplified version of the problem solved by the final
algorithm is formulated in terms of two common classical optimisation
methods: linear and quadratic programming.
5. Genetic algorithms - The main algorithm of this thesis is presented from
the bottom up. A range of operators are presented along with references to
findings in previous studies, leading up to the final form of the genetic
algorithm used in the controller.
6. Hybrid algorithm - As this thesis makes use of multiple solvers, the final
solver is termed a hybrid algorithm. In this section the structure of this
hybridisation is presented.
7. Algorithm evaluation - Here the method of algorithm evaluation is
addressed. It explains how the results were generated and includes an
abstracted illustration of the simulation model developed in this thesis.
8. Results - Results generated through simulations are presented. This section
contains results aimed at evaluating the fuel-saving potential of the algorithms,
but also results regarding computational time and algorithm predictability.
9. Discussion
10. Conclusions
11. Future work
2
Background and previous work
Optimisation is a field with an almost infinite number of applications, spanning a
tremendously wide range of scientific fields. The first section addresses this area from a
point of view that serves as one of the main sources of inspiration for the contents of this
project and moves on to the human addition to optimisation, which is concretised by
applications in the automotive industry. Finally, important scientific results and issues
are presented, which have served as motivation and/or inspiration for the choices that
have been made in this thesis.
2.1 Evolutionary optimality and the human addition
The problem of optimisation is an ancient issue. Indeed, these types of problems have
even been an integral part of evolution. The concept of survival of the fittest may in
many senses be translated to survival of the most optimal. Not only has evolution acted
as a force of optimisation, but there are also obvious signs that animals can perform some
kinds of optimisation (e.g. learn a policy) to maximise the return1 of moving from one
state to another2. Unlike many methods of optimisation that are widely used today, a
very central part of the optimisation found in nature is adaption.
A clear human addition to the field of solving optimisation problems is the highly
systematic approach. The most widely adopted tool is of course mathematics. There is
a vast set of strictly mathematical optimisation techniques employed to find some
optimum of a mathematical function, possibly under a set of constraints. Their
widespread use alone indicates that the mathematical treatment of optimisation
problems has certainly been fruitful. A prerequisite of the purely mathematical methods
is that the problem must be defined in terms of mathematics as well. In the case of
systems, a mathematical model is often desirable since it enables the use of a wide range
of methods of mathematical analysis. This is thoroughly exemplified by the almost
countless number of studies performed within mathematical optimi-
1"Maximisation of return" could mean, for example, minimising the effort of moving from one
point to another, or maximising the amount of food found while foraging.
2The terms "policy", "return" and "states" are taken from the field of reinforcement learning.
Within this field, a policy is equivalent to a decision-making rule [11].
sation, the huge amount of literature on the subject, and not least today's
implementation of Active Prediction. Evidently, the human addition to the field of
optimisation is quite distinct from that of nature, but both methods have their strengths
and share the characteristic of performing a directed search.
2.2 Optimality in the vehicle industry
Optimality can mean different things and can vary widely depending on constraints.
A common meaning of optimality is maximum efficiency (e.g. energy efficiency, cost
efficiency, or time efficiency). Typically prominent actors are vehicle OEMs, but they
are by no means the only ones. In the case of vehicles, there are various approaches to
the problem of improving efficiency. Restricted to fuel efficiency, there are, coarsely put,
two groups of measures: (1) improve the efficiency of the vehicle (e.g. minimise energy
losses in the engine, reduce drag, reduce friction) and (2) improve the operation of the
vehicle. The latter has a rather wide span, but a relevant part for this thesis is that of
Advanced Driver Assistance Systems (ADAS). Although these kinds of systems have not
fully penetrated the market and often are considered premium options, much research
effort is being put into developing new systems. Examples are adaptive cruise control,
lane-keeping assist, Advanced Emergency Braking and automatic parking. These
systems aim to improve traffic safety, improve efficiency, relieve the driver, and/or
improve the driving experience. A possible and certainly sought outcome for the future
is that these systems will be able to fully replace the driver.
Some of the systems are intended to take over some of the driver's tasks or improve
the awareness of the driver. However, a second set of systems is aimed at purely
enhancing the driving in ways that even the most experienced drivers could not.
Examples of such systems are map-enhanced or map-enabled ADAS, where map data is
utilised when available or is a strict necessity for the function of the system, respectively.
The system could then adapt to a particularly demanding part of the road topography
even before the driver is aware of that specific road segment3 [12].
2.3 Dynamic programming and the curse of dimensionality
Professor Richard Bellman is the father of dynamic programming. In the period
1948-1952 he formed the foundation of a theory that is still used extensively today in
various optimisation problems [13]. In short, the idea is to trade time complexity of
algorithms for increased memory complexity. This is done by subdividing a problem
into smaller parts, called stages, solving them one at a time. After a one-stage solution
has been found, the next stage is included in the optimisation problem,
3Of course, a driver familiar with the road can also prepare for this demanding segment, but
that is a special case, especially for transportation over long distances.
and so the problem is solved as a sequence of one-stage optimisation problems.
Three main characteristics of a dynamic programming problem are that it should
lend itself to division into stages, have states, and require recursive optimisation.
The stages are required in order to subdivide the problem, while the states should
contain the necessary information about the implications of the current decision for
future actions. Lastly, a prerequisite for applying dynamic programming is that
the optimal policy satisfies the principle of optimality, which may be stated as:
Any optimal policy satisfies the condition that regardless of the current
state and decision, the remaining decisions must yield an optimal policy
with respect to the state that is reached as a consequence of the current
decision.
In general, the application of dynamic programming to a problem requires much
thought and ingenuity in order to define the problem in the appropriate form. A
very intuitive example, on the other hand, is the shortest path problem, or the closely
related problem of finding the fastest path during rush hour [14, 15]. In those cases the
stage may be represented by the number of blocks you are from your goal, while the
state is represented by the intersection the traveller is at; a minimal sketch is given below.
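A minimal sketch of such stage-wise optimisation on a small layered graph, with purely
illustrative costs (not taken from the cited studies):

def shortest_path_cost(stage_costs):
    """Stage-wise dynamic programming for a layered shortest-path problem.

    `stage_costs[k][i][j]` is the cost of moving from state i in stage k to
    state j in stage k+1. Returns the minimum total cost from any initial
    state."""
    n_last = len(stage_costs[-1][0])
    cost_to_go = [0.0] * n_last  # cost-to-go beyond the final stage is zero
    # Sweep backwards, applying the principle of optimality at each stage.
    for stage in reversed(stage_costs):
        cost_to_go = [min(c + cost_to_go[j] for j, c in enumerate(row))
                      for row in stage]
    return min(cost_to_go)

# Two stages, two states per layer.
print(shortest_path_cost([[[1, 4], [2, 3]], [[5, 1], [2, 2]]]))  # -> 2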
At a more concrete level, dynamic programming has for example been employed for
the optimisation of hybrid powertrains in [16]. As is characteristic of dynamic
programming, the authors focus on the optimisation of the driving cycle of vehicles
equipped with more than one power source, in this case a hybrid electric vehicle. Other
applications of dynamic programming are the problem of dividing a paragraph into lines
of approximately equal length as discussed in [17], inferring batting conditions in cricket
[18], and, what has been the subject of many theses and research projects, longitudinal
control of heavy-duty vehicles (see for example [4, 5, 19, 20]).
Focusing on the latter application, dynamic programming proved to be conceptually
fruitful, albeit not fit for real-time on-board operation in all cases. In the one case where
it was, much effort was put into researching suitable approximations and shortcuts in
the algorithm, requiring extensive knowledge of the optimisation problem. Similarly, in
order to keep the memory requirements within reasonable limits, the authors made
conscious decisions in designing the algorithms. The latter is a consequence of what is
often referred to as the curse of dimensionality, meaning that an inherent property of
dynamic programming is that the memory requirements grow out of hand very quickly
when there are more than only a few state variables and the problem is of moderate
size4.
4An exact upper limit on the number of state variables and problem size for dynamic program-
ming to be useful is very difficult to define since it is highly dependent on the resources allocated
for the computations, but also because many workarounds have been developed, which are not
necessarily universally applicable.
Figure 2.1: A graphical illustration of actual system complexity and the human
ability to handle complex systems through history. What should be specifically
noted is the ever-growing gap separating the lines. Finally, it should be remarked
that the axes are left blank as the graph is only a conceptual illustration.
estimation), their initial field of application was static optimisation problems.
According to the authors of [25], it was in the late 80s or early 90s that GAs were first
considered interesting for application to optimal control problems. This means that the
applications have matured over a period of just under 30 years. Also, as there has been
a constant increase in the accessibility of computational power over time, new areas of
application have emerged naturally. As a result, genetic algorithms are no longer
restricted to static problems and are extensively covered in the literature. For example,
in [22] the authors consider GAs viable and intelligent solvers for computationally
expensive problems and, serving as one of many examples, the authors of [27] dive into
the field of multi-objective optimisation from the perspective of GAs. Although this
thesis does not include multi-objective optimisation in the strict sense, it is certainly of
relevance for vehicle control.
A consequence of more efficient computers is decreasing computer size as well as
dropping prices. This opens up the possibility of implementing genetic algorithms in
systems where price, size and/or weight are limiting factors (e.g. vehicles, airborne
systems or systems in mass production). In an investigatory study, the authors of [28]
implemented a Nondominated Sorting Genetic Algorithm (NSGA-II) on a 180 MHz
microcontroller. Specifically, the authors conclude that the application of the developed
algorithm to real-time vehicle control is successful and refer to the solution architecture
as a viable option for ADAS implementations.
Summed up, genetic algorithms have been thoroughly studied and applied to dynamic
optimisation problems of various kinds, most of which have no direct connection to
longitudinal vehicle control. However, despite the problem formulations not being the
same, the conceptual ideas of the previous studies form a firm foundation
3
Heavy-duty vehicle prediction model
As the state of a heavy-duty vehicle at a specified position may depend on the state
and control signals of the truck several kilometres back, the algorithm developed in this
thesis relies on making predictions about future states and control signals. The details
are left for chapters 4 and 6, but suffice it to say that in order to predict the state of the
vehicle, a model must be developed. Furthermore, as the fuel consumption is a direct
measure of the success and usefulness of the algorithm, both the longitudinal dynamics
and the fuel consumption properties of the vehicle must be considered. This chapter is
dedicated to developing these models. Specifically, in Section 3.1 a fuel consumption
model with low online computational complexity is presented, while Section 3.3 proposes
a realistic, yet simplified, propulsion model whose main characteristics are captured in a
required simplification developed in Section 3.4. The chapter also presents real data for
Scania engines, but all data has been considerably corrupted and scaled to unity to
enforce company secrecy. The most fundamental data for this chapter is a 3D map of
the fuelling as a function of engine speed and torque, presented in figure 3.1.
Figure 3.1: A typical map of the fuel flow as a function of torque and engine speed.
Note that the data has been corrupted.
3.1 Fuel consumption
To describe the truck, a state vector x = [v, s, G] is used, where v denotes the speed of
the truck, s is the distance from the reference point, and G is the engaged gear. The
basic control signals of a propulsion system exposed to the driver or control system are
throttle, brake and gear. However, in this thesis, gear selection is assumed to be
inaccessible to the control system to be developed. The control signals that are available
to the system are presented in table 3.1.

Table 3.1: Control signals available to the control system.
Variable Signal Unit
u_f Fuelling g/min
u_b Brake Nm
The engine output torque τ_e depends both on the fuelling and the engine speed. As
found in [5], the dependence is almost linear:

τ_e(ω_e, u_f) = e_1·ω_e + e_2·u_f + e_3,    (3.1)

where ω_e is the engine speed and u_f is the fuelling.
Although this may capture the coarse characteristics, it is seen from figure 3.2 that
there are clear deviations. The graphs are generated by finding the coefficients
e_i, i = 1, 2, ..., in

τ_e(ω_e, u_f) = e_1·ω_e + e_2·u_f + e_3·u_f·ω_e + e_4·u_f² + e_5·ω_e² + e_6·ω_e³ + e_7    (3.2)

that minimise the squared difference at the sampling points.
Even when including some of the 3rd-order terms, the fitted function deviates notably
at several points, and increasingly so towards the endpoints of the interval of
engine-speed values. Given the restriction in computational power, more advanced
functions are not considered, and equation (3.2) is deemed inadequate. It is instead
replaced by a lookup table, which has an associated time complexity of O(1) and can
easily represent non-linear behaviour in the fuel flow map. The trade-off is instead that
analytical approaches are obstructed. Based on this, equation (3.2) is replaced by the
mapping

τ̂_e(ω_e, u_f) = map_τ(ω_e, u_f)    (3.3)
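A minimal sketch of such a lookup, assuming a hypothetical regular grid and placeholder
values (the real map is Scania-internal); the table is evaluated with local multilinear
interpolation at essentially constant cost per query:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical, normalised engine map: torque as a function of engine
# speed and fuelling on a regular grid (placeholder values).
speed_grid    = np.linspace(0.0, 1.0, 20)
fuelling_grid = np.linspace(0.0, 1.0, 20)
torque_table  = np.outer(speed_grid, fuelling_grid)

map_tau = RegularGridInterpolator((speed_grid, fuelling_grid), torque_table)

def torque(omega_e, u_f):
    """Evaluate the torque map at (engine speed, fuelling)."""
    return float(map_tau([[omega_e, u_f]])[0])

print(torque(0.5, 0.3))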
Similarly, when measuring fuel consumption, the resulting data is structured as a
discrete map. Figure 3.3 shows a typical fuel flow map as a function of engine speed.
The map is generated from measurements of the fuel consumption at specific steady
states with constant engine speed and torque. The map in figure 3.3 is upsampled by
cubic interpolation between these steady-state measurements. The plot clearly visualises
the maximum fuel flow for the different engine speeds.
Figure 3.2: Shifted and normalised data from fitting the coefficients of equation
(3.2) to the measured data. Each line colour corresponds to constant engine speed
(increasing from up/left to down/right).
More important for the design of the algorithm in this thesis is the projection of the
fuel-flow map onto the fuelling-torque plane. This projection is presented in figure 3.4.
Two lines have been superimposed on the graph: line A represents the maximum engine
torque, and line B represents the torque when the fuelling is zero and the engine thus is
completely dragged. Evidently, the range of available torque output from the engine is a
varying function of engine speed. As will be described in greater detail in the following
chapters, the output from the algorithm is the recommended torque request, and this
dynamic range must be handled somehow. The problem of a varying torque range is
addressed by letting the algorithm request any torque, but simply pulling any outliers
back inside the valid interval at evaluation time.
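A minimal sketch of this clamping step, assuming hypothetical callables for the two
limiting curves:

import numpy as np

def clamp_torque(tau_request, omega_e, tau_max_curve, tau_drag_curve):
    """Pull a requested torque back inside the valid interval at the current
    engine speed (lines A and B in figure 3.4). The curve functions are
    hypothetical callables returning the limits for a given engine speed."""
    return float(np.clip(tau_request, tau_drag_curve(omega_e),
                         tau_max_curve(omega_e)))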
3.1.1 Total fuel consumption
As the vehicle accelerates or decelerates, the engine speed changes. However, as the
sampling interval is traversed in approximately one second, this change in engine
speed is rather small over a single segment. Given this, the predicted fuel consump-
tion is computed based on the mean value of the engine speed at the start and end
of the segment to reduce the number of computations needed, provided that no gear
shift occurs.
Per definition one has

v̄ = Δs / t  ⇔  t = Δs / v̄,  t > 0,
Figure 3.3: The fuelling as a function of engine speed, for a family of curves with
constant torque. This scatter plot is a projection of figure 3.1 onto the fuelling-speed
plane. Note that the data have been intentionally corrupted.
where ∆s is the distance travelled in time t.
With constant acceleration, a, it follows that

v̄ = (v_f + v_0) / 2,

which is the mean value of the initial and final speed of the truck.
Thus, the time needed to travel over a segment of length Δs is

t̃ = Δs / ((v_f + v_0) / 2).
Assuming nearly constant acceleration, t̃ is a good approximation of the time taken
to travel a distance Δs.
Assuming that the fuel flow over each discrete segment may be modelled as constant
and denoting the fuel flow at segment k by ṁ_k, the total fuel consumed when travelling
over N intervals, each of length Δs, becomes

m_f = Σ_{k=1}^{N} ṁ_k · t̃_k,  [g].    (3.4)
It is convenient to have a way of relating the fuel consumption in mass to the
contained energy, since it then can be compared to the kinetic energy of the vehicle
and the useful energy output or absorbed by its engine and brakes. This is done by
Figure 3.4: Interpolated scatter plot of the torque as a function of engine speed. Each point represents a constant fuel flow. The line labelled 'A' marks the maximum torque and the line labelled 'B' marks the torque when the engine is completely dragged (i.e. when the fuel flow is zero). The lines are very jagged due to the heavy corruption of the data to ensure company secrecy.
converting the energy content of consumed fuel to Joules. The value for the energy content used in this thesis is c_f = 4.8 · 10^7 J/kg. The energy equivalent of the mass of fuel consumed is

E_f = c_f \sum_{k=1}^{N} \dot{m}_k\,\tilde{t}_k, \quad [\mathrm{J}]. \qquad (3.5)
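To make the computation concrete, a minimal Python sketch of equations (3.4) and (3.5) is given below; the segment length, speeds and fuel flows are hypothetical placeholder values, not measured data.

import numpy as np

ds = 20.0                                           # segment length [m]
c_f = 4.8e7                                         # fuel energy content [J/kg]
v = np.array([20.0, 20.5, 21.0, 21.2, 21.0])        # speeds at segment borders [m/s]
mdot = np.array([2.0e-3, 2.2e-3, 1.8e-3, 1.5e-3])   # fuel flow per segment [kg/s]

t_seg = ds / ((v[:-1] + v[1:]) / 2)   # approximate traversal time per segment
m_fuel = np.sum(mdot * t_seg)         # equation (3.4), here expressed in kg
E_fuel = c_f * m_fuel                 # equation (3.5) [J]
print(m_fuel, E_fuel)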
3.2 Longitudinal dynamics
To capture the complete characteristics of a vehicle, it must be considered in all
three dimensions. However, under the assumption that the road is well-behaved
(smooth curves etc.), which in general is the case for highway-driving, the problem
can be reduced to only encompass the longitudinal dimension.
The longitudinal component, a, of the instantaneous acceleration of an HDV is

a = \frac{1}{m}\left(F_w - F_r - F_d - F_g\right), \qquad (3.6)

with m being the mass of the vehicle, F_w the longitudinal force from the ground acting on the wheels (i.e. propulsion and braking), F_d the air drag, F_g the longitudinal gravitational component, and F_r the rolling resistance. In fact, the rolling
resistance should be modelled as a torque if tire slippage is to be taken into consideration. Since it is assumed that the tires do not slip, the rolling resistance is, in spite of the previous remark, modelled as a force in order to maintain consistency with the referenced theory.
A frequently used model for air drag is

F_d = C_d A_v\,\frac{\rho}{2}\,v^2, \qquad (3.7)

where ρ is the density of air, C_d and A_v are the drag coefficient and frontal area of the vehicle, respectively, and v is the relative speed between the vehicle and the air [29].
The rolling resistance is given by

F_r = C_r F_N, \qquad (3.8)

where C_r is the coefficient of rolling resistance and F_N is the normal force acting on the wheel under consideration [30]. For a truck travelling on a road of slope α(s), where s is the distance from some reference point, the normal force is given by

F_N = mg\cos\alpha(s). \qquad (3.9)
The rolling resistance is highly dependent on various factors such as tire pressure, tire make, temperature and, of course, the road surface itself. Additionally, the rolling resistance is speed dependent; a dependence proposed in [31] to be of the form

C_r = C_{r,1} + C_{r,2}\,v^2. \qquad (3.10)

C_{r,1} and C_{r,2} are constants related to the tire. In practice, C_{r,2} is typically many orders of magnitude smaller than C_{r,1} and can be both positive and negative, and can thus generally be dropped completely from the above equation. However, for completeness, it is kept throughout the calculations below.
When travelling on a road of slope α, the gravitational contribution to the longitudinal force is

F_g = mg\sin\alpha(s). \qquad (3.11)
Inserting equations (3.7)-(3.11) in (3.6) then yields

ma = F_w - \frac{\rho}{2}\,C_d A_v v^2 - mg\sin\alpha(s) - \left(C_{r,1} + C_{r,2}v^2\right)mg\cos\alpha(s), \qquad (3.12)

assuming that the truck is travelling forward at speed v > 0. A summary of the forces acting on the truck is given in table 3.2. The three rightmost terms are
straightforward since they depend only on the speed, v, and position, s,¹ of the vehicle. The force that the vehicle exerts on the road to propel itself, however, depends on both the state and characteristics of the drivetrain. A simplified version of this dependence is presented in Section 3.4.
Table 3.2: Summary of forces

Force                Designation   Equation
Gravitational        F_g           mg sin α
Normal               F_N           mg cos α
Rolling resistance   F_r           (C_{r,1} + C_{r,2}v²) F_N
Air drag             F_d           (ρ/2) C_d A_v v²
Propelling force     F_w           See Section 3.4
3.3 Vehicle motion
According to Newton’s second law, the rotational acceleration of the engine is given
by
J ω˙ = τ −τ , (3.13)
e e e out
where τ is the instantaneous torque generated by the engine, J is the moment of
e e
inertia of the engine and τ is the torque supplied to the clutch or torque converter.
out
The gear ratio separating the engine and the wheels consists of the gearbox transmission ratio, i_g, and the final drive ratio, i_f. The total transmission ratio depends on the gear, G, and is given by

i(G) = i_g\, i_f. \qquad (3.14)

The relation between the engine speed and the rotational speed of the driving wheels, ω_w, is

\omega_w = \frac{\omega_e}{i(G)}. \qquad (3.15)
The torque transferred to the clutch, τ_out, decreases gradually due to energy losses as it is transferred through the driveline components. This decrease is modelled by an efficiency, η. The effective torque appearing at the driving wheels is thus

\tau_{e,w} = \eta \cdot i(G) \cdot \tau_{out}. \qquad (3.16)
The governing equation for the propulsion is thus
¹ Henceforth the dependence on distance is assumed self-evident and thus omitted for brevity.
J_d\,\dot{\omega}_w = \tau_{e,w} - F_w R_w - \tau_b = \eta\, i(G)\,\tau_{out} - F_w R_w - \tau_b, \qquad (3.17)

where τ_b is the brake torque, R_w is the wheel radius and J_d is the inertia of the drive train and wheels together.
Under the condition of no slip, the acceleration of the truck is related to the angular acceleration of the wheel according to

a = R_w\,\dot{\omega}_w. \qquad (3.18)
As explicitly done in appendix A, combining equations (3.13)-(3.18) and solving for a yields

a = \frac{R_w}{J_d + mR_w^2 + J_e\eta\, i^2(G)}\left[\, i(G)\,\eta\tau_e - \tau_b - R_w\left(F_d + F_r + F_g\right)\right]. \qquad (3.19)
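As an illustration of how equation (3.19) combines with the force models of table 3.2, a minimal Python sketch is given below; all parameter values are hypothetical placeholders rather than data for the studied vehicle.

import numpy as np

m, g, rho = 40000.0, 9.81, 1.2      # vehicle mass [kg], gravity [m/s^2], air density [kg/m^3]
C_d, A_v = 0.6, 10.0                # drag coefficient [-], frontal area [m^2]
C_r1, C_r2 = 5.0e-3, 0.0            # rolling-resistance coefficients
R_w, J_d, J_e = 0.5, 50.0, 4.0      # wheel radius [m], driveline and engine inertias [kg m^2]
eta = 0.95                          # driveline efficiency [-]

def acceleration(tau_e, tau_b, v, alpha, i_G):
    """Longitudinal acceleration according to equation (3.19)."""
    F_d = 0.5 * rho * C_d * A_v * v**2        # air drag, equation (3.7)
    F_N = m * g * np.cos(alpha)               # normal force, equation (3.9)
    F_r = (C_r1 + C_r2 * v**2) * F_N          # rolling resistance, equations (3.8), (3.10)
    F_g = m * g * np.sin(alpha)               # gravity, equation (3.11)
    num = i_G * eta * tau_e - tau_b - R_w * (F_d + F_r + F_g)
    den = J_d + m * R_w**2 + J_e * eta * i_G**2
    return R_w * num / den

print(acceleration(tau_e=1200.0, tau_b=0.0, v=22.0, alpha=0.01, i_G=3.0))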
3.4 Simplified prediction model
The above model is indeed a simplification of the complex workings of an engine and driveline, but it still contains parts that are very specific to what engine and driveline components the vehicle is endowed with. In a simplified model, a proportion η_s of the torque from the engine is transferred to the wheels, where η_s represents the internal losses in the driveline components and the moments of inertia of the powertrain constituents. In reality, a constant efficiency η_s cannot replace the characteristics of equation (3.19), but recalling that only highways and rural roads are considered, the vehicle will operate in a narrow(er) operating space, which increases the validity of this assumption. The effect of the assumption is that the net force acting on the wheels becomes

F_w = \frac{\eta_s\tau_e\, i(G)}{R_w} - \frac{\tau_b}{R_w}. \qquad (3.20)
Equation (3.12) then becomes

ma = \frac{\eta_s\tau_e\, i(G)}{R_w} - \frac{\tau_b}{R_w} - \frac{\rho}{2}\,C_d A_v v^2 - mg\sin\alpha - \left(C_{r,1} + C_{r,2}v^2\right)mg\cos\alpha. \qquad (3.21)
The calculations below are simplified by the substitution T = mv²/2, where T then is the kinetic energy of the truck. Furthermore, since the road is discretised into segments of length ∆s, typically in the vicinity of 20 m, and τ_e, τ_b, α and i(G) are assumed constant over each interval, it is possible to collect (piecewise) constant terms in (3.21) according to

ma = c_2 - c_1 T, \qquad (3.22)
where

c_1 = \frac{\rho C_d A_v}{m} + 2C_{r,2}\,g\cos\alpha, \qquad c_2 = \frac{\eta_s\tau_e\, i(G)}{R_w} - \frac{\tau_b}{R_w} - C_{r,1}mg\cos\alpha - mg\sin\alpha.
A result of basic dynamics is the relation

v\,\mathrm{d}v = a\,\mathrm{d}s.

Under the substitution T = mv²/2, this turns into

\mathrm{d}T = ma\,\mathrm{d}s.

Together with equation (3.22) this yields

\mathrm{d}T = (c_2 - c_1 T)\,\mathrm{d}s.

Solving this separable differential equation results in

\frac{\left|T - c_2/c_1\right|}{\left|T_0 - c_2/c_1\right|} = e^{-c_1\Delta s}.
In the special case of T_0 = c_2/c_1, the resulting force is zero, which means that T will not change (i.e. T = T_0). If T_0 < c_2/c_1, there is initially a resultant force propelling the vehicle. T can only approach the equilibrium T = c_2/c_1 from below, but never exceed it. In the opposite case, T_0 > c_2/c_1, T can only approach the equilibrium from above. This observation shows that the expressions within the absolute-value bars always have the same sign, and it follows that

T = \left(T_0 - \frac{c_2(\tau_e,\tau_b,s)}{c_1(s)}\right)e^{-c_1(s)\Delta s} + \frac{c_2(\tau_e,\tau_b,s)}{c_1(s)}, \qquad (3.23)

where the variables' dependencies have been re-included for clarity.
Recall that the above result was derived under the assumption that the truck was
travelling forward. When a truck travels on a highway or rural road under normal
conditions and with the cruise control active, this is a valid assumption. However,
the exclusion of the sign-dependence of the rolling resistance on the vehicle speed
causes the rolling resistance to appear as a force always acting in the backward
direction. If the HDV comes to a halt between two sampling points, equation (3.23)
will become negative, which is physically impossible. The algorithm must therefore not assume that the vehicle will in fact reach the end of every segment; when it does not, the solution is deficient and should be discarded.
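A minimal Python sketch of the segment update in equation (3.23), including the described check for physically impossible solutions, is given below; the numerical values are hypothetical placeholders.

import numpy as np

def propagate_T(T0, c1, c2, ds):
    """Kinetic energy at the end of a segment of length ds, equation (3.23)."""
    T = (T0 - c2 / c1) * np.exp(-c1 * ds) + c2 / c1
    if T <= 0.0:        # the vehicle would come to a halt inside the segment
        return None     # the solution is deficient and should be discarded
    return T

T0 = 0.5 * 40000.0 * 22.0**2        # initial kinetic energy [J]
print(propagate_T(T0, c1=2.0e-4, c2=150.0, ds=20.0))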
4
Model predictive control
Model Predictive Control (MPC) is, as the name suggests, an advanced control
method where predictions of future states are made based on a model of the system.
MPC is not an algorithm itself, but an umbrella term for control strategies that seek
to optimise a process by finding the control signal sequence that minimises a cost
function. Since it was first proposed as a control strategy in the late 1970s,
MPC has evolved and is today applied in a variety of control situations. MPC is
synonymously termed Receding Horizon Control, which stems from the fact that, at
each sampling point, a prediction about a finite future is made. Thus, the horizon of
the prediction is pushed forward a step ∆s at every sampling point, where ∆s is the
sampling interval. The total look-ahead is therefore S = N∆s, where N is referred
to as the prediction horizon. In the prediction of signals and states at sampling
point q ∈ ℕ₀, an MPC controller can (but is not obliged to) take into consideration
all states and signals preceding that point [32]. A typical MPC problem, which is
also the form used in this thesis, could take the form
\underset{u(x)}{\operatorname{argmin}} \; \sum_{k=q}^{N+q-1} f(x,u,r), \qquad q \in \mathbb{N}_0

s.t.

u(k) \in U \;\; \forall k,
x(k) \in X \;\; \forall k,
x(k+1) = f_s(x,u) \qquad (4.1)

with

f(x,u,r) = Cost function
f_s(x,u) = System model. Gives the next state, given previous states and signals
r(k) = Reference signal(s) over the kth interval
u(k) = Control signal(s) over the kth interval
x(k) = System state at the kth sampling point
U = Set of possible signals (dim(U) = {number of signals})
X = Set of possible states (dim(X) = {number of state variables})
In terms of the controller developed in this thesis it follows that x = [v, s, G] and u = [τ_e, τ_b]. As regards the computation of the next state, x(k+1), the QP-solver uses the rather crude forward Euler method due to its simplicity, low computational cost and the restrictive mathematical requirements put on the formulation in order to turn it into a QP problem, but also due to the fact that this method is indeed still widely used today. The use of the forward Euler method is further justified by the fact that the vehicle model used by the QP-solver is simplified
compared to the prediction model used by the GA. Increasing the accuracy of the
QP-solver, and thus generally the computational complexity, will not necessarily
result in appreciable gain. Compared to the QP-controller, the GA puts much less
emphasis on the mathematical formulation, enabling it to employ more advanced
methods to predict the value of x(k +1). As a result, the forward Euler method is
in this specific case replaced by the analytical expression in equation (3.23).
Different solvers have different performance with respect to various parameters, but typically the cost function must not be too complex if the problem is to remain tractable in terms of complexity, memory consumption and computational effort [33].
Algorithmically, the concept of MPC can be summarised as in algorithm 4.1.
Algorithm 4.1 Simple MPC algorithm
1: p ← FormulateProblem()            ▷ On the form required by the solver
2: q ← q_0
3: while True do
4:     x(q) ← MeasureState()
5:     u_temp ← Solve(p, x, u)       ▷ Returns signals for the next N steps
6:     u(q) ← u_temp(0)              ▷ 0-indexing
7:     Send u(q) to the system
8:     q ← q + ∆q
9: end while
As can be seen from algorithm 4.1, when the solution to the minimisation problem is found, only the very first element of the proposed sequence of control signals is actually sent to the system. A potential advantage of this approach is that if the state of the system can be quickly and accurately determined and the algorithm completes sufficiently fast, the errors of a simplified system model will not directly affect what control signals will be proposed in the future, as the algorithm continuously updates the state estimation and predictions at each sampling. On the other hand, if the accuracy and speed of predicting states is better than measuring them at each sampling point, or if the algorithm is very slow compared to the system dynamics, then the algorithm could be adjusted to accept a greater part of the proposed signals. However, much research has been put into developing algorithms that improve the solution speed of MPC controllers in systems with fast dynamics (see for example [34, 35, 36]), which has resulted in a very wide range of optimisation methods for MPC, addressing problems with fast dynamics, limited computational power and/or complicated system models.
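For concreteness, a runnable Python rendering of algorithm 4.1 is sketched below; formulate_problem, measure_state and solve are hypothetical stand-ins for the problem formulation, state measurement and solver described in this chapter.

import numpy as np

N = 20                                        # prediction horizon

def formulate_problem():
    return {"N": N}                           # placeholder problem description

def measure_state(q):
    return np.array([22.0, q * 25.0, 12.0])   # [v, s, G], placeholder values

def solve(problem, x, u_prev):
    return np.zeros((problem["N"], 2))        # torque requests [tau_e, tau_b] per step

p = formulate_problem()
u = np.zeros((N, 2))
for q in range(100):                          # stand-in for the 'while True' loop
    x = measure_state(q)
    u = solve(p, x, u)                        # proposed signals for the next N steps
    u_applied = u[0]                          # only the very first element is actuated
    # here, u_applied would be sent to the system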
4.1 Minimising engine energy output by model
predictive control
The engine efficiency is a function of the working point. As a result, minimising
energy output of the engine is not equivalent to minimising fuel consumption. How-
ever, for normal driving, there is a correlation between fuel consumption and engine
energy output, and using either one as the cost function will lead to a solution that
is at or close to the minimum in fuel consumption.
As a discretised problem, the control actions are assumed constant over an interval ∆s. It then follows that the energy output from the engine over the kth interval is

E_{e,k} = \frac{\tau_{e,k}\, i_{g,k}\, i_f}{R_w}\,\Delta s. \qquad (4.2)

Similarly, the brake energy is

E_{b,k} = \frac{\tau_{b,k}}{R_w}\,\Delta s. \qquad (4.3)
The core of the problem is the minimisation of the cost function

J = \sum_{k=1}^{N} E_{e,k} - c_T \sum_{k=1}^{N} T_k \qquad (4.4)

s.t.

\frac{1}{2}mv_{min}^2 \le T_k \le \frac{1}{2}mv_{max}^2,
0 \le E_{e,k} \le E_{e,max},
0 \le E_{b,k} \le E_{b,max},
T_{k+1} = T_k + E_{e,k} - E_{b,k} - E_{env,k},
where E_env denotes the energy exchange due to the environmental forces (i.e. gravity, rolling resistance, and air drag). The inclusion of the kinetic-energy term is explained by the fact that the optimal strategy when only considering engine output energy is simply to give no gas at all, which obviously conflicts with the desire of the driver to maintain speed and arrive at the destination. Furthermore, energy can indeed be absorbed by the engine by letting it be dragged. However, while less or no fuel at all is consumed while dragging the engine, letting the engine energy output be negative leads to solutions where the engine brake is used in inappropriate situations, which explains the second constraint. This may be better understood by noting that applying the engine brake will in fact not recover energy, but simply avoid consuming fuel. Therefore, the terms in the first sum of equation (4.4) should not be allowed to decrease the value of the cost function by being negative.
The above problem formulation may be readily stated on the standard form of a
linear programming problem,
\underset{y}{\operatorname{argmin}} \; p^{\top} y

s.t.

Ay = b
Cy \le d,
where y is the vector of variables, p, b, and d are known vectors, and A and C are known matrices.
As pointed out in [37], when employing the cost function in (4.4) the vehicle speed
usually tends towards the edges of the allowed speed range in static driving (i.e.
constant slope). This is an undesired behaviour, since under static conditions the
cruise control system should track the reference speed provided by the driver. An-
other essential factor to take into account is the driver comfort, which would be
compromised by excessive changes in engine torque. In addition to driver comfort,
smooth driving reduces mechanical wear as well as fuel consumption [37].
A natural way to include these factors is to penalise deviations from the reference speed as well as changes in torque, thus introducing the following costs in the cost function:

c_T \sum_{k=1}^{N}\left(T_k - \frac{1}{2}mv_d^2\right)^2 + c_s \sum_{k=1}^{N}\left(E_{e,k} - E_{e,k-1}\right)^2,
yielding the final cost function

J = \sum_{k=1}^{N} E_{e,k} + c_T \sum_{k=1}^{N}\left(T_k - \frac{1}{2}mv_{d,k}^2\right)^2 + c_s \sum_{k=1}^{N}\left(E_{e,k} - E_{e,k-1}\right)^2, \qquad (4.5)

where v_{d,k} is the desired speed at segment k, and c_T and c_s are non-negative parameters defining the importance of tracking the reference speed and smooth driving, respectively.
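As a minimal illustration, the cost in equation (4.5) can be evaluated for given sequences of engine energies and kinetic energies as in the Python sketch below; the weights and input sequences are hypothetical placeholders.

import numpy as np

def cost(E_e, T, v_d, m, c_T, c_s, E_e_prev=0.0):
    """Evaluate equation (4.5) for one candidate solution."""
    tracking = c_T * np.sum((T - 0.5 * m * v_d**2) ** 2)
    E_shift = np.concatenate(([E_e_prev], E_e[:-1]))   # the E_{e,k-1} terms
    smoothness = c_s * np.sum((E_e - E_shift) ** 2)
    return np.sum(E_e) + tracking + smoothness

E_e = np.full(20, 3.0e5)                     # engine energy output per segment [J]
T = np.full(20, 0.5 * 40000.0 * 22.0**2)     # kinetic energy per segment [J]
v_d = np.full(20, 22.0)                      # desired speed per segment [m/s]
print(cost(E_e, T, v_d, m=40000.0, c_T=1e-9, c_s=1e-7))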
With the new additions, the problem turns into a quadratic programming (QP)
problem, whose general form is
\underset{y}{\operatorname{argmin}} \; \frac{1}{2}\, y^{\top} H y + p^{\top} y \qquad (4.6)

s.t.

Ay = b
Cy \le d,
where H is a known matrix.
Rewritten on this form, the stated MPC problem becomes a convex QP problem
[37]. Both LPs and QPs have been thoroughly studied and hence there are many
robust high-speed solvers concerned with the task of solving these kinds of problems.
As reported in [38], the solution to the current problem represented on the form (4.6) may be found in less than 1/5 of a millisecond¹.
4.1.1 Constant-speed correction
In steady state, the third term in equation (4.5) is identically zero, making the first and second terms the only competing terms. Lower speed requires less (propulsion) energy from the engine since the air drag and rolling resistance decrease, although the latter only decreases very marginally. At the same time, as the speed is lowered below the set speed, the second term grows as a consequence of the square. In simulations conducted both in this thesis and in [38] it is observed that close to the set speed, v_d, the magnitude of the derivative of the second term is greater than that of the first term, causing the steady-state speed to be slightly below the set speed. The reason for this is formally clarified by a steady-state analysis. In steady state, for which α = 0 is assumed, the engine only has to balance the rolling resistance and air drag. Thus,
E_e = F_r\Delta s + F_d\Delta s = \left(C_{r,1} + \frac{2C_{r,2}T}{m}\right)mg\cos(0)\,\Delta s + \rho C_d A_v\,\frac{T}{m}\,\Delta s. \qquad (4.7)
Furthermore, the cost function in (4.5) reduces to

J_{steady} = NE_e + c_T N(T - T_d)^2
= N\Delta s\left(\left(C_{r,1} + \frac{2C_{r,2}T}{m}\right)mg + \rho C_d A_v\,\frac{T}{m}\right) + c_T N\left(T^2 - 2TT_d + T_d^2\right), \qquad (4.8)

where T_d and T are the kinetic energies corresponding to the desired speed and the corrected desired speed, respectively.
Differentiating equation (4.8) with respect to the kinetic energy yields

\frac{\partial J_{steady}}{\partial T} = 2NC_{r,2}\,g\Delta s + \frac{N\rho C_d A_v}{m}\,\Delta s + 2Nc_T T - 2Nc_T T_d. \qquad (4.9)
To ensure that the steady-state speed is not different from the desired speed, we
require that the minimum of the reduced cost function in equation (4.8) coincides
with the steady state. Thus, it is required that
\frac{\partial J_{steady}}{\partial T} = 0,
¹ The result is based on a discretisation of ∆s = 25 m and N = 20. The solver (custom-generated by CVXGEN) was run on a computer equipped with an Intel Core i5 (2.67 GHz).
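Setting equation (4.9) to zero and solving for T quantifies how far below T_d the minimum of the reduced cost function lands; the Python sketch below carries out this computation with hypothetical placeholder parameters.

import numpy as np

m, g, rho = 40000.0, 9.81, 1.2
C_d, A_v, C_r2 = 0.6, 10.0, 0.0
ds, c_T = 25.0, 1.0e-8
v_d = 22.0
T_d = 0.5 * m * v_d**2

# the T that solves dJ_steady/dT = 0 in equation (4.9)
T_star = T_d - ds * (2 * C_r2 * g + rho * C_d * A_v / m) / (2 * c_T)
v_star = np.sqrt(2 * T_star / m)      # corresponding steady-state speed
print(v_d - v_star)                   # speed deficit below the set speed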
5
Genetic algorithms
Genetic Algorithms constitute a subgroup of evolutionary algorithms (EAs), a collective term for a family of stochastic optimisation algorithms. They are termed evolutionary due to their property of resembling the evolution found in nature. The various kinds of EAs are based on different evolutionary concepts, and in the case of genetic algorithms it is, of course, genes that serve as the main inspiration. However, it should be emphasised that the actual biological process that serves as inspiration for genetic algorithms is many times more complex than the resulting optimisation method [39].
5.1 The biological process in short
Evolution is the continuous development of living organisms over time. The process is very slow and the final result of evolution is an accumulation of changes along the branches of ancestors. The evolutionary progress thus relies on changes persisting through generations. To that end, the information must somehow be stored. Additionally, it must also be passed on to the offspring. More specifically, in the Darwinian theory of evolution one talks about a heritage of behavioural and/or physical traits [40]. In nature, this is realised by the genome - the complete DNA set of an organism. The units of the DNA that code for a specific protein or set of proteins are called genes. Focusing on the human species, there are between 20,000 and 25,000 genes amassed in 23 chromosome pairs.
The information that is stored in the DNA is not directly accessible by the part
of the human cell that builds the proteins from the instructions contained in the
genes. That is, the useful information is encoded and must be decoded before it can
be used. In the cell, the decoding is performed by an enzyme, RNA polymerase,
in a process that outputs the messenger ribonucleic acid (mRNA). The information
that is transcribed in the mRNA-molecule is then used in the ribosomes so as to
synthesise the proteins that in turn form the individual [41].
This is only a very brief description of the biology behind the synthesis of proteins
from DNA information, but it suffices for the purpose of developing the basics of
GAs. In addition to the theory concerned with the workings of biological processes,
other scientific theories are often used as inspiration, such as the Mendelian theory
of inheritance, but also non-biological ideas such as simulated annealing or, more
generally, statistical physics.
5.2 Algorithm design
There are many ways to take inspiration from genetics when building an algorithm. In this section, the fundamental building blocks of a GA are presented, upon which the design choices of this thesis are based. In addition to bringing forth this part of the theory underpinning GAs, the choices made in designing the controller algorithm are described.
5.2.1 Constituents
The main part of a genetic algorithm is the genes. Like in the biological case, the genes hold the smallest parts of (useful) information. Some authors like to define a gene as the smallest constituent of a chromosome. In that case, the internal structure of the gene is very simple; each gene may only hold a single unit (e.g. a number, an operator, or an object). In the binary case, a gene is then the equivalent of a bit as defined in the computer context. All by themselves they would not convey much information, but grouped into chromosomes or parts of a chromosome they hold useful information that can be decoded and interpreted in the system or process to be optimised. Typically, in a multivariate function optimisation problem, each variable could correspond to some contiguous fixed-length sequence of genes in the chromosome. Thus, a problem of n variables where each variable is represented by m_g genes would then form a chromosome consisting of n · m_g genes. Although this definition of a gene as the smallest block of a chromosome is convenient in some cases, it fails to capture what is the smallest structure needed to represent useful information. For example, in the multivariate optimisation problem, it is apparently possible to represent each variable as a given number of elements in the chromosome, which is why it appears natural to define a gene such that there is a one-to-one correspondence between the variables and the genes. The trade-off is evidently that when using the latter definition the internal structure of the gene must be provided to fully specify it. In this thesis, the latter definition is used unless explicitly otherwise stated.
Unlike in human beings, a single chromosome in the GA contains all the information
about the individual and the terms ’chromosome’ and ’individual’ are therefore used
interchangeably for simplicity. An illustration of a shorter binary chromosome is
presented in figure 5.1.
The GA considered here employs multiple individuals, which are then collectively referred to as a population. As will be clarified as the operators are presented, employing multiple chromosomes is a prerequisite for the algorithm, but it also opens up for diversity in the population. In this context, diversity implies exploration of the search space. Exploration means that the algorithm more efficiently sweeps the search space, which in turn improves the odds of finding the global optimum.
The overall structure of these constituents is illustrated in figure 5.2.
Figure 5.1: Illustration of a binary chromosome. The chromosome consists of four
genes à four elements, with each element holding either a ’1’ or a ’0’.
Figure 5.2: The internal structure of each individual in the population. At the
lowest level there is a gene containing a number of elements that each can store an
object. What kind of object is stored depends on the encoding.
5.2.2 Operators
There are many different operators that can be included in a GA. In fact, one of the difficulties in optimisation using GAs is the wide range of parameters and operators to choose from. Because of this, successfully applying this family of algorithms takes some thought, both to reduce the computational effort needed to arrive at the solution and to improve the odds of arriving at the global optimum within the allocated time. A downside to this type of algorithms is thus that they generally do not carry over between different optimisation problems without being modified. However, the generality is simply traded for a higher level of adaptivity when the algorithm is applied to the problem(s) it is designed for.
5.2.2.1 Initiation of population
In the most basic case, a population of size m_p consisting of chromosomes of length n is initiated by generating m_p strings with n random elements each. The distribution used to generate the population can be chosen in various ways, based on heuristic or mathematical ideas about the location of the optimum in the search space. Also, there is room for hybridisation (i.e. mixing optimisation methods). Given the current best solution as found by a different method, it could for example be given a spot in the initial population while the rest of the population is randomly generated. In the context of hybrid methods, the converse is also true; the best solution, as found by a GA, could be fed to a mathematical solver that might have trouble converging to the global optimum unless the initial point is sufficiently close.
As genetic algorithms are mainly inspired by the Darwinian theory of evolution and the Mendelian concept of propagation and mixing of genetic material, the findings in [42] that the initial population, and thus the initial genetic content of the population, strongly influences the performance of the algorithm are indeed intuitive.
As pointed out in [43], completely random initialisation does not guarantee a spread
of the individuals in the solution space. In the extreme case the individuals may all
be initialised in a small region, depriving the population of initial diversity. A state
of low diversity is not inescapable as the algorithm family contains many stochastic
operators, but typically the loss of initial diversity decreases the chances of finding
the global optimum within the allocated time interval.
The problem of initial diversity is actively addressed in the algorithm developed here. As presented in [44], this may for example be done by computing the generalised Hamming distance between the individuals. However, this is both inconvenient and increases the computational complexity, which is why the initialisation in this thesis is done in a simplified process so as to decrease the CPU requirements. At its core, the initialisation operator relies on the assumption that the initial solver gives a solution that is not too far from the global optimum with respect to the genetic algorithm. The validity of this assumption is of course highly dependent on how different the solvers and utilised vehicle models are. As will be seen, this assumption is indeed justified by the results.
With this assumption the required initial diversity may be reduced since the main traits of the optimal solution with respect to the second solver are comparable to those generated by the pre-solver. The implication is that components of a chromosome that are very different compared to the corresponding components of the pre-solver solution are likely to be of poor quality. Therefore, instead of randomly initialising the population, the population is initialised by generating a complete population consisting solely of copies of the warm start solutions. At least one of each warm start solution is kept unaltered, while the rest undergoes the same mutation process as that used in the main loop of the algorithm. However, to impose appreciable diversity, the mutation rate is significantly higher in the initialisation phase than in the main loop.
5.2.2.2 Encoding and decoding
Encoding is the process of representing the search space of the optimisation problem
in the coding space. Bringing this back to the biological domain, the search space
may be thought of as the phenotype and the coding space as the genotype¹.
Between these two spaces, the encoding and decoding procedures act as mappings. Depending on the encoding scheme, this mapping between search space and coding space is not necessarily bijective. An example of a mapping that may not be bijective is tree encoding, as commonly used in genetic programming. Non-redundancy is generally desirable in GAs and this non-bijectivity breaks this rule of thumb, but the use of these kinds of mappings has been found fruitful in certain applications and they are therefore still used despite this downside [46].
5.2.2.2.1 Binary encoding
One of the most widely employed encodings is the binary encoding. Binary encoding was presented in figure 5.1. If a binary encoded gene consists of n elements, it is capable of representing 2^n different values. In the case of binary encoding, the search space must be bounded somehow. In the continuous case it means that there is an upper and lower bound for each variable, while a discrete problem requires a bounded set [46]. Given a range [a,b] and using binary encoding, this range can only be divided into 2^n − 1 intervals. The average resolution offered by binary encoding is then (b−a)/(2^n − 1).
The decoding function can be chosen arbitrarily. For example, inspired by the binary system, a binary gene g_i (i = 0,...,n−1) representing the range [a,b] may be decoded according to

x = a + \frac{\sum_{i=0}^{n-1} 2^i g_i}{2^n - 1}\cdot(b-a). \qquad (5.1)
Evidently, the resolution can easily be controlled by choosing the length of the genes. Some advantages of this approach are clear already at this point (e.g. exact representation of integers, easy to control resolution etc.), but it also opens up for the use of Gray Codes, amongst others [39]. However, there are also obvious downsides to binary encoding, one of which is its inherent property of encoding error, which may be reduced at the expense of increased chromosome length and thus increased search space dimensions and computational complexity.
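A direct Python transcription of the decoding function in equation (5.1) is given below.

def decode(gene, a, b):
    """Map a binary gene (list of 0/1, least significant element first) to [a, b]."""
    n = len(gene)
    value = sum(2**i * g for i, g in enumerate(gene))   # the sum in equation (5.1)
    return a + value / (2**n - 1) * (b - a)

print(decode([1, 0, 1, 1], a=0.0, b=10.0))   # 13/15 of the range above a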
5.2.2.2.2 Value encoding
Just like for the operators, there is a vast number of encoding schemes that can be used. In fact, the encoding schemes may be infinitely customised to suit the problem. This
¹ The phenotype is the visible traits of the individual as caused by the genetic information; the latter is also referred to as the genotype [45].
rather well illustrates the ingenuity often needed, and witnessed, in the context of alternative and adaptive algorithms. Although very many encoding schemes fall within the field of binary encoding, these schemes are not universally applicable and, even if they are, they may not be the best suited. Suitable encoding is crucial for the success of genetic algorithms [47].
A competitor to binary encoding that manages to overcome some of the associ-
ated shortcomings is value encoding. Instead of encoding the information in binary
format and employing mappings between the search space and coding space, this
method involves representing something connected to the optimisation problem in
real values. The most obvious reason to use value encoding is in cases where it is
not possible to represent the problem in binary format or where the encoding error
associated with binary encoding becomes too large. However, in addition to these
fundamental reasons, Michalewicz found that the utilisation of value encoding is
making its way into the domain of genetic algorithm as the main findings in [48] are
that real-value encoding has the potential to outperform binary encoding.
Furthermore, it should be noted that value encoding does not mean that the genes must hold a number; they could hold any object. What might be considered a drawback of this method is that it often is necessary to tailor the operators to the specific nature of the problem [46]. This affects the generality and the typical ease-of-use, but there is a direct gain in computational speed as the encoding and decoding processes often are less demanding and, in the extreme case, the search space is directly represented in the coding space and no transformations are needed.
The algorithm is intended to control the engine torque output and, with the aim of making the computational footprint small in the developed application, value encoding is used. As presented in figure 3.4, there are natural upper and lower bounds on the engine torque output. The bounds are functions of engine speed and are therefore difficult to directly include in the algorithm as the engine speed is highly dependent on the previous control signals sent to the powertrain. This problem is addressed by having the genes represent any torque values but pulling outliers back inside the valid interval at evaluation time.
5.2.2.3 Evaluation
The purpose of the evaluation is to assign a fitness value to each individual based on their phenotype. Since GAs are deeply inspired by natural selection, the fitness value of a solution is central to the progression of the algorithm. The fitness function thus has fundamental influence on the success of the algorithm. To achieve a good result, the fitness function should assign high fitness values to individuals with desirable traits while undesirable characteristics should be penalised. In view of conventional mathematical optimisation methods, this process is the equivalent of formulating the problem in mathematical terms. However, there is a fundamental difference. While many mathematical methods rely on the mathematical formulation of the problem fulfilling certain criteria, GAs put only very loose restrictions on the problem formulation. GAs do not even require the problem to be expressed
mathematically. The main point is that the evaluation can be performed in any manner as long as it enables a fitness value or rank to be attributed to each individual [39, 46].
For the purposes of longitudinal control of an HDV, numerical models are readily
available and a mathematical formulation of the problem is indeed convenient. The
formulation may be formed in an infinite number of ways and on forms that can
be tailored to a specific problem. For the case at hand an approach of modest
model complexity is chosen. Behind this lies the reasoning that generality decreases
with increasing model complexity, that there is a correlation between simplicity and
robustness, and that the model should not be too expensive in terms of compu-
tational complexity. In addition to these remarks, a consequence of only making
small changes to the truck model used by standard solvers (e.g. quadratic program-
ming) is that it is easier to identify any improvements that can be attributed to the
developed solver.
With (at best) a correlation between engine energy output and fuel consumption, the cost function given by equation (4.5) cannot be used to find the optimal sequence of control signals with respect to fuel flow. Noting that the energy contained in the consumed fuel and the energy output are comparable and only differ by a factor typically in the range 2-3 due to engine efficiency, equation (4.5) may be modified according to

J = \delta_f \sum_{k=1}^{N} E_{fuel,k} + c_T \sum_{k=1}^{N}\left(T_k - \frac{1}{2}mv_{d,k}^2\right)^2 + c_s \sum_{k=1}^{N}\left(E_{e,k} - E_{e,k-1}\right)^2, \qquad (5.2)

where E_{e,k} has been replaced by the fuel energy content, E_{fuel,k}, along with a correction factor δ_f to account for the engine efficiency.
Thus, while the genetic algorithm has the potential to employ very complex cost
functions, the extension of the cost function is in this case rather subtle. Impor-
tantly, however, it adds the ability to evaluate a proposed solution based on fuel
consumption instead of engine output energy.
5.2.2.4 Selection
The purpose of the selection process is to select a number of individuals from the
population and let them transfer their genes to the next generation. If sexual re-
production is used, individuals are typically selected pairwise and are then allowed
to mate. The general approach is presented in figure 5.3.
In the strict literal sense of survival of the fittest, the individual with the highest fit-
ness value would get to procreate. However, it is possible and, in general, preferred
to implement schemes that do not blindly select the best individuals but also con-
sider the individuals with lower fitness values. Thus, the fitness values may merely
be used as indicators or recommendations of specific individuals. To give an idea
of the magnitude of the influence of the fitness value, the term selection pressure is
introduced. High selection pressure implies strong reliance on fitness value, while
Figure 5.3: From the population, a given number of individuals are selected in the
selection process. These individuals are passed on to the other operators and finally
placed in the new population.
low pressure indicates a more arbitrary selection with regards to fitness. The opposing natures of high and low selection pressure lead to different characteristics; high pressure leads to faster convergence at the expense of the odds of finding the global optimum. Low pressure, on the other hand, may lead to slower convergence, but it is also associated with a better chance of finding the global optimum. Put differently: high pressure promotes exploitation while low pressure promotes exploration [46].
5.2.2.4.1 Roulette wheel selection
A simple scheme for selection is the so-called roulette wheel selection. The name is derived from the casino game, but a more accurate name would maybe be wheel-of-fortune selection after the American TV show. In the standard case, the probability of an individual being selected is proportional to the individual's fitness. If individual i (i = 1,2,...,m_p) has fitness f_i, then the probability of individual j being selected is

p_j = \frac{f_j}{\sum_{i=1}^{m_p} f_i}, \qquad j = 1,2,...,m_p. \qquad (5.3)
Since probabilities must be non-negative, the method, as stated here, requires non-
negative fitness values.
To implement this version, the cumulative probability, θ_j, is used:
\theta_j = \frac{\sum_{i=1}^{j} f_i}{\sum_{i=1}^{m_p} f_i}, \qquad j = 1,2,...,m_p. \qquad (5.4)
The selection is performed by drawing a random number, r ∈ [0, 1], and the selected
individual is the first one fulfilling
\theta_j > r.
An example of this process is illustrated in figure 5.4.
Figure 5.4: Roulette wheel selection for a population consisting of 10 individuals.
If the fitness values are normalised, the illustrated case corresponds to r = 0.25.
Counted clockwise, the fourth individual is selected.
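A minimal Python sketch of roulette wheel selection via the cumulative probabilities in equation (5.4) could look as follows; the fitness values are hypothetical placeholders.

import random

def roulette_select(fitness):
    total = sum(fitness)
    r = random.random()
    cumulative = 0.0
    for j, f in enumerate(fitness):
        cumulative += f / total       # theta_j in equation (5.4)
        if cumulative > r:
            return j
    return len(fitness) - 1           # guard against rounding when r is close to 1

print(roulette_select([11, 6, 9, 14, 13, 17, 7, 9, 2, 12]))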
5.2.2.4.2 Tournament selection
Along with roulette wheel selection, tournament selection is the most widely em-
ployed selection operator [39]. While roulette wheel selection is inspired by the game
rather than nature, tournament selection is directly inspired by a selection process
in nature. In a natural tournament, there is always a risk of various factors leading to the superior individual losing and consequently allowing the inferior creature to transfer its genes to the next generation. This source of diversity and, algorithmically speaking, of exploration of the search space, is captured by the tournament selection operator. In this scheme two or more individuals are randomly selected from the
population. Out of these individuals, the best one is selected with probability p.
The process is recursively applied until an individual has been selected or only a
single individual remains and thus is automatically selected.
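The recursive tournament described above may be sketched in Python as follows; the tournament size, winning probability and fitness values are hypothetical placeholders.

import random

def tournament_select(fitness, size=3, p=0.75):
    pool = random.sample(range(len(fitness)), size)
    pool.sort(key=lambda i: fitness[i], reverse=True)   # best individual first
    for i in pool[:-1]:
        if random.random() < p:       # the best remaining individual wins with probability p
            return i
    return pool[-1]                   # a single individual remains and is selected

print(tournament_select([0.2, 0.9, 0.4, 0.7, 0.1]))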
5.2.2.4.3 Boltzmann selection
On the one hand, roulette wheel selection and tournament selection are based on intuition and nature's counterpart, respectively. On the other hand, they do not take into account the evolution of the population over time and adjust the selection appropriately (i.e. their selection pressure is constant). A well-known heuristic approach to finding an optimum in a search space is to start out on a coarse scale and then successively zoom in on the interesting regions. This is a phenomenon witnessed in statistical physics rather than biology, and the work and ideas of Ludwig Boltzmann within the field of statistical physics have been a major source of inspiration [49].
The algorithm is inspired by the annealing process of solids, which involves heating a metal to a specific temperature for a specified amount of time and then slowly cooling it in a controlled way [50]. Ideally, at the maximum temperature, the metal atoms are randomly located in the liquid phase. If, additionally, the cooling is sufficiently slow, the result of the annealing process is a solid in which the particles have arranged themselves in the low-energy ground states [51]. The direct connection to this theory is the simulated annealing algorithm, but it also carries over to the selection process of GAs [46]. In that case, equation (5.3), the probability of selecting individual j (j = 1,2,...,m_p) with fitness f_j in roulette selection, is replaced by
j Pmp efi/T0
i=1
where T0 is the equivalence of temperature in a annealing process [39].
Equation (5.5) is merely an example of a Boltzmann inspired selection scheme. In
[39] a second selection process derived from statistical physics is presented, but it
is based on tournament selection instead. Yet another approach to the same kind
of selection is found in [46]. The latter also proposes a logarithmically decreasing2
temperature:
n
T0 = T0(1−α)k, k = 1+100 gen ,
0 G
where T0 is the initial temperature, n is the current generation number, G is the
0 gen
maximum number of generations and α is control parameter in the interval [0,1].
Although these rule of thumbs exist, experimenting is generally required for good
results [39].
5.2.2.4.4 Stochastic universal sampling
In view of the performance of the genetic algorithm, [53] introduces three measures:
2A logarithmically decreasing function is a function whose value decreases to zero more slowly
than any nonzero polynomial [52]
36
|
Chalmers University of Technology
|
5. Genetic algorithms
• Bias - The absolute difference between the expected value3 and the actual
value. Optimal (zero) bias is thus achieved when the selection algorithm per-
fectly respects the expected value.
• spread - The range of number of times that an individual may be selected in
the selection process.
• efficiency - The complexity of the algorithm (e.g. time complexity).
In the situation of optimal bias and minimum spread, the actual value (number of
offspring) for individual i is thus restricted to the set
{be c, de e},
i i
where e is the expected value.
i
From this it follow that a selection algorithm should have minimal spread and zero
bias, and be efficient. An algorithm with these properties is stochastic universal
sampling (SUS) [54]. The efficiency is in the order of m , the population size.
p
Conceptually, the algorithm is very similar to roulette wheel selection. However,
instead of repeating the selection process m times, all m individuals are selected
p p
at once and not independently. The selection process starts by normalising the
fitness values to sum to 1. Next, a pointer is placed at random in the interval
[0,1/m ]. Subsequent pointers are then placed a distance 1/m apart, as illustrated
p p
in figure 5.5. Intuitively this may be thought of as placing a comb with equidistant
teeth in figure 5.5, where the position of the first tooth is chosen at random in the
interval [0,1/m ].
p
Figure 5.5: Illustration of stochastic universal sampling. The size of each segment corresponds via some predefined rule to the fitness of the corresponding individual. The total length of the segments is 1, and all pointers are therefore separated by an interval equal to 1/m_p, where m_p in this case is 10. The first (leftmost) pointer was randomly selected in the interval [0, 1/m_p] and in this case it was placed at 0.0572.
In figure 5.5 two individuals are sampled twice. Programmatically, when these individuals are extracted from the population as in figure 5.3, the simplest method is to extract them in order and thus place any multiple samplings next to each other. Depending on the implementation of the crossover process presented next, this adjacency may lead to crossover between the very same individual. The result is
³ The expected value of an individual is defined as the average number of offspring that it should receive.
that since crossover between identical chromosomes in many schemes simply clones
the parents, there is no net result. To avoid this situation, a shuffling algorithm is
applied to the selected population.
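A minimal Python sketch of SUS, including the final shuffle used to break up adjacent duplicates, could look as follows; the fitness values are hypothetical placeholders.

import random

def sus_select(fitness, n_select):
    total = sum(fitness)
    step = 1.0 / n_select
    pointer = random.uniform(0.0, step)   # first pointer in [0, 1/n_select]
    selected, i = [], 0
    cumulative = fitness[0] / total
    for _ in range(n_select):
        while pointer > cumulative and i < len(fitness) - 1:
            i += 1                        # advance to the segment under the pointer
            cumulative += fitness[i] / total
        selected.append(i)
        pointer += step
    random.shuffle(selected)              # avoid crossover between identical copies
    return selected

print(sus_select([5, 1, 3, 7, 2, 4, 6, 2, 1, 4], n_select=10))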
5.2.2.5 Fitness transformation
In the algorithm, the direct fitness value is computed according to equation (5.2).
As a result of having each gene code for the torque for a given road segment, if a
single gene is altered while keeping the rest fixed, all future states of the vehicle
are affected by this change. The result is that one ”bad” gene can cause the whole
chromosome to appear as a solution far from the optimum. This has the potential
to decrease the fitness value considerably, leading to a loss of valuable information.
In view of this issue, it is necessary to either lower the selection pressure directly or
transform the fitness in order not to lose good solutions disguised by a set of poor
genes.
Since SUS is used as the sampling technique, and this method offers optimal bias and minimal spread, it is straightforward to control the expected value, e_i, and more importantly, the expected value of the elite.
Instead of normalising the fitness values as described in section 5.2.2.4.4, the fitness values can be left unchanged and the pointer interval will then be of length

i_p = \frac{\sum_{i=1}^{m_p} f_i}{N_s},

where N_s denotes the number of individuals to be selected.
The expected value of individual j is then

e_j = \frac{f_j}{i_p} = \frac{f_j}{\sum_{i=1}^{m_p} f_i / N_s}.
A common way of transforming the fitness is to employ fitness ranking, which in its basic form means that one, in a population of m_p individuals, assigns a fitness value of m_p to the best individual, m_p − 1 to the next best and so on. However, when employing SUS and selecting as many individuals as there are in the population, it follows that
e_{best} = \frac{m_p}{\sum_{i=1}^{m_p} f_i/m_p} = \frac{m_p^2}{\sum_{i=1}^{m_p} i} = \frac{m_p^2}{(m_p+1)m_p/2} = \frac{2m_p}{m_p+1}.
Thus,

\lim_{m_p\to\infty} e_{best} = 2,
indicating that the best individual will be copied two times into the new generation in the limit as m_p tends to infinity.
To control the expected values, the algorithm instead employs two successive fitness transformations. First it applies fitness ranking to effectively decrease the selection pressure. To address the problematic tendency of copying the best individual twice into the next generation, the second transformation takes the form

\hat{f}_j = f_{j,rank}^{1/7}.
For m_p = 50 and m_p = 100 with N_s = 50, this yields the graphs illustrated in figure 5.6. As can be seen from the figure, this transformation ensures that a bit more than half of the best individuals are guaranteed to be selected (i.e. e_j ≥ 1) if m_p = N_s = 50. Also, it should be noted that this approach does not completely prohibit the algorithm from carrying over two copies of the same individual to the next generation, as the expected value lies between 1 and 2, but it does decrease the probability, and in the obvious way it is possible to decrease this probability even further by choosing a different function for the second fitness transformation.
Figure 5.6: Expected value after transformation of fitness ranking values. The solid line represents the case where equally many individuals as there are in the population are to be selected, while the dashed line illustrates the case where only half of the individuals in the population are to be selected. In both cases the absolute number of individuals to be selected is the same.
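The two successive transformations may be sketched in Python as follows; the raw fitness values are hypothetical placeholders.

import numpy as np

def transform_fitness(raw_fitness):
    ranks = np.argsort(np.argsort(raw_fitness))   # 0 for the worst, m_p - 1 for the best
    f_rank = ranks + 1                            # fitness ranking: the best gets m_p
    return f_rank ** (1.0 / 7.0)                  # second transformation

print(transform_fitness(np.array([3.1, 0.4, 12.9, 7.7])))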
5.2.2.6 Optimal crossover and mutation rates
The mutation and crossover rates are parameters whose values significantly affect the performance of a genetic algorithm [55]. Many articles are concerned with finding the optimal value for these parameters, but in general the findings rarely carry over between applications, making an algorithm relying on constant values
of these parameters fragile. A general conception is that there is no such thing as an optimal rate, and in an attempt to address this fragility and circumvent the need to explicitly set the rates, the authors of [55] propose adaptive probabilities for both crossover and mutation. The proposed models are
p_c = \begin{cases} c_1\,\dfrac{f_{max} - f'}{f_{max} - \langle f\rangle}, & f' \ge \langle f\rangle \\ c_2, & f' < \langle f\rangle, \end{cases} \qquad (5.6)

for crossover and

p_m = \begin{cases} c_3\,\dfrac{f_{max} - f}{f_{max} - \langle f\rangle}, & f \ge \langle f\rangle \\ c_4, & f < \langle f\rangle, \end{cases} \qquad (5.7)
for mutation. c_1, c_2, c_3 and c_4 denote constants to be set, f_max and ⟨f⟩ denote the maximum and average fitness values of the present generation, respectively, f′ is the maximum fitness of the pair to cross and f is the fitness of the individual to mutate.
A GA is a directed search algorithm and typically the parameters p_m and p_c reflect the trade-off between the desire to have the algorithm be explorative or exploitative, ideally in that temporal order (i.e. first explore and then prioritise exploitation). To achieve this, a standard approach is to decrease the mutation and crossover rates with time (i.e. generation number). The stochasticity of the algorithm, however, makes the temporal development of the population unpredictable, which justifies the inclusion of population-dependent mutation and crossover rates as in (5.6) and (5.7).
For both rates, there are default values for sub-average individuals. Now, focusing on the crossover rate, p_c, it can be seen that for pairs where the best individual has above-average fitness, the crossover rate decreases with increasing pair-wise maximum fitness and, if the pair contains the best individual in the population, the rate is zero. Similarly for the mutation rate, there is a default rate for sub-average individuals, while for above-average chromosomes the rate is different and modelled by a decreasing function that goes to zero for the fittest one.
The zero probability of the best individual being crossed prevents it from being destroyed, which is acceptable but not a requirement. However, both the crossover and mutation rates must not be allowed to be zero for a single individual, as this could lead to exponential growth and consequently an imminent risk of premature convergence. Based on this reasoning, the authors of [55] introduce a small default mutation rate of 0.005, acting as a minimum mutation rate for all individuals.
5.2.2.7 Crossover
Depending on the genetic algorithm, different crossover schemes must be used. The
schemes that will be considered here are schemes where 2 parents give rise to 2
children and the chromosome length is preserved. Also, as the characteristics of the
crossover operation depends on the encoding, the fitness transformation, the selection of individuals to cross, and the crossover operator itself, amongst others, the adaptive crossover rate introduced above is dismissed in favour of a constant crossover rate of 1. However, as the concept of competing generations is employed in the final selection of the next generation and stochastic universal sampling is used as the mechanism for crossover selection, the effective crossover rate is less than 1. Furthermore, with the inclusion of competing generations, all genetic material from the previous generation is guaranteed to be present without any modifications when selecting individuals for the next generation. An important point to underline is that although the crossover rate is not adaptive, its characteristics are sought as a net effect in the design of the algorithm.
In the development process many crossover operators were evaluated. The most
general forms of the evaluated operators are presented below, and in the final algo-
rithm flat crossover is used as it proved to be best suited with regards to how the
problem has been formulated in this thesis.
5.2.2.7.1 k-point crossover
The most fundamental crossover scheme meeting the requirements above is the k-point crossover. Recalling that each chromosome consists of n genes, a chromosome can be split at n−1 locations. The algorithm starts by drawing k unique random integers representing the crossover points. The two parent chromosomes are then split at these locations. Then every other segment is swapped, mixing the genes of the two parents. In many applications k is set to 1 or 2 and, as found in [56], when compared to uniform, flat and 2-point crossover, the 1-point crossover outperformed the others in the job shop scheduling problem. The job shop scheduling problem is clearly different from the problem of longitudinal vehicle control, but the results in [56] indicate that by increasing the number of crossover points, valuable schemas⁴ may be destroyed, consequently making the algorithm perform worse.
5.2.2.7.2 Uniform crossover
k-point crossover is applicable for many different encoding schemes but, for value
encoding, uniform or flat crossover are normally employed [57]. Much like k-point
crossover, uniform crossover performs crossover on two individuals by traversing
the chromosomes and swapping corresponding constituents between the individuals.
However, uniform crossover differs in that each pair of corresponding genes of the two individuals is swapped with a certain probability. The result is that segments of
varying length are swapped between the chromosomes, but unlike k-point crossover
there is no predefined number of segments that are swapped. Instead the number of
swapped segments is in the range [0,n], where the cross ratio (i.e. the probability of
swapping two genes) can be used to bias the number of swapped segments in either
direction.
⁴ A schema is a subsequence of a chromosome.
5.2.2.7.3 Flat crossover
Both k-point crossover and uniform crossover are underpinned by the theory of how
genes are passed on from parents to offspring in humans amongst others. Formally
speaking, the above crossover operators are concerned with the genotype of the par-
ents and the generated offspring. Recalling the parallel drawn between the genotype
and coding space, and phenotype and solution space, it can be said that the flat
crossover operator targets the phenotype in cases where no encoding is used. Flat
crossover can also be applied to encoded chromosomes, but in that case one should
instead talk about genotype superposition as it does not operate directly on the
phenotype.
In its simplest form, flat crossover generates the content, commonly referred to as the allele, of gene number j in 2 children (c1 and c2) from 2 parents (p1 and p2) according to

g_j^{c1} = r_j\, g_j^{p1} + (1 - r_j)\, g_j^{p2},
g_j^{c2} = r_j\, g_j^{p2} + (1 - r_j)\, g_j^{p1},

where r_j is a random number in the range [0,1].
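A direct Python transcription of the flat crossover equations is given below; the parent chromosomes are hypothetical placeholder torque sequences.

import numpy as np

def flat_crossover(parent1, parent2):
    r = np.random.random(parent1.shape)      # one r_j per gene
    child1 = r * parent1 + (1 - r) * parent2
    child2 = r * parent2 + (1 - r) * parent1
    return child1, child2

p1 = np.array([100.0, 250.0, 400.0])         # e.g. torque requests per segment
p2 = np.array([150.0, 200.0, 350.0])
print(flat_crossover(p1, p2))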
5.2.2.8 Mutation
As for this thesis, the dynamic mutation probability is considered relevant, but even more so is the reasoning underpinning it. Specifically, the effect of the mutations should decrease as the evolution progresses. Evidently, since both the crossover and mutation operations potentially change the fitness of the individuals, the fitness values must be updated more frequently at the expense of the time complexity of the algorithm. In view of this, the adaptive rate in equation (5.7) is rejected in favour of a combination of the non-uniform mutation described next and the competing generations described in section 5.2.2.10. On top of this, a constant mutation rate is used. The rate is chosen in accordance with the optimal value derived in appendix B.
In spite of the earlier remark that the stochasticity of the algorithm makes it difficult to predict the state of the population at a given point in time, a mutation operator based on temporal information is used. The operator, which is a modified version of the non-uniform mutation operator, is given by
g_i' = g_i + f(t, l_range),   R = 1,
g_i' = g_i − f(t, l_range),   R = 0,                (5.8)

where l_range is the absolute value of the maximum range that a gene can creep away from its current value in the mutation process and R is a random value drawn from the set {0, 1}. The function f() is defined as

f(t, l_range) = l_range (1 − r^((1 − n_gen/G)^b)).                (5.9)
In the above equation r is drawn from the standard uniform distribution, n_gen denotes the current generation number, G is the maximum number of generations, and b is a parameter that has been set to 2 in this thesis.
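A MATLAB sketch implementing equations (5.8) and (5.9) is given below; the per-gene mutation probability pMut is a placeholder name for the constant mutation rate mentioned above.

function g = nonUniformMutation(g, nGen, G, lRange, b, pMut)
% Non-uniform creep mutation per equations (5.8)-(5.9): each gene is,
% with probability pMut, displaced by +/- f(t, lRange), where the
% maximum displacement shrinks as nGen approaches G.
for i = 1:numel(g)
    if rand < pMut
        r = rand;                                % standard uniform r
        f = lRange * (1 - r^((1 - nGen/G)^b));   % equation (5.9)
        if rand < 0.5                            % R drawn from {0, 1}
            g(i) = g(i) + f;                     % R = 1
        else
            g(i) = g(i) - f;                     % R = 0
        end
    end
end
end

Note how the exponent (1 − nGen/G)^b approaches zero towards the final generation, so that f, and thus the mutation step, shrinks towards zero.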
In the search for optimal parameter values, such studies are often carried out with the genetic algorithm as the only solver. Consequently, this requires the algorithm
to be well-tuned throughout the evolution process. With a vast search space and
an interest in keeping its computational footprint low the algorithm is not suited to
act as the only solver. Instead it is assumed that the GA will have access to a very
qualified first guess of the optimum. The major consequence is that the algorithm
should be exploitative rather than explorative. However, it should be noted that it
is fundamentally required that some tendencies of exploration are kept.
By reducing the need for random exploration to ”map out” the characteristics of
the search space, the trade-off between exploration and exploitation encoded in the
mutation rate may be biased to favour the exploitation. By letting l be some
range
small value, typically some low multiple of 10, the mutations are effectively creep
mutations drawn from an ever-narrowing distribution. An intuitive description of
the underlying idea is that the initial and assumed qualified guess of the optimal
control sequence is represented by a rubber band in the solution space. The purpose
of the mutation is to pull each part of the rubber band towards the points in search
space that will make the solution more optimal. Initially the algorithm can make
large adjustments, but as the algorithm progresses the purpose of the mutations
moves towards being to fine-tune the chromosomes.
5.2.2.9 Replacement
A general method of replacement is to replace the whole generation at once by delet-
ing the old population and letting the offspring take its place. Another method is
steady-state replacement, which involves replacing only a fraction of the population
in each evaluation cycle [39]. One advantage of steady-state replacement is that, as
often is the case in nature, the offspring is allowed to compete with the older gener-
ations. While the operators of a genetic algorithm rarely guarantee improvement of
the operands, a desirable effect of this kind of selection is that poor individuals are
giventhechanceofsecuringtheirplaceinthenextgenerationwhilegoodindividuals
from the previous generation may also get to live on. In this thesis, a replacement
operator somewhere in between these two is used. More specifically, the algorithm
uses full replacement but with inter-generation competition, as explained next.
5.2.2.10 Competing generations
Genetic algorithms are inherently stochastic and cannot guarantee that the overall fitness of a generation is equal to or better than that of the previous generation. As for the maximum fitness there is the elitism operator, but it does not consider the generation as a whole. In nature there is often an overlap between generations, making it reasonable to introduce the concept of inter-generation competition. In terms
of genetic algorithms this means that in the final selection process of forming the
next generation, both the proposed new individuals and the previous generation are
allowed to compete. The net population to select individuals from is consequently
doubled in size, effectively turning the solid graph in figure 5.6 into the dashed line.
From this graph it should be noted that there is no longer a guarantee that even
the best individual will be selected, making it necessary to include elitism.
5.2.2.11 Elitism
Thus far the stochasticity of the algorithm has been heavily emphasised as a fun-
damental property. It is indeed one of the most fundamental properties of the
algorithm, but it also has the potential to disrupt the population in various ways.
One such way is that it may destroy the best individual. A safeguard is to always
keep the fittest individual in the population simply by ensuring that a copy of it
is always transferred to the next generation. It should be noted that whenever the
elitism operator is employed, there is a risk of the fittest individual taking over the entire population in case the collective effect of the operators is to promote the fittest individual very strongly. If promoted too strongly, chances are that the fittest individual is copied into the next generation multiple times, causing exponential growth.
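A combined MATLAB sketch of competing generations and elitism could look as follows; the tournament selection used to fill the remaining slots is an illustrative assumption rather than the exact selection operator of the final algorithm.

function nextPop = formNextGeneration(oldPop, offspring, fitnessFcn, N)
% Competing generations with elitism: pool parents and offspring
% (doubling the net population), always copy the fittest individual
% through, and fill the remaining N-1 slots by selection from the pool.
pool = [oldPop; offspring];          % one individual per row
fitness = fitnessFcn(pool);          % column vector of fitness values
[~, best] = max(fitness);
nextPop = zeros(N, size(pool, 2));
nextPop(1, :) = pool(best, :);       % elitism: the fittest always survives
for i = 2:N
    cand = randi(size(pool, 1), 1, 2);   % tournament of size 2
    [~, w] = max(fitness(cand));
    nextPop(i, :) = pool(cand(w), :);
end
end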
5.2.3 The final algorithm
Through continuous reasoning and testing, the most suited operators of those pre-
sented above have been combined into the final algorithm. The main flow of the
algorithm is presented in figure 5.7.
Also, so as to intuitively illustrate the operation of a genetic algorithm, a simplified
case encompassing a 2-variable function optimisation is presented in figure 5.8.
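As a complement, a schematic MATLAB sketch of the main loop, assembled from the operator sketches above, is given below; the seeding of the initial population around a warm-start solution and the random parent pairing are simplified placeholders rather than the exact components of the final algorithm.

function bestInd = runGA(fitnessFcn, seed, N, G, lRange, b, pMut)
% Schematic main loop: flat crossover, non-uniform mutation,
% competing generations and elitism (see the sketches above).
n = numel(seed);
pop = repmat(seed, N, 1) + randn(N, n);   % population around a warm start
for nGen = 1:G
    offspring = pop;
    for i = 1:2:N - 1
        pick = randi(N, 1, 2);            % simplified parent pairing
        [c1, c2] = flatCrossover(pop(pick(1), :), pop(pick(2), :));
        offspring(i, :) = nonUniformMutation(c1, nGen, G, lRange, b, pMut);
        offspring(i + 1, :) = nonUniformMutation(c2, nGen, G, lRange, b, pMut);
    end
    pop = formNextGeneration(pop, offspring, fitnessFcn, N);
end
fitness = fitnessFcn(pop);                % pick the final fittest individual
[~, best] = max(fitness);
bestInd = pop(best, :);
end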
6 Hybrid algorithm
Since the gear cannot be controlled, it cannot be included in the optimisation problem in the form accepted by the above MPC algorithm. In general, the constraints
corresponding to the gear selection are hard to incorporate in mathematical opti-
misation algorithms. Therefore, if there is a way to mix the determinism of LP/QP
with the soft-computing advantages associated with GAs, an algorithm that is su-
perior1 to both algorithms alone can emerge.
6.1 Genetic algorithm with warm start
An important property of a real-time control algorithm is that it should always
provide a feasible solution within some specified time, should it not manage to find
the optimal one. While stochasticity is a fundamental property for the success
of GAs, it also makes the algorithm unpredictable in the sense that it does not
guarantee convergence within a given time window. The following sections are
aimed at presenting a way of evading this problem and how information contained
in previous solutions may be reused in order to improve the results without affecting
the computational load.
6.1.1 Pre-solving and non-deterioration
A straight-forward solution to the problem of failure to converge within a specified
time window is to apply a fast and deterministic solver to a simplified version of
the problem. In view of the formulation of an MPC problem, equation (4.1), the
cost function f() can be chosen arbitrarily. Similarly, the system model f () may
s
also be chosen arbitrarily, but of course the choice directly affects the quality of the
predictions made by the solver. A common way of simplifying a vehicle model is
throughlinearisationtechniques. Althoughmodelsimplificationsmaybecrucialand
have been successfully applied to various applications, it must be emphasised that
the exclusion of non-linearities in general will generate suboptimal control strategies
that, depending on the context, may or may not be acceptable [58]. In the context
of this thesis, the aim of this fast and deterministic solver is not to find the optimal
1Of course, superiority is highly context/application dependent!
control strategy, but to output a solution resembling the optimal strategy, serving
as an initial guess for a second, more advanced solver.
In general, the stochasticity of the genetic algorithm introduces a risk of losing useful information as the evolution progresses and the carriers of that information die out or the information is corrupted by the mutation and crossover operators. Thus, unless deliberately handled, the maximum fitness in a population can display a sudden drop. As previously explained, the elitism operator is a way to get around
this problem, ensuring that an unmodified version of the fittest individual is always
passed on to the next generation. Therefore, if elitism is employed, the maximum
fitness of the population is a non-decreasing function in generation number. That
is, even in the worst case scenario where the GA fails to find a solution with a
better fitness value, the control signal that will be sent to the system is optimal
with respect to the simplified formulation of the pre-solver, given that there exists
a feasible solution.
6.1.2 Reusing previous solution information
In addition to warm starting with a different solver, the algorithm can be extended
so that the most useful information emerging from previous evolutionary efforts
remains in the population. Importantly, this implies that a full evolution from the pre-solver solution to the optimal one is guaranteed to be required only during the very first iteration of a controller session. As for subsequent iterations, if no
assumption is made about how much the optimal solution changes between time
steps, the only thing that can be said is that at worst2 the algorithm will start over
from the pre-solver solution and be forced to carry out a full evolution again.
However, as the algorithm is intended to run continuously with a look-ahead horizon
of 50-100 steps, only a very small fraction of the road will change between adjacent
steps3. Also, the state of the vehicle will not change much from one time step to
the next during normal operation. Together these two observations imply that in most cases the optimisation problems for two adjacent time steps will be very similar, typically leading up to similar solutions for the problem at hand. This reasoning points in the direction that if previously found solutions are reused, the initial guesses of the GA have the potential to be very close to the actual optimal solution. Consequently, the overall quality of the solutions would improve over time, but the algorithm would also converge more quickly on average, which brings the algorithm closer to real-time operation under hardware restrictions. If the number of iterations is kept constant, there is hence potential to successively increase the quality of the solutions. Based on the above points, the initial population of the GA is seeded both with the pre-solver solution and with information retained from the previous time step, as sketched below.
2worst refers to the maximum difference between the fitness value of the GA- and pre-solver
optimal solutions. It does not consider the distance between points in search space, given any
metric.
3In the algorithm developed in this thesis, a variable sample length is employed. However,
the way the sample length is chosen, it will not change significantly between adjacent time steps
during normal cruising. Thus, the difference in road topography data between adjacent steps will
in reality be only a few percent.
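A minimal sketch of how such reuse could be realised is given below, assuming the previous best solution is shifted one sample ahead and padded with its last value; the function name, the padding strategy and the perturbation magnitude sigma are illustrative assumptions.

function pop = seedPopulation(qpSolution, prevBest, N, sigma)
% Warm-start seeding: build the initial population from the pre-solver
% (QP) solution and the best solution from the previous time step,
% plus perturbed copies of the two.
n = numel(qpSolution);
shifted = [prevBest(2:end), prevBest(end)];   % shift one step, pad last value
pop = zeros(N, n);
pop(1, :) = qpSolution;                       % feasible fallback solution
pop(2, :) = shifted;                          % reused previous information
for i = 3:N
    base = pop(1 + mod(i, 2), :);             % alternate between the two seeds
    pop(i, :) = base + sigma * randn(1, n);   % small random perturbations
end
end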
7 Algorithm evaluation
As extensive effort has been put into investigating and benchmarking various cruise
controllers aimed at decreasing the fuel consumption of HDVs, there are lots of
data available for comparison. However, in order to make a fair comparison the
testing conditions should be as similar as possible between tests. In reality this is
not achievable on actual roads, rendering it illogical to compare results collected on different occasions. Although real-world tests cannot be replaced, the above
note favours testing through simulations. Based on this argument, the developed
algorithm has been assessed through simulation.
7.1 Simulation model
For the purpose of this thesis, a simulation model was developed in Simulink. A
simplified scheme of this model is illustrated in figure 7.1. To improve the simulation
results, some parts of the model are based on real data collected from tests involving
the vehicle being simulated. In the figure the main components are included to
illustratetheprimarycharacteristicsofthesimulation. However, intheactualmodel
the subsystems are highly interconnected and depend on a set of state parameters.
Furthermore, so as not to clutter the scheme with interconnections, only the ones
required to emphasise the functionality of the model are included.
As can be seen from figure 7.1, the model consists of a main block that contains the
powertrain, brakes and the physical model of the vehicle. This block is essentially
an abstraction of the simulated vehicle. The internal combustion engine accepts a
torque request from the controller developed in this thesis and computes the actual
torque that appears at the clutch or torque converter, depending on transmission
type.
The transmission includes a slightly different gear selection logic than that available to the genetic algorithm, since it is of interest to test the performance of the algorithm under conditions where the actual gear selection software cannot be used in the prediction model for various reasons (e.g. it may be too heavy, too complex or
not even available). Although a direct advantage of GAs is that more complex
models can be employed, it is reasonable to argue that infinite model precision
cannot be achieved for most systems, if not all, making it a necessity to be able to
handle the arising discrepancies. By deliberately introducing differences between the
prediction and simulation models, this reasoning offers justification for the simplified
gear selection logic.
The powertrain of an HDV is very complex and sophisticated controllers have been
developed to control the various parts. In some of the previous studies involving
look-ahead control of HDVs where the engine torque (indirectly) was one of the
optimisation variables, the actual control of the engine was routed via either the
standard cruise controller or even an interface to the driver (see for example [5, 37]).
A major reason for this is that the look-ahead controller did not have to take engine
oscillations and other undesired effects into account. The obvious drawback is the
decreased ability to control the engine torque output with precision. The developed
controller contains logic to enforce smooth driving, but it does not take into account
the finer characteristics of the engine and powertrain. Despite this fact and the
remark preceding it, the simulation model is based on direct control of the engine
torque from the controller.
[Figure 7.1 block diagram: the Controller sends a torque request to the ICE, which feeds the Transmission and the Longitudinal Dynamics Model; a Brake Controller and Brake Model act in parallel, together with an Environment block; v_max and v_des enter the controllers.]
Figure 7.1: Simplified scheme of the simulation model developed for testing the algorithm. Solid lines indicate either a requested or actual torque, whereas dashed lines indicate interconnections of particular importance. v_max and v_des represent dynamic reference speeds and are externally provided to the controllers (e.g. from a driver).
8 Results
To generate the results presented below, the simulation model from section 7.1 was
used. As the final algorithm relied on two different solvers, the output from the
initial QP-solver alone was first considered. After that, the complete algorithm was
assessed and compared to the QP-solver. The assessment was done with respect to
short and representative road segments, long simulations with real road data, time,
and algorithm predictability. The parameters and constants used in the simulations
are presented in appendix C.
8.1 Evaluation of QP-solver
As described, the full version of the algorithm takes advantage of two different solvers by selectively applying them to the problem at different stages, taking advantage of their different strengths. Despite the fact that the initial QP-solver has been referred to as ”pre-solver”, it does in general output a solution that is optimal with respect to its prediction model. As no fair comparison can be made between
simulation results and real-world tests, the most important part of the evaluation of
the algorithm is to investigate the gain of applying the second solver (i.e. the GA).
To do so, this section is dedicated to investigating the control signal as proposed by
the QP-solver alone.
8.1.1 QP-solver performance for constant driving
Constant slope is equivalent to flat road with an additional constant force acting on
thevehicle, anditisthusonlynecessarytoconsideraflatroadinthecaseofconstant
driving. At each sample point, the QP-solver presents the predicted optimal control
strategy and the predicted speed of the vehicle for the prediction horizon. For the
case when both the initial and desired speeds are 72 km/h, the predicted torque and speed profiles are those presented in figure 8.1.
The solver displays the desired main traits of smoothness and unbiased reference speed tracking under static driving, but it should also be noted that there is an undesired torque and speed drop towards the end of the prediction interval. This drop is not a property of the QP-solver, but a direct consequence of the formulation of the cost function in equation (4.5). For the sake of comparison with literature, this undesired trait has not been rectified in the following results.
[Figure 8.1 plot: top panel torque (Nm), bottom panel speed (km/h), versus distance (m) over 0–1600 m.]
Figure 8.1: The torque and speed profile of the HDV for the case with v_0 = 72 km/h. What should be noted is that the algorithm manages to keep the vehicle at a nearly constant speed very similar to the desired velocity. Furthermore, there is a noteworthy drop in torque, and thus speed, towards the end of the prediction horizon.
Figure 8.1 does not illustrate the trajectories actually taken by the vehicle, but
merely the predicted engine torque output that minimises the cost function given
by equation (4.5). The actual speed and torque trajectories are presented in figure
8.2 which displays a slightly different behaviour than the prediction as well as no
drop in speed and torque towards the end of the travelled interval.
In conclusion, the simulated characteristics of the QP-solver are close to the pre-
diction. Also, the figure displays clear tendencies of compensating for prediction-
and simulation model differences as the algorithm initially increases the torque to
compensate for the speed drop and then keeps torque and speed essentially constant.
8.1.2 QP-solver performance for varying road slope
In this section two more road profiles are considered to investigate the performance
of the QP-solver alone. These road profiles are a crest (figure 8.3) and a dip (figure
8.4), where the latter has been generated by reflecting the crest in the horizontal
axis. These two types of roads are chosen as they illustrate the main characteristics
of the solutions output by the algorithm.
Figure 8.3 displays the characteristic behaviour of anticipatory driving; the vehicle increases its speed ahead of a demanding ascent that it will not be able to climb without losing momentum. To save time it accelerates to the set speed as it arrives at the top of the hill.
[Figure 8.2 plot: top panel speed (km/h), bottom panel torque (Nm), versus distance (m) over 0–1600 m.]
Figure 8.2: Actual speed and torque trajectories. Still, the solver displays a smooth behaviour and good reference speed tracking. Initially there is a small unforeseen drop in the speed and after compensating for the lost speed a steady state torque is found.
Approximately halfway into the flat plateau, the speed is
reduced as the algorithm predicts a large increase in speed when arriving at the
descent. As the engine brake is not native to the QP-solver, it first enters what is
known as eco-roll mode1 and a bit into the descent the engine brake is engaged by
transforming the requested foot brake torque to engine brake torque in the after-
treatment of the solution output by the solver.
In figure 8.4 essentially the opposite situation to that in figure 8.3 is presented. Initially the vehicle requests a mix of no torque and negative torque and remains in these modes a bit into the flat segment while it approaches the reference speed. Identifying the upcoming ascent, the torque is then increased, triggering a downshift a few hundred meters ahead of the foot of the hill.
Essentially the algorithm behaves much the same way as should be expected based
on engineering heuristics. However, it makes use of high-resolution control signals
to very precisely control and predict the trajectory of the vehicle.
1Disengaging the engine, rolling on neutral gear.
8.2 Performance of hybrid algorithm
The performance of the algorithm must be judged both with respect to its ability to lower the fuel consumption and with respect to its local behaviour as it faces non-constant road slopes. The algorithm is assessed both by evaluating the local performance in the situations presented above and by using real road data in longer simulations.
8.2.1 Hybrid algorithm torque trajectory
The torque sequence predicted by the QP-solver alone was presented in the top
panel of figure 8.1. Feeding this solution to the developed GA where the population
size and number of generations have been set to 20 and 300, respectively, results in
the solid line in figure 8.5. For convenience, the QP-solver’s output is also included
as a dashed line. The prediction is for perfectly flat ground and a preview horizon
of 1600 m with sample points uniformly distributed. What should be noticed is
that after applying the GA the resulting solution displays slow oscillations with
an amplitude of less than 10 Nm. The oscillations are smooth and due to their
small amplitude they would not be felt by the driver. Oscillations are generally
undesirable, but as will be seen in the following sections, dynamic torque saves fuel
even on flat ground as compared to when only the QP-controller is used. Thus,
these modest oscillations are caused by the controller having information about how
the working point of the engine affects the fuel consumption and actively taking this
information into account when planning the trajectory.
[Figure 8.5 plot: torque (Nm) versus sample point, 0–80.]
Figure 8.5: The optimal torque as predicted by the hybrid algorithm shown in solid. The warm start solution supplied by the QP-solver is shown as a dashed line.
8.2.2 Analysis of the behaviour of the hybrid algorithm for constant and varying road slope
For the sake of comparison, the three previously studied situations are presented (i.e. flat road, a crest, and a dip). All parameters are the same and the only addition to the algorithm is the inclusion of the GA on top of the QP-solver.
In figure 8.6 the case with flat ground is presented. The top panel describing the
vehicle speed shows no noticeable deviations away from the desired speed. Furthermore, the inclusion of the GA has led to the disappearance of the initial and
very slight drop in speed witnessed for the QP-controller. Unlike the speed, the
simulated torque output from the engine is non-constant. This dynamic behaviour
is, in view of figure 8.2, attributed to the genetic algorithm, indicating that in the
case of static driving on flat ground, the algorithm does not enter a steady state in
the strict meaning of the word. However, the torque variations are very small and
happening very slowly, making them unnoticeable to the driver.
[Figure 8.6 plot: top panel speed (km/h), bottom panel torque (Nm), versus distance (m) over 0–1600 m.]
Figure 8.6: Simulated torque and vehicle speed when using the hybrid algorithm. From the bottom panel it is clear that the torque is dynamic and does not enter a steady state, unlike the simulation with the QP-solver on flat ground shown in figure 8.2. However, the weight of the truck and the small relative amplitude cause the speed to appear constant.
When faced with a non-constant slope as in figure 8.7 it can be seen how the vehicle
first accelerates to enter the ascent with a kinetic energy reserve. The travelled hill
is too steep and long for the engine to be able to maintain the set speed and the
vehicle arrives at the crest with a somewhat lower speed. On the plateau the vehicle
initially speeds up to attain set speed, but a bit before the downhill it smoothly
reduces the torque, triggering the gearshift to happen a bit earlier than for the QP-
controller, and applies the engine brake to save fuel. In contrast to the QP-solver,
58
the engine brake is native to the GA and the engine brake fully replaces the eco-roll
mode proposed by the QP-solver for the same trip. The engine is then fully dragged the rest of the way and after the maximum allowed speed (according to the
algorithm) is reached, it is kept constant by applying the foot brake, which is not
visualised. The net result is that no fuel is consumed during the second half of the
interval.
[Figure 8.7 plot: three panels showing speed (km/h), torque (Nm) and altitude (m) versus distance (m) over 0–1600 m.]
Figure 8.7: The simulated result of applying the hybrid algorithm is shown in solid. Figure 8.3 is superimposed as dashed lines. As the simulation starts only 100 m before the demanding ascent, the best strategy is to give full throttle, but once the hill has been climbed the two algorithms choose different strategies. v_des = 72 km/h.
In a comparison between figure 8.8 and the corresponding figure 8.4 illustrating the
case when only the QP-solver is employed, it should be noted that although the
speed trajectories are very similar, the engine torque requested by the two versions of the algorithm differs. A noteworthy difference is that the GA manages to postpone the gear shift without any means of controlling the gear shifting logic, indicating that
the inclusion of gear prediction affects the final behaviour of the vehicle and endows
the algorithm with extended control capabilities, although only indirect ones. In the
simulation this postponement is achieved by a more modest torque increase than
the corresponding increase requested by the QP-algorithm, the trade-off being a
marginal decrease in average speed.
[Figure 8.8 plot: three panels showing speed (km/h), torque (Nm) and altitude (m) versus distance (m) over 0–1600 m.]
Figure 8.8: Simulated behaviour for the case when the hybrid algorithm is faced with a significant dip in the road profile. The solid line represents the hybrid algorithm while the corresponding output from the QP-solver has been superimposed as dashed lines. v_des = 72 km/h.
8.3 Numerical comparison for short-distance performance
To be successful in handling real driving situations, it is of great importance that
the algorithm can handle the representative segments presented above well. There
are many factors that determine the performance of the algorithm, some of which
are driver comfort, fuel consumption, and travel time. The driver comfort has
been addressed directly in the algorithm, but this section is exclusively concerned
with the fuel consumption and travel time. To this end table 8.1 presents the
fuel consumption and travel time for both algorithms faced with the above road
topographies. For the stochastic GA, the simulations have been run 10 times each
and then averages have been computed.
Table 8.1: Simulation results for the QP-solver (QP) and the hy-
brid algorithm (GA) for the three road profiles. Data is given as
<fuel_consumption[l/100km]>/<average_speed[km/h]>.
Road profile QP GA Difference (%)
Flat 31.7808/71.91 31.7647/71.89 -0.05/-0.023
Crest 47.3880/71.66 45.2624/70.99 -5.69/-1.21
Dip 39.0651/74.01 37.4764/73.45 -4.29/-0.43
For all three topographies the GA displays a reduced fuel consumption, ranging
from 0.05% to 5.69%, when compared to the QP-solver. In terms of travel time, the GA is a little bit slower on all segments. The biggest difference is for the crest, where the hybrid controller particularly favours fuel savings over travel time. The numerical values indicate that the addition of the GA saves fuel, but it should be clearly emphasised that these values are merely indicators as they are generated from artificial road segments and short travel distances. Furthermore, the results do not hold any information about whether the GA can reduce the fuel consumption more than today's Active Prediction.
8.4 Large scale evaluation
In addition to evaluating the behaviour of the algorithms when faced with a specific
local road topography, its overall performance over long distances was assessed.
This evaluation was done by using recorded road data for the highway connecting
Södertälje and Norrköping, measuring approximately 100km.
In figure 8.9 the complete simulations are shown in terms of speed, engine torque and altitude. As for the engine torque, the solutions are very similar. Both algorithms offer smooth torque changes to ensure driver comfort, but the different cost functions and prediction models used for the two optimisation algorithms have caused the GA-
solution to deviate from the QP-solution used as starting point. In turn this has
shifted some of the gear shifts, identified by the sudden drops in engine torque. The
numerical values from the simulations are presented in table 8.2. From the table
it is evident that the GA-controller saves fuel as compared to when only the QP-
controller is used. As regards the speed, the table indicates that the mean speed is
lower for the GA-controller than the QP-solver. But while this is true, it must be
noted that the desired speed is set to 72 km/h and that the average speed of the
GA-controller therefore is closer to the desired speed.
Table 8.2: Simulation results for the QP-solver (QP) and the hybrid algorithm
(GA) for the Södertälje-Norrköping segment. The total distance simulated is 100km
and the average is formed by simulating the trip 5 times.
Algorithm Fuel consumption (l/100km) Average speed (km/h)
QP 34.4974 72.39
GA 34.4404 72.11
Difference (%) -1.63 -0.392
[Figure 8.9 plot: five panels showing GA speed (km/h) and torque (Nm), QP speed (km/h) and torque (Nm), and altitude (m), all versus distance (m, ×10^4).]
Figure 8.9: Panel 1 and 2 (from the top): Simulation result for the Södertälje-
Norrköping segment when using GA. There are clear deviations away from the set
speed in the vicinity of demanding ascents/descents, but under more static condi-
tions the vehicle closely tracks the reference speed. Panel 3 and 4: Simulation result
for the Södertälje-Norrköping segment when using the QP-solver alone. Compar-
ing panel 1 and 3, it can be seen that the strategies are similar but still notably
different. While the GA effectively utilises the engine brake and engine efficiency
information as well as a gear-prediction model, the QP shares the main traits with
the GA but lacks the high-resolution finesse exhibited by the GA. Panel 5: Altitude
of the travelled road.
8.5 Average performance of genetic algorithm
As a measure of the performance of the algorithm, it is also evaluated based on the
spread of the results in fuel-speed-space. To increase the number of data points, the
simulation was run 60 times, but only on the first 25 kilometers of the Södertälje-
Norrköping segment. The results are presented in figure 8.10. Averaged over the runs, the mean speed of the hybrid algorithm is 0.35% lower than that of the QP-controller, while
the average fuel consumption is lowered by 1.73%. Evidently, the cluster generated
from simulation with the genetic algorithm is different from the simulation with only
the QP-solver active, both in terms of trip time and fuel consumption. This makes
it more difficult to determine the exact effect of the inclusion of the GA on fuel
consumption or average speed alone. However, while the desired speed for either
one of the algorithms could have been adjusted in order to enforce similar average
speeds, this option was discarded in favour of having the two algorithms use the
same set of values for the parameters shared between them.
Furthermore, while the average speed is lower for the hybrid solver, it should again
be noted that the utilisation of the hybrid solver leads to average speeds closer to
the desired speed, but with all values slightly exceeding it, thus guaranteeing that no
time is lost with respect to travelling at the set speed. From the scatter plot it should
also be noted that the data from the GA-controller exhibits traits of predictability
as the corresponding data points form a dense group with low variance.
[Figure 8.10 plot: scatter of average speed (km/h) versus fuel consumption (l/100km); legend: QP, Hybrid algorithm.]
Figure 8.10: Scatter plot of the fuel consumption and average speed for the two
algorithms running on the first 25 kilometers of the Södertälje-Norrköping segment.
8.6 Computational footprint
The major computational footprint is that of the genetic algorithm, but the QP-solver, which is not supported by MATLAB Coder and thus is executed as an ordinary MATLAB function call, also adds to the computational time. The computational time was measured with MATLAB's built-in tic-toc function on a computer with an Intel Core i7 (3.60GHz) processor. The graph in figure 8.11 represents the execution time for the hybrid as well as the QP-algorithm when simulated on the first 25 km of the Södertälje-Norrköping segment.
[Figure 8.11 plot: three panels showing GA execution time (s), QP-solver execution time (s), and road altitude (m), all versus distance (m, ×10^4).]
Figure 8.11: Top panel: Execution time of the GA for the first 25 kilometers of the Södertälje-Norrköping segment. Middle panel: Execution time of the QP-solver for the first 25 kilometers of the Södertälje-Norrköping segment. The red lines indicate the mean computation time. Bottom panel: Altitude of the travelled road, included to emphasise the computational times' dependence on topography.
From the figure it can be seen that the execution times for both solvers have small local variance, which indicates predictable computational times. Furthermore, it is evident that the average run-time of the GA is approximately 20 times as long as that of the QP-solver.
The small local variance of the computational times helps emphasise the computa-
tional times’ dependence on local road topography. From the inclusion of the road
topography in the bottom panel it becomes clear that the computational time of the GA increases by up to approximately 5%, or 0.02 s, in the vicinity of the steepest descent. For the QP-solver, the relative increase is around 25%, but due to
the shorter execution times the absolute difference is less than that of the GA.
9 Discussion
In this thesis there was the direct goal of developing an algorithm capable of de-
creasing the fuel consumption while ensuring driver comfort and without changing
the trip time considerably. From the simulation results it is clear that the addition
of the GA improves the fuel efficiency as compared to applying the QP-solver alone
with maintained driver comfort. In figure 8.7 and 8.8, on the other hand, an ad-
ditional and very important trait is manifested. The developed controller cannot
control the gear directly, but nevertheless it is seen from the figures that it managed
to influence the gear shifts (i.e. both postpone and move shifts forward as compared
to the gear shifts observed when only the QP-controller was employed). The algorithm achieved this with the only instructions being to reduce fuel consumption, drive smoothly and stay in the vicinity of the set speed; that is, without any instruction to control the gear. Although being able to indirectly control
the gear does not generalise to most vehicle control problems, the mere observation
of this behaviour implies that the algorithm is able to draw conclusions that have
not been included in the algorithm design. Importantly this characteristic endows
the algorithm with an ability that loosely may be referred to as a kind of reasoning.
9.1 Decoupling of cost function, prediction model and solver
A result of what was described as reasoning in the previous section was an algorithm
that required less strict definitions of the optimisation problem. For the algorithm
to work, it only required access to a fitness function and it would stochastically
work its way towards the optimum in a directed search. A direct gain of this was
that the focus of the design process was moved away from how the goal should
be reached to what the goal should be. This is important since the cost function,
prediction model and solution method in general are highly coupled in conventional
mathematical optimisation and the responsibility of managing this coupling and
matching the problem formulation to the solver falls on the developers. Although
the cost functions of the QP-solver and GA were intentionally made very similar,
the above mentioned decoupling was clearly observed and taken advantage of.
9.2 Computational footprint
From the very beginning of the project GAs were known to be computationally demanding and subject to an inherent risk of premature convergence or failure to converge. These characteristics were all observed in the development process, carefully taken into consideration and consciously addressed. However, as the algorithm shows signs of stochasticity in the final sequence of control signals, it must be concluded that either there is more than one global optimum or the algorithm fails to find the global optimum. Observations of the values of the fitness function during operation imply the latter, but it should be noted that this does not necessarily contradict the former. It should also be taken into consideration
that very few generations and individuals were used in relation to the size of the
search space. Also, the vehicle prediction model is indeed a simplification and even
if the global optimum with respect to this model were to be found, it would not
make sense to claim that it is the true optimum.
Most effort was put into the task of adapting the algorithm to vehicle control.
However, throughout the design of the algorithm there was a permeating thought
of keeping the computational footprint low. Somewhat contradictorily, as a consequence of their fast prototyping and extensive simulation capabilities, MATLAB and Simulink were used as the main tools, and MATLAB Coder was extensively used to improve the execution performance. Despite significant improvement in terms of computational speed, considerable overhead is added by the coder, which makes it difficult to use the computational resources to the fullest. The main computer was equipped with a powerful Intel Core i7 processor rated at 3.60GHz, but the algorithm could also run effortlessly on a laptop with a 2.4GHz Intel Core i5. In the algorithm's current form it is deemed too demanding for on-board operation, but since thoroughly optimising the implementation was not part of the project, no definitive statement about the suitability for on-board operation can be made.
9.3 Applicability to vehicle control
Disregarding the computational complexity, the algorithm shows potential to be
used in look-ahead control. While maintaining all the main traits of the previously
evaluated QP-solver (see [37, 38]), it manages to reduce the fuel consumption in the
simulations even further. As found in [37], the fuel consumption for a 28000kg truck
was lowered by 8.1% compared to a standard cruise controller; a number that thus
possibly could be increased a bit more with the addition of the developed algorithm.
As predictive cruise controllers are not the only longitudinal control systems that
require (or will require) on-board optimisation procedures, it is of relevance to assess
this algorithm’s applicability even for other systems. In general it is a difficult task
to include constraints in a genetic algorithm, which makes the quite constraint-
free predictive cruise controller well suited for prototyping. The indicators are that
10 Conclusion
From the simulation results it is evident that the developed genetic algorithm leads
to improved fuel efficiency without notably altering the trip time when compared to
a conventional mathematical optimisation algorithm. A principal conclusion is that,
given the simulation model, the hybrid genetic algorithm is an improvement over the
QP-solver alone. As regards real-world implementation it can only be said that the
algorithm displays potential for being successfully applied to real-time vehicle control.
This conclusion can neither be rejected nor confirmed as regards the computational
resources available on board Scania vehicles. What can be confirmed, however, is
that the implementation as it is done in this thesis is too heavy for on-board real-
time operation, but it must be emphasised that the current implementation offers
much potential for optimisation.
Furthermore, with multiple objectives (i.e. smooth driving, fuel efficiency, and ref-
erence speed tracking) the algorithm was assessed from multiple perspectives. From
the fuel efficiency point of view, the proposed algorithm is an improvement over
the conventional QP-solver, even on flat ground where the QP-controller outputs
a steady torque and closely follows the reference speed. With only a very slight
decrease in fuel consumption on perfectly flat ground, it is concluded that the real
gain of adding the GA-layer to the look-ahead controller is observed in dynamic
slope situations. Although the end result is that the controller tends to increase the
speed ahead of ascents and conversely decrease the speed before upcoming descents,
very much like a skilled driver would, the inclusion of the algorithm in the loop
introduces the crucial difference of being able to optimise the realisation of these
strategies with high-resolution control signals. This behaviour was indeed displayed
by the QP-solver alone. However, from the results of this thesis, it is concluded
that there is potential for further improvements in terms of fuel consumption, as
compared to the QP-solver, by introducing empirically collected engine data and a
simplified gear-prediction model along with the genetic algorithm.
11 Future work
In this section a number of suggestions for future work are presented. The suggestions concern both potential improvements of the algorithm developed in this thesis and continuations of the conceptual idea of using a genetic algorithm for vehicle control.
11.1 Improving execution speed
• Addressing the current application and revisiting the execution times presented in figure 8.11, there is a set of measures available to decrease the computational burden, both in terms of computational time as a result of using a different language, and by rewriting the actual methods without changing their behaviour. A promising continuation would be to eliminate the overhead introduced by the MATLAB Coder and port the code to pure C/C++. This also means that the code can be written so as to utilise the embedded system in the optimal way.
• Using the built-in code profiler in MATLAB it was confirmed that the evaluation function accounts for far more of the execution time than all other functions. As the individuals are evaluated independently of each other, the evaluation function offers great potential for improvement through parallelisation, as sketched below.
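A minimal sketch of such a parallelisation, assuming the Parallel Computing Toolbox is available and with evalFcn as a placeholder for the per-individual evaluation function:

function fitness = evaluatePopulation(pop, evalFcn)
% Parallel fitness evaluation: the individuals (rows of pop) are
% independent and can thus be evaluated on separate workers.
N = size(pop, 1);
fitness = zeros(N, 1);
parfor i = 1:N
    fitness(i) = evalFcn(pop(i, :));
end
end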
• A complementary approach to improving the execution speed is to reduce the
actual complexity of the algorithm. As presented in the introductory chapters
of this thesis, this has been addressed from many perspectives, one of which is
to approximate the evaluation function. As the need to do so increases with
increasing function complexity, the approximation method must in general
be able to capture non-linearities and other complex traits of the evaluation
function. The full story falls outside the scope of this thesis, but suffice it to say
that artificial neural networks constitute a group of methods that meet these
needs and are widely applied today.
11.2 Improving and extending the algorithm
• As outlined in the beginning, investigating the applicability of the genetic
algorithm to vehicle control was the main focus in this thesis. The idea of
look-ahead control is nothing new, and under the assumption that there is no
interference from surrounding traffic, the optimisation problem is simplified as
the number of constraints significantly decreases and thus also the complexity
of the problem. The results do imply that the addition of the GA improved
fuel consumption, but to harvest more of the outlined potential more algo-
rithmically demanding situations should be considered in further studies. It
cannot be said for sure, but there are indicators in this thesis that in order
to apply the algorithm to a problem of greater complexity than LACC with
sparse traffic the problem formulation should be revised so as to not increase
the search space dimension above the current 80 dimensions.
• Real value encoding was chosen over binary encoding partly because it signif-
icantly reduces the chromosome length. However, with 80 values per chromo-
some, the search space is still of significant dimensionality with respect to the
number of individuals and number of generations used in this thesis. Viewing
the developed controller as a path planner in torque space, it is reasonable to
borrow ideas from the field of pure path planning. For driver comfort it was
claimed that the torque should display smooth transitions. This in turn opens
up for the use of primitives1 that code for the torque output over a longer
distance than that of a single segment in the current algorithm. An example
of how this could be done is to generate a set of primitives offline. In the
algorithm, instead of coding for a single torque value, each gene codes for the
type of primitive as well as its ”amplitude”. The final torque trajectory is then
formed by placing the primitives one after the other, as sketched below.
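A minimal sketch of such an assembly, assuming an offline-generated library of unit-amplitude primitives stored one per row; the function name and representation are illustrative assumptions.

function torque = assembleFromPrimitives(types, amplitudes, primitives)
% Primitive-based torque trajectory: each gene codes for a primitive
% type and an amplitude, and the trajectory is formed by concatenating
% the scaled primitives one after the other.
segments = arrayfun(@(t, a) a * primitives(t, :), ...
                    types, amplitudes, 'UniformOutput', false);
torque = [segments{:}];
end

For example, with primitives = [ones(1, 10); linspace(0, 1, 10)], the genes types = [1 2 1] and amplitudes = [300 150 250] describe a constant segment, a ramp and another constant segment, concatenated into a 30-sample torque trajectory.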
• As the dynamics of HDVs are slow, the long look-ahead is crucial to even have
thepotentialtooptimisethetrajectory, nomatterthequalityofthealgorithm.
Typically the error of the predicted state of the vehicle increases the further
it is into the future due to error accumulation. This contradicts the use of
high-resolution data at the far end of the prediction horizon. Also, the most
important constraints in a vehicle-control problem of this type are likely to
be local (e.g. avoiding other vehicles or driving as close as possible to the
vehicle ahead). As a consequence, using high-resolution data and variables for
the whole prediction horizon leads to increased computational demand for no
gain. Originating from this observation, a promising approach is to develop
methods that only optimise the control sequence as far into the future as is
meaningful and approximate the cost associated with the far end of the look-
ahead horizon.
1Primitives are the smallest parts that a solution (i.e. torque trajectory) consists of.