is capable of dealing with complicated interrelationships caused by domino effects. Meanwhile,
extra consequences or risk factors, such as environmental concerns, human factors and safety
barriers, can be easily added to the proposed BN because of its flexibility.
Sensitivity studies on basic nodes and secondary tanker explosion and fire are conducted in
this research. The results of the sensitivity study show the following:
Smoking is the most dangerous ignition source at petrol stations and should be totally
banned.
Overfill is the most probable cause of release, while hose rupture may cause the most
catastrophic consequences.
The refuelling job from a tanker to a storage tank at a petrol station can be moved to
night time if possible as only a small number of people may be affected.
Tanker explosions and fires at petrol stations increase human losses. Therefore, safety
barriers for stopping tanker explosion and quick evacuation are important for human
safety.
Statistical data, numerical simulations and logical judgements were the three data sources used
in this study. Such a combination of data sources decreases the uncertainty caused by data
shortage and improves the accuracy and reliability of BN quantification. However, for
quantification based on subjective judgements, adjustments are required based on the specific
conditions of different projects.
CHAPTER 4. GRID-BASED RISK MAPPING FOR EXPLOSION ACCIDENTS AT
LARGE ONSHORE FACILITIES
4.1 INTRODUCTION
In this chapter, a grid-based risk mapping method is developed to enable a more detailed
explosion risk analysis for large areas with complicated conditions. The proposed method
divides the target site into a number of grids of appropriate size and with simplified conditions.
Then, risk analyses can be conducted easily within each grid, and finally, a risk map can be
produced for the whole target area.
In the gas processing industry, not only would process facilities be damaged during an
explosion event, but severe human loss may also be incurred, due to the large population and
complicated environment of residential areas, if the gas facility is located close to residential
areas. For example, on 31 July 2014, a series of gas explosions occurred in Kaohsiung, Taiwan,
which caused 32 fatalities and 321 injuries. More than four main roads with a total length of
approximately 6 km were damaged, and traffic was blocked for several months (Liaw, 2016). In
2013, another severe explosion occurred in storm drains in Qingdao, China, and caused 62
fatalities and 136 injuries (Zhu et al., 2015).
For risk analysis of such large areas under complex circumstances, it is difficult for traditional
macroscale analysis to consider all specific local details and deal with complicated conditions.
Therefore, a grid-based risk mapping method is developed to enable a more detailed explosion
risk analysis. A limited amount of research has applied grid-based risk analysis methods to
process safety. Pula et al. (2006) employed grid-based impact modelling to model and analyse
radiation and overpressures at different locations in the process area. Seo and Bae (2016)
applied a grid-based method to risk assessment of fire accidents in offshore installations.
Zohdirad et al. (2016) used a grid-based method to assess the risk from secondary grade
releases and to determine the accuracy of the resulting risk evaluations.
Meanwhile, to conduct risk analyses of both process and residential areas, multiple
consequences, such as overpressure impacts, building damage, and human loss, need to be
considered. In order to capture these multiple consequences and the complex inter-relationships
between the consequences and the basic risk influence factors, a Bayesian network is also
implemented as the risk modelling tool for the proposed grid-based method.
4.2 MODELLING
The proposed grid-based risk profiling method consists of the following steps.
Gridding: Decide the grid size and collect information for each grid.
Modelling: Build the BN based on the risk scenarios and the consequences of concern.
Quantification: Find data to quantify the established BN.
Analysis: Calculate probabilities of target nodes of BN.
Result: Output risk for each grid to conduct total risk mapping.
4.2.1 Grid-based Analysis
A grid-based risk analysis method is employed to enable better modelling and assessment of
explosion loads, building damage, and human loss at different locations in both the process
area and nearby residential areas. As shown in Figure 4.1, the target area is divided into a
specific number of computational grids, and the risks are then evaluated at each grid.
Information for each grid needs to be collected according to the consequences of concern. For instance,
the building type has to be defined to estimate potential building damage, and similarly, the size
of the population of each grid affects the risk of human loss. The more consequences that need to
be considered, the more information is required.
Figure 4.1 Example of gridding
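As a small illustration of this gridding step, the sketch below (MATLAB) sets up a uniform grid and stores the per-grid site information as integer codes. The 50 m cell size and the 2 km by 2 km domain are taken from the case study in Section 4.3; the building-type and population codes and the example blocks are illustrative assumptions only.

% Illustrative gridding sketch (assumed values: 2 km x 2 km domain, 50 m cells)
domain   = 2000;                      % domain edge length in metres
cellSize = 50;                        % grid size in metres
n        = domain/cellSize;           % number of grids per side (40)
% Per-grid site information, coded as integers
% building type: 1 = residential, 2 = tank, 3 = process facility, 4 = no building
% population:    1 = large, 2 = medium, 3 = small, 4 = little
buildingType = 4*ones(n,n);           % default: no building
population   = 4*ones(n,n);           % default: little population
% Example blocks (purely illustrative): a residential area and a tank area
buildingType(1:10, 30:40) = 1;   population(1:10, 30:40) = 1;
buildingType(20:22, 5:8)  = 2;   population(20:22, 5:8)  = 4;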
4.2.2 Bayesian network Modelling
A BN is an illustrative diagram that contains nodes and links with conditional probabilities.
Figure 4.2 shows a BN of gas explosion events that is used to evaluate the risks of both building
damage and human loss. It is a simplified network with 9 nodes and 10 links, which represents
only the critical factors of explosion and other consequences. However, BNs are flexible, which
means that extra information, such as safety barriers, human errors, or environmental concerns,
can easily be added to the original network. The nodes and the states of each node are listed in
Table 4.1. The states of explosion loads are defined based on damage classifications introduced
by Lobato et al. (2009).
Figure 4.2 Proposed BN for explosion risks
Table 4.1 Nodes and states of the proposed BN
Node No.   Node Name           No. of States   States
A          Wind Direction      4               East; South; West; North
B          Wind Speed          3               Low; Significant; High
C          Release Severity    3               Major; Significant; Minor
D          Congestion          3               High; Medium; Low
E          Explosion Loads     5               a: 0-0.0204 bar, "safety distance"; b: 0.0204-0.17 bar, up to 50% destruction of buildings; c: 0.17-0.689 bar, up to total destruction of buildings; d: 0.689-1.01 bar, total destruction of buildings; e: > 1.01 bar, probable death due to lung haemorrhage
F          Building Type       4               Residential; Tank; Process facilities; No building
G          Population          4               Large; Medium; Small; Little
H          Building Damage     4               Major; Medium; Minor; No damage
I          Human Loss          4               Major; Medium; Minor; Little
The reliability of the BN model is important to the accuracy of the results of the proposed method. In order
to improve the reliability of the BN analysis, a few indices, such as the Ranked Probability
Score (Epstein, 1969), Weaver's Surprise Index (Weaver, 1948) and Good's Logarithmic
Score (Good, 1952), have been proposed. Different BN errors, involving node errors, edge errors,
state errors, and prior probability errors in the latent structure, can then be identified and
corrected. Detailed definitions and explanations of the indices can be found in Williamson et
al. (2000).
4.2.3 Quantification of Bayesian network
The quantification of a BN can be divided into two parts, finding the probabilities of the basic
nodes and defining the conditional probabilities of the inter-relationship between these nodes.
Quantification based on historical statistical data is the most convenient way. However, it is
difficult to find available data to quantify the inter-relationship between nodes for two main
reasons. First, most of the available cases only provide the consequences, such as fatalities or
estimated economic losses, of an explosion event, so inter-relationships between middle
nodes cannot be defined. Second, due to the complex structure of the proposed BN and the
large number of combinations of states involved, hundreds of detailed records are required for
sufficient quantification. Therefore, two other quantification methods, numerical simulation
and logical judgments, are applied in this study because of the limitations of the statistical data.
Quantification of Basic Nodes
The proposed BN has five basic nodes: wind direction, wind speed, release severity, building
type, and population. Information about wind direction and wind speed can be found from local
weather data resources online. As for the release severity, hydrocarbon release data from the
Health and Safety Executive (HSE) annual report (2016) is selected. Table 4.2 shows the HSE
recorded number of accidents from 2006 to 2015 and summarises the probability of each state.
The basic nodes of site information, such as building type and population, for each grid
depend on the specific conditions within the grid area and are decided by subjective judgments.
Table 4.2 HSE data of hydrocarbon releases
Year 06 07 08 09 10 11 12 13 14 15 Probability
Minor 113 110 93 95 109 82 58 70 47 49 58.33%
Significant 73 71 52 81 73 57 39 42 30 32 38.84%
Major 4 4 2 3 4 3 8 6 3 3 2.83%
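The probabilities in the last column of Table 4.2 are simply the ten-year totals of each severity class divided by the total number of recorded releases, which can be checked in a few lines of MATLAB:

minor       = [113 110 93 95 109 82 58 70 47 49];    % yearly counts from Table 4.2
significant = [73 71 52 81 73 57 39 42 30 32];
major       = [4 4 2 3 4 3 8 6 3 3];
total = sum(minor) + sum(significant) + sum(major);   % 1416 recorded releases
p = [sum(minor) sum(significant) sum(major)]/total    % = [0.5833 0.3884 0.0283]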
Quantification of Inter-relationships
For quantification of inter-relationships, the proposed BN is divided into two sub-networks: a
sub-network of explosion loads, including nodes A, B, C, D, and E, and a sub-network of building
damage and human loss, including nodes E, F, G, H, and I. As mentioned, numerical simulation
and logical judgments are used to quantify the inter-relationships between nodes.
Similar to Chapter 3, for the inter-relationship between basic explosion factors and consequent
overpressures, numerical simulation using DNV PHAST is applied to provide data for
quantification. Seventy-two cases have been conducted in order to provide sufficient data for
such quantification. There are four steps to conducting a PHAST analysis: input data, build
model, perform calculation, and output result. The four steps are briefly introduced below and
more details about how to use PHAST can be found in the PHAST manual (DNV GL, 2016).
Input data: includes the site map, weather conditions, and data for explosion analysis.
Build model: select the analysis method and define the explosion scenarios.
Calculate: define the calculation scenarios and run the simulation.
Output result: results can be GIS outputs, result diagrams, and reports.
To quantify the inter-relationships of the sub-network of building damage and human loss, logical
judgments are mainly used due to the limitations of the data. This kind of subjective judgment
is able to provide a certain level of accuracy and reliability when the logical relationship
between nodes is simple and clear. However, such quantifications require regular examination,
and if the site condition changes, adjustments are required to ensure the logical relationships
are up to date. Meanwhile, a confidence-based method can be used to reduce the uncertainties
of subjective judgments when logical relationships are complicated and uncertain (Huang et
al., 2015).
4.2.4 Calculation of Bayesian network
Calculation of sub-network of explosion loads
Figure 4.3 shows the sub-network of explosion loads. The three basic nodes are wind direction,
wind speed, and release severity. The release severity and wind speed define the cloud size, while
the cloud location is decided by the wind direction. Then, the congestion condition can be determined
based on the cloud size and location. Finally, the frequency of each explosion load level is
calculated from the release severity and the congestion condition.
Figure 4.3 The sub-network for estimating explosion loads
This sub-network contains five nodes and five links. The prior probability of explosion loads
can be calculated using Equation 4.1.
P(E = a) = \sum_{i=1}^{4} \sum_{j=1}^{3} \sum_{k=1}^{3} \sum_{h=1}^{3} P(E = a, A = A_i, B = B_j, C = C_k, D = D_h),   (4.1)

where P is the probability, E is the explosion loads, a is the state "a" of node E, A is the wind
direction, A_i is the states of node A, B is the wind speed, B_j is the states of node B, C is the
release severity, C_k is the states of node C, D is the congestion, and D_h is the states of node D
(see Table 4.1). Based on the theorem of BN (Nielsen and Jensen, 2009), the joint probability
can be decided by Equation 4.2.
P(x_1, \ldots, x_n) = \prod_{i=1}^{n} P(x_i \mid Pa(x_i)),   (4.2)

where Pa(x_i) is the parent set of x_i. The term reduces to the unconditional probability
P(x_i) if x_i has no parents. In this sub-network, the node of congestion has parents of
wind direction, wind speed, and release severity, and the node of explosion loads has parents
of congestion and release severity. Therefore, the following equation can be decided:
P(E = a, A = A_i, B = B_j, C = C_k, D = D_h) = P(E = a \mid D = D_h, C = C_k) \times P(D = D_h \mid C = C_k, B = B_j, A = A_i) \times P(C = C_k) \times P(B = B_j) \times P(A = A_i).   (4.3)
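Equations 4.1 to 4.3 can be evaluated by looping over every combination of parent states. The MATLAB sketch below illustrates the calculation; the prior vectors use figures from Tables 4.2 and 4.3, while the two conditional probability tables are filled with placeholder random numbers, since their quantified values come from the PHAST simulations and logical judgments described later.

% Sketch of Equations 4.1-4.3; CPT values are placeholders, array sizes follow Table 4.1
pA = [0.203 0.284 0.284 0.229];                   % P(A): wind direction (Table 4.3)
pB = [0.698 0.200 0.102];                         % P(B): wind speed (Table 4.3)
pC = [0.0283 0.3884 0.5833];                      % P(C): release severity, Major/Significant/Minor (Table 4.2)
cptD = rand(3,3,3,4); cptD = cptD./sum(cptD,1);   % P(D | C,B,A), placeholder numbers
cptE = rand(5,3,3);   cptE = cptE./sum(cptE,1);   % P(E | D,C),   placeholder numbers

pE = zeros(5,1);                                  % marginal P(E = a..e)
for i = 1:4                                       % wind direction states
    for j = 1:3                                   % wind speed states
        for k = 1:3                               % release severity states
            for h = 1:3                           % congestion states
                joint = cptE(:,h,k) * cptD(h,k,j,i) * pC(k) * pB(j) * pA(i);   % Equation 4.3
                pE = pE + joint;                  % Equation 4.1: sum over all parent states
            end
        end
    end
end                                               % sum(pE) returns 1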
Calculation of sub-network of building damage and human loss
As shown in Figure 4.4, two consequences, building damage and human loss, are considered
at this stage. For building damage, only building type is applied as a basic factor. Different
types of buildings provide different levels of resistance to the explosion overpressures. Then, the
total human loss is decided by explosion loads, building damage, and population within each
grid.
Figure 4.4 Sub-network for estimating building damage and human loss
Similar to the sub-network of explosion loads, this network also has five nodes and five links.
Building damage has parents of building type and explosion loads, and the parents for human
loss are explosion loads, building damage, and population. Therefore, human loss can be
calculated by Equations (4.4) and (4.5).
P(K = Major) = \sum_{i=1}^{5} \sum_{j=1}^{4} \sum_{k=1}^{4} \sum_{h=1}^{4} P(K = Major, E = E_i, F = F_j, G = G_k, H = H_h),   (4.4)

P(K = Major, E = E_i, F = F_j, G = G_k, H = H_h) = P(K = Major \mid H = H_h, G = G_k, E = E_i) \times P(H = H_h \mid F = F_j, E = E_i) \times P(G = G_k) \times P(F = F_j) \times P(E = E_i),   (4.5)

where K is the human loss, E is the explosion loads, E_i is the states of node E, F is the building
type, F_j is the states of node F, G is the population, G_k is the states of node G, H is the building
damage, and H_h is the states of node H (see Table 4.1).
4.2.5 Matrix Calculation and Result Display
In order to simplify the calculation process, the equations mentioned above are converted into
matrix calculations. All the data for each node and inter-relationship are arranged into matrices.
A MATLAB script is written to conduct the calculations between the matrices and assign the
value to each grid automatically.
For example, Figure 4.5 shows a simple illustrative BN of building damage. From this simple
BN, the probability of major building damage can be calculated by the following equation:
P(H = Major) = \sum_{i=1}^{5} \sum_{j=1}^{4} P(H = H_1, E = E_i, F = F_j) = \sum_{i=1}^{5} \sum_{j=1}^{4} P(H_1 \mid E_i, F_j) \times P(E_i) \times P(F_j) = P(H_1 \mid E_1, F_1) \times P(E_1) \times P(F_1) + P(H_1 \mid E_1, F_2) \times P(E_1) \times P(F_2) + \cdots + P(H_1 \mid E_5, F_4) \times P(E_5) \times P(F_4).   (4.6)
Figure 4.5 A simple BN for estimating building damage
The MATLAB script used to convert this equation into a matrix calculation is written as:

a = [P(H_1|E_1,F_1), P(H_1|E_1,F_2), ..., P(H_1|E_5,F_4)]    % P(H_1|E_i,F_j)    (4.7)
b = [P(E_1), P(E_2), ..., P(E_5)]                            % P(E_i)            (4.8)
c = [P(F_1), P(F_2), P(F_3), P(F_4)]                         % P(F_j)            (4.9)
P(H = Major) = sum(a' .* reshape(repmat(b,4,1),20,1) .* repmat(c',5,1))           (4.10)

Here, the entries of a follow the order of the expansion in Equation (4.6), with F varying fastest within each E, which matches the ordering produced by the reshape and repmat operations in Equation (4.10).
For each grid, a “for” loop is used to conduct the calculation automatically, and the result is
depicted with a 3D bar plot. As shown in Figure 4.6, the height of each bar represents the
probability of the related state at each grid, and a total risk profile of the target area is formed by the
combination of risks from all the grids. Such a result display provides a clear risk indicator for
each local area, and protection measures can be easily decided on based on the risk mapping.
Figure 4.6 Example of 3D result presentation
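A runnable counterpart of the script in Equations 4.7 to 4.10, together with the per-grid loop and 3D bar display just described, is sketched below. All numbers are placeholders: the conditional probabilities are random, the explosion-load probabilities are taken from the 50 m column of Table 4.8, and each grid's building type is assigned randomly rather than read from the site map.

% Matrix form of Equation 4.6 with placeholder values
a = rand(1,20);                            % P(H=Major | E_i,F_j), F varying fastest (placeholder)
b = [0.45 0.457 0.0426 0.0252 0.0255];     % P(E_i): explosion-load probabilities (50 m column, Table 4.8)

% Per-grid loop and 3D bar display
n = 40;                                    % 40 x 40 grids of 50 m over a 2 km x 2 km domain
buildingType = randi(4, n, n);             % placeholder building-type map (codes 1..4)
riskMajor = zeros(n, n);
for r = 1:n
    for s = 1:n
        c = zeros(1,4); c(buildingType(r,s)) = 1;                                    % building-type vector of this grid
        riskMajor(r,s) = sum(a' .* reshape(repmat(b,4,1),20,1) .* repmat(c',5,1));   % Equation 4.10
    end
end
bar3(riskMajor);                           % bar height = probability of major damage at each grid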
4.3 CASE STUDY
A case study was conducted to illustrate the proposed method. Figure 4.7 shows a GIS map of
a gas refinery with the grids applied. The refinery is surrounded by a residential area. From
Figure 4.7, it can be seen that the closest residential building is located only about 100-200 m
from a gas storage tank. Within this distance, consequences may be significant if an explosion
occurs. To conduct this case study, a 50 m × 50 m grid size over a domain of 2 km × 2 km
is selected based on the result of the mesh convergence study (see Section 4.3.3). The BN model
introduced in Section 4.2 is applied.
Figure 4.7 GIS map of analysis area
4.3.1 Quantification of Bayesian network
Quantification of basic nodes
As mentioned, there are five basic nodes of the proposed BN. Data on wind direction and wind
speed are collected from a website that records local weather data daily, and all the information
from 2015 is collected and analysed. Four wind directions and three wind speeds are
considered in this study and their probabilities in 2015 are listed in Table 4.3. As to the release
severity, the probability of each of the states from the HSE database can be found in Table 4.2.
Table 4.3 Probabilities of wind direction and wind speed
Wind Direction    East     South    West     North
Probability       0.203    0.284    0.284    0.229

Wind Speed        3 m/s    1.5 m/s    0.1 m/s
Probability       0.698    0.2        0.102
Site information is depicted by colours in Figure 4.8. Figure 4.8(a) shows population
information, with red, yellow, green, and blue representing large, medium, small, and little
populations, respectively. Similarly, Figure 4.8(b) describes building type with red, yellow,
green, and blue representing residential buildings, tanks, process facilities, and no buildings,
respectively. Then, Excel is used to read all the colours and output numerical data for further
analysis.
(a) Population (b) Building types
Figure 4.8 Site information
Quantification of inter-relationships
As mentioned, quantification of inter-relationships involves two parts. For the sub-network of
the explosion loads, DNV PHAST is applied to simulate explosion loads under different
conditions and provide data for BN calculation. The leak point is set at the tank that is nearest
to the residential area. Huang et al. (2016) developed a multi-level explosion risk analysis
method that can be used to screen the whole site and qualitatively determine the most dangerous
leak source; quantitative analysis can then be conducted based on the results of the risk
screening. Table 4.4 lists the input data for the PHAST analysis.
Table 4.4 Input data for PHAST analysis
Material Hydrocarbon
Flammable mass in cloud 300 kg; 30 kg; 3 kg
Wind direction East; North; West; South
Wind speed   0.1 m/s; 1.5 m/s; 3 m/s
Congestion   High; Medium; Low
Explosion load   A: > 70 kPa; B: 20-70 kPa; C: 2-20 kPa; D: 0-2 kPa
After performing calculations using PHAST, a GIS output of gas cloud dispersion is displayed
first, and consequently, the congestion level can be decided based on the cloud size and location.
Figure 4.9 shows an example of a GIS output of cloud formation. Based on the cloud size and
location from Figure 4.9, the congestion level for this scenario is defined as high.
For residential buildings and process facilities, major damage is considered when explosion loads
reach level C or above. Storage tanks normally have higher resistance levels than residential
buildings and process facilities; thus, medium damage is defined for storage tanks under level
C blast loads. The same method is applied to quantifying the inter-relationship between the basic
nodes and human loss; the corresponding table of logical judgments is too large and complicated
to be described in detail here.
Table 4.5 Inter-relationship between nodes E, F, and H
Building Type (F)    Explosion Load (E)    Building Damage (H)
Residential          E                     Major
Residential D Major
Residential C Major
Residential B Medium
Residential A Minor
Tank E Major
Tank D Major
Tank C Medium
Tank B Minor
Tank A Minor
Process facilities E Major
Process facilities D Major
Process facilities C Major
Process facilities B Medium
Process facilities A Minor
No structures E No
No structures D No
No structures C No
No structures B No
No structures A No
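Because the judgments in Table 4.5 are deterministic, they can be stored directly as a conditional probability table P(H | F, E) for the BN. The sketch below is one possible MATLAB encoding; the integer state orderings are assumptions taken from Table 4.1, and the load letters A to E in Table 4.5 are read as the states a to e of node E.

% Encode Table 4.5 as a deterministic CPT P(H | F, E)
% H states: 1 = Major, 2 = Medium, 3 = Minor, 4 = No damage
% F states: 1 = Residential, 2 = Tank, 3 = Process facilities, 4 = No building
% E states: 1..5 = load levels a..e (letters A..E in Table 4.5)
damage = [ ...            % rows = F, columns = E, entry = resulting H state
    3 2 1 1 1;            % Residential:        A Minor, B Medium, C-E Major
    3 3 2 1 1;            % Tank:               A-B Minor, C Medium, D-E Major
    3 2 1 1 1;            % Process facilities: A Minor, B Medium, C-E Major
    4 4 4 4 4 ];          % No building:        no damage at any load level
cptH = zeros(4,4,5);      % P(H | F, E)
for f = 1:4
    for e = 1:5
        cptH(damage(f,e), f, e) = 1;    % probability 1 on the judged damage state
    end
end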
4.3.2 Results and discussion
Based on the equations in Section 4.2.4 and the network quantification, the probability of each state
of explosion loads, building damage, and human loss can be calculated and output as a 3D
risk map that shows the risk level of each grid. Figures 4.11, 4.12, and 4.13 give the final results
of the probabilities of explosion loads, building damage, and human loss, respectively.
(a) State “Major” (b) State “Medium”
(c) State “Minor” (d) State “Little”
Figure 4.13 Risk mapping of human loss
Figure 4.13 describes the risks of human loss. From Figure 4.13(a), it can be observed that the
most dangerous region for human safety is located in the residential area close to the explosion
centre because of the large population that is assumed within that area. It can also be seen that
there is a gap between the residential area and the factory with a very low chance for major
human loss because no structures are present in that area. Therefore, if projectiles and fires are
not present in the explosion, evacuation to the area without buildings is probably a better choice
than sheltering inside the buildings within the dangerous region.
Figure 4.13(b) shows that even far from the explosion centre there is still a chance of injury,
and medium human loss can still occur. The main reason is that the storage
tank is located too close to the residential area and partial building damage may happen within
1800 m under level “B” explosion loads. Therefore, careful design of protection barriers and
structural strengthening of buildings are required for human safety.
Other than local structural strengthening to protect the buildings and people in the affected
areas, there are also some risk reduction methods that can be applied to the facility directly
in order to reduce the total risk of explosion events. These methods include both
consequence mitigation and likelihood reduction approaches. Measures for reducing explosion
risks are listed and briefly explained in Tables 4.6 and 4.7, covering oil and gas facility design
and explosion barrier installations.
Table 4.6 Explosion risk reduction method from design aspects
Equipment minimization: Leak frequency is proportional to the number of process equipment items on the platform. Therefore, process systems that are as simple as possible are desirable.

Inventory minimization: The inventory in the process system may be related to the duration of any leak, and to the time required for blowdown.

Inventory pressure: Flammable cloud size is determined by the leak dimension and the pressure of the inventory. Reduced inventory pressure will reduce the explosive cloud dimensions and the severity of the explosion event. It will also result in a lower inventory mass within the system, which gives the potential for a more rapid blowdown and a reduced escalation consequence.

Operations and maintenance procedures: Errors in maintenance and operating procedures are important causes of leaks. The potential effect of improvements in these areas on the leak frequency is mainly judgmental at present, although human reliability modelling may give some guide.

Ventilation: The ignition probability depends on the gas concentration and the ignition sources in the area. Free or forced ventilation is able to reduce the gas concentration.

Ignition source minimization: In general, the main ignition sources are welding/hot work, compressors, electrical equipment and engines/exhausts. Removing or minimizing some of those sources is possible. For instance, lights can be switched off when not needed, or floodlights can illuminate hazardous areas from safer zones.

Ignition source location: The highest overpressures in congested modules tend to arise when the ignition point is at the furthest point from a main vent. Although there is potential for ignition to occur at practically any point within the module, moving ignition sources away from such extremities will, to some extent, lower the potential for high explosion overpressures to occur.

Minimization of congestion: Explosion events are most likely to occur in congested areas, and therefore avoiding congestion in the modules can reduce both the probability of explosion and the overpressure if an explosion does occur.

Emergency procedures: Local fatalities may be avoided if the personnel in the area become aware of a leak, by alarms or by their own observation, and escape from the area before ignition occurs. This can be covered under emergency procedures.
Table 4.7 Explosion risk reduction method based on barriers
Emergency shut down systems (ESD): An effective ESD system will limit the inventory released in an incident and therefore the size and duration of any resulting fire. The location of the ESD valves will determine the areas where each particular inventory could be released.

Isolation and blowdown: A leak may be reduced by isolating it manually or using the ESD system, and depressurising the leaking section using the blowdown system. Damage or fatality risk in escalation can be reduced by isolation and blowdown, and sometimes the necessity of evacuation may be avoided.

Blast wall: Blast walls have long been used to protect adjacent areas from the effects of overpressure. These walls are designed to absorb blast energy by displacement.

Detection device: Detection measures can be used to identify hazardous conditions on the plant such as excess process pressure, an unignited release of flammable gas or a fire. Detection devices enable control or mitigation measures and emergency response to be initiated.

Alarm: The alarm system may allow operators to mitigate leaks before they ignite, or at least to evacuate the area.

Safety gap: In the process industry, the safety gap is an open space, with no congestion, deliberately placed in between congested process areas. The absence of obstacles in a safety gap eliminates the fluid-obstacle interaction, thereby preventing the generation of turbulence. It can be very effective in reducing pressures prior to the onset of detonation.
4.3.3 Mesh convergence
A mesh convergence study was conducted in order to determine an optimal balance between
accuracy and computational time. During the mesh convergence study, all the information from
each grid was put together to form a total input into the BN. Four grid sizes (200 m, 100 m,
50 m, and 25 m) were tested, and the results are listed in Table 4.8.
Table 4.8 Results from different mesh sizes
Grid Size                   200 m     100 m     50 m      25 m
Explosion Loads   A         0.408     0.434     0.45      0.453
                  B         0.464     0.464     0.457     0.458
                  C         0.0514    0.046     0.0426    0.0403
                  D         0.0315    0.0269    0.0252    0.0247
                  E         0.045     0.0291    0.0255    0.0245
Building Damage   Major     0.118     0.0954    0.0608    0.0539
                  Medium    0.122     0.084     0.0672    0.0635
                  Minor     0.33      0.248     0.178     0.165
                  No        0.43      0.572     0.694     0.718
Human Loss        Major     0.134     0.094     0.0611    0.0574
                  Medium    0.167     0.119     0.086     0.0803
                  Minor     0.251     0.193     0.147     0.1356
                  Little    0.448     0.594     0.705     0.727
Figure 4.14 shows the results for explosion loads, building damage, and human loss from the
different grid sizes. From Figure 4.14(a), there is not much difference among the four grid sizes
in the probabilities of the explosion load levels. However, for building damage and human loss,
the probabilities of each state show large differences until the grid size is reduced to 50 m. When
the grid size is reduced from 50 m to 25 m, the change in probability is less than approximately
5%. Therefore, a grid size of 50 m × 50 m is applied in this study.
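The convergence criterion described above can be checked directly from Table 4.8. The MATLAB sketch below reads the criterion as the absolute change in the probability of each state between successive grid refinements; for the human-loss rows, the change from 50 m to 25 m stays below about 0.05 (5 percentage points).

% Convergence check on the human-loss probabilities of Table 4.8
gridSizes = [200 100 50 25];                 % grid edge lengths in metres
humanLoss = [0.134 0.094 0.0611 0.0574;      % Major
             0.167 0.119 0.086  0.0803;      % Medium
             0.251 0.193 0.147  0.1356;      % Minor
             0.448 0.594 0.705  0.727];      % Little
absChange = abs(diff(humanLoss, 1, 2));      % change at each successive refinement
maxChange = max(absChange, [], 1)            % = [0.146 0.111 0.022]: only 50 m -> 25 m is below 0.05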
4.4 SUMMARY
A more detailed grid-based risk mapping method for explosion events is proposed in this
chapter. This method uses a Bayesian network (BN) as a risk analysis tool to estimate the
consequences and related probabilities for each grid. Based on the results of all the grids, 3D
bar charts are formed to describe the risks of explosion loads, building damage, and human
loss.
A case study is conducted to demonstrate the applicability of the proposed method. From the
case study, it can be concluded that the method provides a more detailed risk analysis of a large
site with complex conditions. Meanwhile, the results of 3D risk mapping charts offer a clear
view of the potential risks, which is useful for risk and safety management during planning,
construction, and operation stages. A mesh convergence study was also conducted, and a grid
size of 50 m × 50 m was found to be most appropriate over a domain of 2 km × 2 km.
A simple BN with basic risk influence factors was constructed to evaluate the risks of explosion
loads, building damage, and human loss. The case study proved that BN is capable of dealing
with complicated inter-relationships between basic factors and consequences. Meanwhile,
since BN is flexible, extra consequences or risk factors, such as environmental concerns,
human factors, and safety barriers, can be easily added to the proposed BN.
CHAPTER 5. MULTI-LEVEL EXPLOSION RISK ANALYSIS (MLERA) FOR
ACCIDENTAL GAS EXPLOSION EVENTS IN SUPER-LARGE FLNG FACILITIES
5.1 INTRODUCTION
This chapter proposes a more efficient explosion risk analysis method for super-large offshore facilities.
As the demand for natural gas increases, it becomes necessary to develop offshore gas
reserves, which are normally located in small and remote fields. However, in those areas,
transporting the gas via a pipeline may not be technically feasible or economically viable.
to install. Therefore, a new kind of production facility called floating liquefied natural gas
(FLNG) has been proposed to make the development of small and remote fields in deeper water
possible. This kind of floating structure does not require much external support and allows for
the transformation of gas into a readily transportable form.
The FLNG facility is a multi-functional offshore structure that contains both gas processing
and liquefaction equipment as well as storage for the produced LNG (Aronsson, 2012). In order
to install all those processing, liquefaction, and storage units on a single ship, the FLNG ship
is designed to be super large, and the topside structure is highly congested. Figure 5.1 shows
the world’s first FLNG facility designed by Shell Global, the Prelude FLNG, which is 488 m
long and 74 m wide, weighing more than 600,000 tons fully ballasted, which is roughly six
times the weight of the largest aircraft carrier (Shell Global, 2016).
Figure 5.1 Shell Prelude FLNG (Shell Global, 2016).
Explosion risks are related to three critical conditions, which are confinement, congestion, and
ventilation. Since an FLNG facility processes and stores a large amount of flammable gas in a
relatively small and congested area compared to onshore LNG plants, higher explosion risks
exist on FLNG platforms. Meanwhile, compared to other congested offshore structures,
explosion events with much more severe consequences may occur due to the super-large space
on board, which allows a larger volume of gas cloud to be accumulated. Therefore, for this
kind of large and highly congested structure, explosion risks must be considered during the
design process and reduced to an acceptable level.
Among all the explosion safety assessment methods, an explosion risk analysis (ERA) is one
of the most widely used approaches to derive the accidental loads for design purposes. The
ERA has been extensively described by Vinnem (2011), and detailed guidelines on how to
perform ERA are provided by NORSOK Z013 (2001) and ISO 19901-3 (2014). Due to the
complex geometry and obstacles of the offshore structures, computational fluid dynamics
(CFD) tools such as FLACS (GEXCON, 2011) are normally involved in ERA. However,
Hocquet from Technip (2013) pointed out that one critical issue in applying ERA to FLNGs is
time constraints. Sufficient information to derive realistic design overpressures and accidental
loads depends on numerous CFD dispersion and explosion calculations, which normally
require an unacceptable computational time due to the large size and complex structures of
FLNGs and various uncertainties that must be considered.
This study aims at developing a multi-level explosion risk analysis method (MLERA) for
FLNGs, which classifies the FLNG into different subsections with different risk levels before
the detailed CFD simulations are conducted. The advantage of this method is that detailed
calculations are applied only to the areas with the highest risks, which shortens the CFD
computational time to a realistic and acceptable level. The MLERA includes three levels:
qualitative risk screening, semi-quantitative risk classification, and quantitative risk assessment.
Throughout the three levels of analysis, an exceedance curve of frequency versus overpressure
is formed, and the ALARP (as low as reasonably practicable) criterion is used to decide whether
the explosion risk is acceptable (NOPSEMA, 2015). Risk mitigation is required until the
explosion risk of the target area is as low as reasonably practicable.
Another challenge in assessing explosion risks for an FLNG facility is that there are neither
design rules nor industry standards available, as FLNG is a new technology (Paris & Cahay,
2014). Current standards such as UKOOA (2003), HSE (2003), and API (2006) provide
detailed guidelines on how to perform offshore explosion analysis and describe the analysis
process. However, as most of those guidelines were proposed based on fixed platforms, it may
not be appropriate to completely follow those standards to conduct an explosion risk analysis
for FLNG platforms. For example, if the risk screening process used for fixed platforms is
extended straightforwardly to FLNG facilities, all FLNG platforms remain at the highest risk
level, which makes the risk screening process useless.
Therefore, other than the traditional contributors from current standards such as confinement,
congestion, and ventilation, safety barriers are also involved in the risk screening and
classification processes of the proposed method as extra risk indicators since the current design
standards for normal offshore platforms are not sufficient for assessing the explosion risks of
super-large offshore structures. Safety barriers are normally used for both likelihood reduction
and consequence mitigation. Some of the important safety barriers used in the MLERA are
listed and briefly introduced in the following section.
5.2 MULTI-LEVEL EXPLOSION RISK ANALYSIS (MLERA)
A multi-level explosion risk analysis (MLERA) method is proposed by implementing a multi-
level risk assessment method into the traditional ERA method for offshore platforms.
The multi-level risk assessment method is extended from the framework used by the
Department of Planning & Infrastructure of New South Wales Government (2011), which was
used to formulate and implement risk assessment and land-use safety planning processes. It
aimed at ensuring that the risk analysis is conducted within an appropriate cost and timeframe
and is still able to provide high-quality results for the assessments. To achieve that, both
qualitative and quantitative approaches are required. Some key aspects of the three levels of
analysis from NOPSEMA (2012) are shown in Table 5.1.
Table 5.1 Key Aspects of Multi-Level Risk Analysis
Level 1: Preliminary qualitative risk screening
Likelihood and consequence are expressed on a scale and described in words.
There is no numerical value for risk output.
Often used as a preliminary risk assessment or screening tool.
Rapid assessment process and relatively easy to use.
Level 2: Semi-quantitative risk classification and prioritization
Generate a numerical value, but not an absolute value of risk.
Provides greater capacity to classify between hazards on the basis of risk.
Better for evaluating cumulative risk.
Level 3: Detailed quantitative risk assessment
Provides a calculated value of risk based on estimates of consequence (usually
software modelling) and likelihood (estimates based on failure rate data—site or
industry).
Good for more complex decision making or where risks are relatively high.
More time intensive and expensive than other methods.
The traditional ERA for offshore platforms is one of the most widely used approaches to derive
the accidental loads for design purposes. The ERA has been extensively described by
Vinnem (2011), and detailed guidelines on how to perform ERA are provided by NORSOK
Z013 (2001) and ISO 19901-3 (2014). As mentioned before, one of the critical issues of
applying ERA to FLNG platforms is the unacceptably long computational time. Due to
the huge size of the FLNG facilities, numerous CFD dispersion and explosion simulations are
required in order to acquire sufficient data to derive realistic design explosion loads.
Therefore, the multi-level method is used to improve the ERA process to decrease the
computational cost to a reasonable and acceptable level. The proposed MLERA method is a
systematic risk analysis approach that includes three assessment stages, which are qualitative
explosion risk screening as the first level, semi-quantitative explosion risk classification as the
second level, and quantitative explosion risk analysis as the third level. It aims at providing an
appropriate risk analysis method for explosion accidents on offshore super-large structures
such as FLNG facilities.
With regard to the key aspects of multi-level risk analysis given in Table 5.1, brief descriptions
of the proposed MLERA for FLNG platforms and the related analysis features of each level are
listed below, and detailed explanations of each step are discussed in the following sections.
Level 1: Qualitative risk screening
Qualitative description of critical risk contributors
Taking each FLNG facility as a whole as the analysis object
Using a risk matrix diagram to rank the risk level of an FLNG platform
Level 2: Semi-quantitative risk classification
Using a score and weight system to quantify each risk contributor
Estimating the risk of each FLNG subsection
Classifying the subsections by using a cumulative density function diagram
Level 3: Quantitative risk assessment
Combining ERA and FLACS to obtain quantitative results for explosion frequency
and consequences
Assessing the subsections with the highest risk levels. The number of subsections
requiring detailed assessment depends on the results from the analyses at the first two
levels.
The final result is indicated by an overpressure versus frequency exceedance curve.
The ALARP concept is used to check if the explosion risk of the corresponding
subsection is as low as reasonably practical.
Meanwhile, the proposed MLERA considers not only normal risk contributors such as
congestion, confinement, and ventilation but also safety barriers. Some of the safety barriers
that are involved in the proposed method are briefly introduced in Table 5.2.
Table 5.2 List of Explosion Safety Barriers
Blast relief panels: The overpressure can be diverted away from potential escalation sources by blast relief panels, which open quickly during an explosion in order to reduce peak overpressures.

Emergency shut down systems (ESD): An effective ESD system will limit the inventory released in an incident and therefore the size and duration of any resulting fire. The location of the ESD valves will determine the areas where each particular inventory could be released.

Isolation and blowdown: A leak may be reduced by isolating it manually or using the ESD system and depressurizing the leaking section using the blowdown system. Damage or fatality risk in escalation can be reduced by isolation and blowdown, and sometimes the necessity of evacuation may be avoided.

Blast wall: Blast walls have long been used to protect adjacent areas from the effects of overpressure. These walls are designed to absorb blast energy through displacement.

Water deluge: Deluge has been found to be suitable for reducing overpressure in congestion-generated explosions. If explosion mitigation is considered critical, a deluge flow rate of at least 13-15 L/min/m2 is recommended for general area coverage.

Artificial vent: Artificial ventilation is defined as ventilation that is not supplied by the action of the environmental wind alone. Upon detection of flammable gas, the standby fan(s) should be started to give the maximum possible ventilation in order to aid dilution of the leak and prevent or limit the generation of an explosive cloud.

Inert gas: Inert gas can be used to dilute the flammable mixture by flooding the volume within which the gas has been detected with, for example, CO2 or N2. The explosive gas can then be taken below its lower explosive limit.

Detection device: Detection measures can be used to identify hazardous conditions on the plant such as excess process pressure, an unignited release of flammable gas, or a fire. Detection devices enable control or mitigation measures and emergency response to be initiated.

Alarm: The alarm system may allow operators to mitigate leaks before they ignite or at least to evacuate the area.

Soft barriers: Progress is being made in the manufacture of soft barriers such as the micro-mist device, which consists of a cylinder of superheated water that is released quickly as a fine mist in response to pressure or flame sensors during an explosion. This device suppresses the explosion and significantly reduces overpressures.

Safety gap: In the process industry, the safety gap is an open space with no congestion, deliberately placed in between congested process areas. The absence of obstacles in a safety gap eliminates the fluid-obstacle interaction, thereby preventing the generation of turbulence. It can be very effective in reducing pressures prior to the onset of detonation.
5.2.1 First Level: Qualitative Risk Screening
The first level risk screening aims at defining the total qualitative risk level of an FLNG
platform and also offers a guideline for the next two levels of explosion risk analysis. In the
first level assessment of risk screening, not only are traditional risk screening indicators
considered, but safety barriers and design, operation, and maintenance philosophies are also
used to define a relative risk level for FLNG because the explosion risk will always be high if
only traditional risk screening methods are used for this kind of super-large and highly
congested structure. Based on API (2006) and UKOOA (2003), most of the qualitative risk
indicators of the traditional risk screening process are listed in Table 5.3.
Table 5.3 Traditional Risk Screening Indicators from Explosion Risk Standards
Consequence:

Low consequence:
Low congestion level due to the low equipment count, being limited to wellheads and manifold with no vessels (i.e., no associated process pipework)
No more than two solid boundaries, including solid decks
Unattended facilities with low maintenance frequency, less frequent than 6-weekly

Medium consequence:
Medium congestion level due to the greater amount of equipment installed compared to the low case
Higher confinement level than that for the low case
Unattended facilities with a moderate maintenance frequency, more frequent than 6-weekly
A processing platform necessitating permanent manning but with low escalation potential to quarters, utilities, and control areas located on a separate structure

High consequence:
High congestion level due to the significant processing on board, which leads to a high equipment count
High confinement level of the potential gas release point
Permanent manning with populated areas within the consequence range of escalation scenarios

Likelihood:

Low likelihood:
Low equipment and inventory count, which align closely with the consequence scenarios
Low frequency of intervention, less frequent than 6-weekly
No ignition sources within the potential gas cloud

Medium likelihood:
Greater amount of equipment installed than for the low likelihood
Medium frequency of intervention, more frequent than 6-weekly
Weak ignition sources, such as a hot surface, exist within the potential gas cloud

High likelihood:
A high equipment and inventory count
Permanently manned installations with frequent processing on board
Strong ignition sources exist within the potential gas cloud
Table 5.4 describes the more in-depth risk screening process that uses safety barriers and design,
operation, and maintenance philosophies as screening contributors. A modified risk matrix
diagram is illustrated in Table 5.5. From the modified diagram, it can be seen that only a relative
risk category is defined, and the results from this category will be used as a guideline for the
further assessment levels of the proposed MLERA.
Table 5.4 Risk Indicators Based on Safety Barriers
Consequence

A (Moderate):
Safety barriers covering most or all parts of the FLNG.
High design capacity of the structure to deal with dynamic pressure, overpressure, missiles, and strong shock response; no or minor structural damage would occur.

B (Major):
Safety barriers covering the structural critical elements only.
Medium design capacity of the structure to deal with dynamic pressure, overpressure, missiles, and strong shock response; a medium level of structural damage would occur without affecting the structural integrity.

C (Catastrophic):
No safety barriers, or safety barriers for the human living quarters only.
Low design capacity of the structure to deal with dynamic pressure, overpressure, missiles, and strong shock response; significant structural damage would occur and would affect the structural integrity.

Likelihood

1 (Almost certain):
No safety barriers, or safety barriers for the human living quarters only.
Low level of operation and maintenance philosophy, corresponding to a level considerably worse than the industry average.

2 (Likely):
Safety barriers covering only the critical potential release points.
Medium level of operation and maintenance philosophy, corresponding to the industry average.

3 (Possible):
Safety barriers covering all or most of the potential release points of the FLNG structure.
High level of operation and maintenance philosophy, corresponding to the best standard in industry.
Table 5.5 Risk Matrix Diagram for Further Risk Screening of FLNGs
                          Consequence of Failure
Likelihood of Failure     Moderate (A)              Major (B)                 Catastrophic (C)
Almost certain (1)        Relatively medium risk    Relatively high risk      Relatively high risk
Likely (2)                Relatively low risk       Relatively medium risk    Relatively high risk
Possible (3)              Relatively low risk       Relatively low risk       Relatively medium risk
5.2.2 Second Level: Semi-Quantitative Risk Classification
In this section, the second level of semi-quantitative risk classification is introduced. The
analysis at this level estimates the risk level of each subsection of an FLNG facility in order to
provide an assessment prioritization for the third level ERA. A score and weight system is
applied to each selected risk contributor so that the subsections are able to be classified by
numerical values.
Only some of the main risk contributors for offshore explosion events are selected and briefly
described in Table 5.6. Each contributor is evaluated by two elements, weight and score. The
weight of each risk factor is subjectively defined by the author based on relevant standards and
research (API, 2006; UKOOA, 2003; Bjerketvedt et al., 1997). This may be adjusted by the
safety engineers according to their own experience and the practical conditions of their projects.
Table 5.6 Weight and Score of Explosion Risk Contributors
Equipment count (weight 3): Leak frequency is proportional to the amount of process equipment on the platform. Score = number of equipment count.

Ignition (weight 7): In general, the main ignition sources are welding/hot work, compressors, electrical equipment, and engines/exhausts. A weak, continuous ignition source can sit and wait for the gas cloud to reach its flammable range. Score = 3 if a continuous ignition source exists; 2 if only a discrete ignition source exists; 1 if no or few ignition sources exist.

Flammable limit of process material (weight 4): The higher the upper flammable limit of a certain fuel, the easier it normally is to get a flammable cloud in the air. Flammability limits for fuel mixtures can be calculated by Le Chatelier's law, as shown in Equation 5.1. Score = 3 if the upper flammable limit > 40%; 2 if the upper flammable limit is between 10% and 40%; 1 if the upper flammable limit < 10%.

Congestion (weight 10): Explosion events are most likely to occur in congested areas, and, therefore, avoiding congestion in the modules can reduce both the probability and the overpressure of an explosion event. Table 5.7 defines the congestion level based on the congestion classification of the Baker-Strehlow-Tang model (Baker et al., 1996). Score = 3 if congestion is defined as high; 2 if medium; 1 if low.

Fuel reactivity (weight 4): The higher the laminar burning velocity, the higher the explosion loads will be. Score = 3 if the laminar burning velocity > 75 cm/s; 2 if between 45 cm/s and 75 cm/s; 1 if < 45 cm/s.

Confinement (weight 8): The ignition probability depends on the gas concentration and the ignition sources in the area. Low confinement is able to reduce the gas concentration. Score = 3 if the flame expansion is defined as 1D; 2 if defined as 2D; 1 if defined as 2.5D or 3D.

Distance to target area (weight 7): The distance to the target area may significantly affect the consequent load applied to the target area. Score = 3 if the distance is smaller than 1/3 of the total length of the structure; 2 if smaller than 2/3 of the total length; 1 if larger than 2/3 of the total length.
Table 5.7 Blockage Ratio Classification
                    Blockage ratio per layer
Obstacle layers     < 10%      10%-40%     > 40%
3 or more           Medium     High        High
2                   Low        Medium      High
1                   Low        Low         Medium
LFL_{Mix} = \frac{100}{C_1/LFL_1 + C_2/LFL_2 + \cdots + C_i/LFL_i}   (5.1)

where C_1, C_2, ..., C_i [vol.%] are the proportions of each gas in the fuel mixture without air
(Kuchta, 1985).
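Equation 5.1 needs only the volume fractions of the fuel components and their individual flammability limits. The following MATLAB sketch evaluates it for an assumed methane/ethane/propane mixture; the composition and the limit values are illustrative figures rather than data from this study.

% Le Chatelier's law (Equation 5.1) for an assumed fuel mixture
C   = [80 15 5];              % vol.% of each gas in the air-free fuel mixture (assumed)
LFL = [5.0 3.0 2.1];          % lower flammable limits of methane, ethane, propane [vol.%]
LFL_mix = 100 / sum(C ./ LFL) % approximately 4.3 vol.%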
Safety barriers are considered to be extra risk contributors in the semi-quantitative risk
classification process. All safety barriers are divided into three categories: barriers for likelihood
reduction, barriers for consequence mitigation, and barriers for both. Based on this classification,
safety barriers are given different weights, as shown in Table 5.8. The score is decided by the
quantity of each barrier applied to each module.
Table 5.8 Weight of Barriers based on Function Classifications
Barrier Classification Weight
Emergency shut down (ESD) system Likelihood reduction 6
Detection device Likelihood reduction 6
Water deluge Likelihood reduction 4
Inert gas Likelihood reduction 4
Safety gap Consequence Mitigation 3
Blast wall Consequence Mitigation 3
Blast relief panels Consequence Mitigation 3
Soft barriers Consequence Mitigation 3
Artificial Vent Both 9
Isolation and blowdown Both 9
Alarm Both 9
It can be seen from Table 5.8 that two different weights, 6 and 4, are defined for the likelihood
reduction barriers. This is because, although water deluge and inert gas are able to reduce the
flammability of the cloud and consequently prevent an explosion, they may enlarge the
consequences if an explosion does occur. Inert gas can pose a significant asphyxiation risk to
personnel, and a water deluge without proper design may increase turbulence in the affected
area and enlarge the blast loads. Therefore, these two barriers are given a lower weight than
normal prevention barriers unless careful design is demonstrated.
Then, the total weighted score of each subsection can be calculated by Equations 5.2 to 5.4:

S_T = S_C - S_B   (5.2)

S_C = \sum_{i=1}^{n} w_{e_i} s_{e_i}   (5.3)

S_B = \sum_{j=1}^{n} w_{b_j} s_{b_j}   (5.4)

where S_T refers to the total weighted score for each subsection, and S_C and S_B are the weighted
scores of the risk contributors and barrier functions, respectively, of each subsection.
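In script form, Equations 5.2 to 5.4 are two weighted dot products per subsection. The MATLAB sketch below uses the contributor weights of Table 5.6 and the barrier weights of Table 5.8; the individual scores are invented for illustration only.

% Total weighted score of one subsection (Equations 5.2-5.4)
w_e = [3 7 4 10 4 8 7];           % contributor weights (Table 5.6): equipment count, ignition,
                                  % flammable limit, congestion, fuel reactivity, confinement,
                                  % distance to target area
s_e = [6 2 2 3 2 2 3];            % illustrative contributor scores for one module (assumed)
w_b = [6 6 4 4 3 3 3 3 9 9 9];    % barrier weights (Table 5.8): ESD, detection device, water
                                  % deluge, inert gas, safety gap, blast wall, blast relief panels,
                                  % soft barriers, artificial vent, isolation and blowdown, alarm
s_b = [1 1 0 0 1 0 0 0 1 1 1];    % illustrative barrier quantities in the module (assumed)
S_C = sum(w_e .* s_e);            % weighted score of the explosion risk contributors (Eq. 5.3)
S_B = sum(w_b .* s_b);            % weighted score of the safety barriers (Eq. 5.4)
S_T = S_C - S_B                   % total weighted score of the subsection (Eq. 5.2)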
After the total score of each subsection is calculated, the total weighted scores are described
with a cumulative density function and are converted to a risk category with three levels, as
shown in Figure 5.2. The cumulative percentage is calculated from the total weighted scores
of all the subsections from the target FLNG platform.
Figure 5.2 Cumulative density function (CDF) for total risk score.
Figure 5.3 Application procedure of MLERA
Figure 5.3 describes the analysis process of the proposed MLERA and explains which
subsections require third-level risk quantification. The first level risk screening process divides
the qualitative results into three risk levels, which are relatively low, medium, and high risks.
If the FLNG facility is categorized with a relatively low explosion risk level, only the
subsections with the highest risks, which belong to category S1 (top 10%), need additional
detailed quantitative explosion risk assessment. From Figure 5.2, it can be seen that for an
FLNG facility with relatively low explosion risks, the number of category S1 subsections is
two. Otherwise, if a relatively medium or high risk is assigned, the subsections in categories S2
(top 50%) or S3 (top 90%), which correspond to 10 and 18 subsections respectively in Figure 5.2,
require risk quantification. Moreover, if all the subsections in one category fail the ERA, then the
subsections in the next category require further ERA as well.
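The selection rule of Figure 5.3 can be applied to the final scores of Table 5.9 with a short script. The MATLAB sketch below assumes the facility has been screened as having a relatively medium risk, so the top 50% of subsections (category S2) are passed on to the detailed ERA; cut-off fractions of 0.10 and 0.90 would be used for relatively low and high screening results.

% Select the subsections that require third-level CFD assessment (illustrative)
scores = [63 34 36 36 54 39 74 71 78 78 85 85];   % final scores from Table 5.9
frac   = 0.50;                                    % relatively medium risk: category S2 (top 50%)
n      = numel(scores);
[~, order] = sort(scores, 'descend');             % rank subsections, highest risk first
selected   = sort(order(1:ceil(frac*n)))          % module numbers needing detailed ERA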
5.2.3 Third Level: Quantitative Risk Assessment
This third level of quantitative risk assessment is a CFD software-based quantitative analysis
procedure. The process includes four main steps: leak frequency analysis, flammable gas
dispersion simulation, ignition probability modelling, and flammable gas explosion simulation.
Figure 5.4 shows the detailed quantitative analysis process applied to offshore structures by
using CFD tools such as FLACS.
Figure 5.4 Detailed quantitative assessment process.
After the quantitative ERA analysis is finished and an overpressure versus frequency
exceedance curve is drawn, the risk calibration method, ALARP, is used to define the risk
acceptance criteria. The ALARP framework for risk criteria is divided into three regions, as
shown in Figure 5.5.
An unacceptable region: In this region, risks are intolerable except in extraordinary
circumstances, and thus risk reduction measures are essential.
A tolerable region: It is normally known as an ALARP region, which means that the
risks are considered tolerable providing that they have been made as low as reasonably
practicable. In this region, risk reduction measures are desirable but may not be
implemented if a cost-benefit analysis shows that their cost is disproportionate to the
benefit achieved.
A broadly acceptable region: Risks in this region are tolerable, and no risk reduction
measures are required.
Figure 5.5 Application of the ALARP to the final results of MLERA
Figure 5.5 also shows an example of the ALARP application to the overpressure versus
frequency exceedance curve. As shown in the diagram, if the design strength of the primary
components of the FLNG corresponds to a predicted explosion load in the unacceptable region, risk
reduction measures are required until the design strength proves to be sufficient to resist the
explosion loads. No further reductions are required if the design strength falls in the broadly
acceptable region. For the ALARP region, reduction measures should be implemented unless the
cost proves to be grossly disproportionate to the benefit achieved.
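Once the exceedance curve is available, the ALARP check reduces to a frequency comparison at the design strength. The MATLAB sketch below is purely illustrative: the curve points, the design strength and the two frequency thresholds are assumed values, since no numerical acceptance criteria are stated here.

% ALARP check against an overpressure exceedance curve (all numbers assumed)
overpressure = [0.05 0.1 0.2 0.4 0.8];      % barg
frequency    = [3e-3 1e-3 3e-4 6e-5 8e-6];  % annual exceedance frequency at each overpressure
designStrength = 0.3;                       % barg, design strength of the primary components

fExceed = interp1(log(overpressure), log(frequency), log(designStrength));
fExceed = exp(fExceed);                     % frequency at which the design load is exceeded

upperLimit = 1e-3;                          % assumed intolerable-risk threshold (per year)
lowerLimit = 1e-5;                          % assumed broadly-acceptable threshold (per year)
if fExceed > upperLimit
    disp('Unacceptable region: risk reduction measures are essential.');
elseif fExceed > lowerLimit
    disp('ALARP region: reduce further unless cost is grossly disproportionate.');
else
    disp('Broadly acceptable region: no further reduction required.');
end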
5.3 CASE STUDY
For FLNG structures, cylindrical FLNG vessels are currently under consideration in order to
improve hydrodynamic stability. Figure 5.6 shows the geometry of a cylindrical FLNG
platform in FLACS. It can be seen that the cylindrical platform has a smaller footprint than a
rectangular one, and its highly congested subsections are concentrated more densely on the deck, which may
increase the explosion risks. However, little research exists on gas explosion risk
analysis for cylindrical platforms. Therefore, in this section, a cylindrical FLNG structure
proposed by Li et al. (2016) is used as the basic model to illustrate the proposed MLERA.
5.3.1 Qualitative Risk Screening of the Cylindrical FLNG
For the first level of the risk screening process, based on the concepts from API (2006) and
UKOOA (2003), the selected FLNG module is defined as a high-risk platform because it is a
permanently manned and highly congested offshore structure with a large amount of equipment
and inventories. Then, the next level of risk screening analysis is conducted. The conditions of
safety barriers and design, operation, and maintenance philosophies are defined and listed
below.
Safety barriers: As shown in Figure 5.7, safety gaps are applied to every module.
However, due to the lack of information about other safety barriers such as alarms,
detection devices, ESDs, and water deluges on this FLNG model, the condition of the
safety barriers on this platform is assumed to be at a medium level.
Design philosophy: This FLNG is a recently designed offshore structure. It is assumed
to have a high level of design philosophy because it is designed under the most recent
design standards.
Operation and maintenance philosophy: The standard of the operation and maintenance
philosophy is assumed to be medium, which refers to the average industry standard,
because no FLNG facility has yet been operated anywhere in the world.
Based on the conditions listed above, this cylindrical FLNG is defined as having a relatively
medium risk, which means that all subsections belonging to category S2 from the second level
of semi-quantitative risk classification require detailed assessment in the third step.
5.3.2 Semi-Quantitative Risk Classification
The subsections of the selected model are defined by 12 modules. Each module is assessed in
this risk classification process by applying the explosion and safety barrier contributors, which
are defined in Tables 5.6 and 5.8. The target area of consequence analysis is the human living
quarters. As shown in Figure 5.8, a cumulative density function diagram can be calculated
based on the final scores from Table 5.9.
Table 5.9 Scores and Weights of Risk Contributors

Subsection:                                    1    2    3    4    5    6    7    8    9   10   11   12
Total score of explosion contributors:        93   64   66   66   81   69  101  101  108  108  115  115
Total score of safety barrier contributors:   30   30   30   30   27   30   27   30   30   30   30   30
Final score:                                   63   34   36   36   54   39   74   71   78   78   85   85
During the second level of the risk classification process, some of the contributors have the same
score for different subsections. For instance, as can be seen in Table 5.9, the total scores of the
safety barriers are the same for most of the modules. This happens for two reasons. First, this
second level of risk classification is still a rough assessment of each module, which may lead to
one particular contributor receiving the same score for all modules. Second, a lack of detailed
information causes this problem. For example, in this case study, modules 5 and 7 have fewer
safety gaps than the other modules based on the design drawings of the proposed model. This is
the only difference in safety barriers that can be defined, and the other barrier scores for each
module are assumed to be the same due to the limited data. Therefore, the total barrier scores
for most of the sub-sections remain the same. Acquiring more detailed information for
the target structure would lead to a higher level of accuracy in this classification.
Figure 5.8 Cumulative density function diagram of subsections (cumulative density versus total weighted score).
From the cumulative density function diagram, it can be observed that the S2 category includes
six subsections, which are from modules 7 to 12. Therefore, six subsections require further
detailed assessment.
5.3.3 Detailed Quantitative Risk Assessment
As a medium risk level is defined for the selected FLNG during the first level of the qualitative
risk screening process, modules 7 to 12, which belong to category S2, require further detailed
assessment in this section. Therefore, a case study of detailed quantitative risk assessment with
FLACS is conducted here. However, due to the limitation of computing power, a simple
analysis model that was considered sufficient to demonstrate the proposed method was
built and analyzed.
In this model, three leak locations on subsections 7, 9, and 11 were selected for assessment,
and the final results were obtained by combining the analyses of these
three locations. The three selected locations are shown in Figure 5.9. Other specific
assumptions of this model are described below.
Four leak rates (12 kg/s, 24 kg/s, 48 kg/s, 96 kg/s) are simulated to study the possible
gas volume buildup in the comparison and design of the blast wall configurations.
In the simulations of dispersion leaks and explosion gas clouds, the inventory of the gas
composition inside the cylindrical FLNG platform is summarized in Table 5.10.
In this study, the assessment focuses on the living quarters with a protective blast wall
on the west side. The living quarters are located at the very east side of the FLNG
(Figure 5.7).
Wind speed and wind direction are fixed at +4 m/s from west to east in order to examine
the worst gas dispersion scenarios with such wind conditions.
Leak directions are modelled in both eastern and western directions.
Figure 5.9 Selected leak locations on the cylindrical FLNG platform.
Table 5.10 Gas Composition for Dispersion and Explosion Study

Component   Export gas
Methane     27%
Ethane      33%
Propane     15%
Hexane      19%
CO2         6%
Dispersion analysis
Based on the assumptions, the overall leak cases used in this chapter are listed in Table 5.11.
The gas monitor region for dispersion analysis covers all the modules on the cylindrical FLNG
platform.
Table 5.11 Various Leak Cases Determined for Dispersion Study
Wind
Wind Leak
Case speed Leak rate (kg/s) Leak orientation
direction position
(m/s)
1 West to east 4 12, 24, 48, 96 West end Along and opposite wind
2 West to east 4 12, 24, 48, 96 Middle Along and opposite wind
3 West to east 4 12, 24, 48, 96 East end Along and opposite wind
Figure 5.10 demonstrates several examples of dispersion simulation outputs for gas releases
with a leak rate of 48 kg/s. Those releases are simulated from both release directions, and leak
locations are set on the ground and in the middle of modules 7, 9, and 11.
(a) Leaks from module 11 with both wind directions
Explosion analysis
Explosion simulations are performed by using gas cloud data resulting from dispersion
simulations with leak rates of 12 kg/s to 96 kg/s. The gas clouds are situated in four different
locations, covering the entire platform, so that the overall gas explosion consequences for all
modules can be analysed. For all gas clouds, the plan-view size is fixed at 10,080 m²,
while the heights of the clouds vary, consistent with the gas dispersion results obtained
previously. For each gas explosion simulation, the gas cloud is ignited at ground level in the
center of the relevant module.
It can be seen in Figure 5.12 that each gas cloud covers four modules; about 200 monitor points are
uniformly distributed on the ground to record the overpressures in each gas explosion
simulation. By taking all the different gas leak rate scenarios, gas cloud sizes, and locations into
account, more than 3000 VCE overpressures are monitored in this probabilistic study of
the gas explosion simulations. Since a major interest of this study is to assess the condition of
the living quarters, 10 monitor points are assigned near the living quarters to record the
overpressures for each gas explosion scenario.
Figure 5.12 Overview of gas cloud coverage and ignition locations.
As shown in Figure 5.13, three explosion examples are simulated based on different leak rates:
96 kg/s, 48 kg/s, and 24 kg/s. The explosive gas clouds are set at the north and east ends of the
model. The ignition is in the center of the gas cloud located in the east and north ends of the
platform. The gas explosion blast is seen spreading from the ignition center to all surrounding
objects, and the maximum overpressures are observed in the congested region near the edge of
the gas cloud.
In order to consider the influence of blast walls, a blast wall is modelled in front of the west
end of the living quarters, and two monitors are set on each side of the blast wall. A large
overpressure of approximately 1.8 bar can be observed on the left side of the wall under the leak
rate of 96 kg/s, as shown in Figure 5.14. However, after the overpressure reduction by the blast
wall, the overpressure on the right side of the wall is only about 0.2 bar.
In order to consider all gas dispersion outputs as inputs to the gas explosion simulations, 120
explosion cases are numerically modelled. The 120 gas explosion cases correspond to the
former dispersion simulations, consisting of 4 leakage rates, 2 leakage directions, 3 gas release
locations, and 5 different series of blast wall layout designs (Li, 2015). The overpressure of
each case is calculated by FLACS, and the overall cumulative curve of the gas explosion
simulations is summarized in Figure 5.15. Equal frequencies are allocated to all monitored
overpressures for the living quarters, which are sorted from small to large.
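The construction of this cumulative curve can be sketched as follows (illustrative Python with placeholder overpressure values, not the actual FLACS output).

```python
# Build the cumulative curve of Figure 5.15: give each monitored overpressure
# an equal weight, sort from small to large, and plot the cumulative fraction.
import numpy as np
import matplotlib.pyplot as plt

overpressures = np.array([0.02, 0.31, 0.08, 0.15, 0.05, 0.27, 0.11, 0.19])  # bar, dummy data
sorted_op = np.sort(overpressures)
cumulative_fraction = np.arange(1, len(sorted_op) + 1) / len(sorted_op)

plt.step(sorted_op, cumulative_fraction, where="post")
plt.xlabel("Overpressure (bar)")
plt.ylabel("Cumulative density")
plt.show()
```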
Figure 5.15 Cumulative curve of overpressure for living quarters (cumulative density versus overpressure, 0–0.4 bar).
Frequency analysis
A simple illustrative explosion frequency calculation is performed in this section. The
exceedance curve of frequency against overpressure at the living quarters is formed by using
the monitored overpressures over 1000 scenarios.
To simplify the analysis process, the leak frequencies of different leak rates are assumed to be
the same. Based on the data from the Purple Book (Uijt & Ale, 2005), the leak frequency is
taken as 3.33 × 10⁻¹ per year. Moreover, based on the ignition intensities and the previously
performed dispersion simulations, the ignition probability is determined to be 0.36%. The
explosion frequency is consequently calculated by multiplying the leak frequency and the ignition
probability. Therefore, the total explosion frequency is approximated at 1.2 × 10⁻³ per year.
Consequently, the explosion risk regarding the living quarters subjected to overpressures of
VCE from the liquefaction modules is evaluated, and the probability of exceedance curves with
a frequency of 10⁻⁴ per year is shown in Figure 5.16.
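The arithmetic of this step is sketched below (values as quoted above; the helper for the exceedance frequency is a hypothetical illustration of how the curve in Figure 5.16 can be assembled, not the thesis code).

```python
# Explosion frequency = leak frequency x ignition probability.
leak_frequency = 3.33e-1       # per year, from the Purple Book
ignition_probability = 0.0036  # from ignition intensities and dispersion runs

explosion_frequency = leak_frequency * ignition_probability
print(f"Total explosion frequency: {explosion_frequency:.1e} per year")  # ~1.2e-3

# Exceedance frequency at an overpressure p: total explosion frequency times the
# fraction of monitored overpressures exceeding p (equal weighting of scenarios).
def exceedance_frequency(p, monitored_overpressures):
    fraction = sum(op > p for op in monitored_overpressures) / len(monitored_overpressures)
    return explosion_frequency * fraction
```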
Figure 5.16 Exceedance curve of overpressures around the living quarters for all leak rate scenarios (exceedance frequency, 10⁻⁶–10⁻³ per year, versus overpressure, 0–0.4 barg).
From Figure 5.16, it can be seen that the maximum overpressure is about 0.4 bar in the living
quarters. With the application of the ALARP concept, the broadly acceptable zone starts from 0.2 bar,
which means that no further risk reduction measure is required if the maximum strength of the
primary components of the FLNG is designed to be larger than 0.2 bar, which corresponds to
a frequency of 10⁻⁵ per year. Otherwise, risk reduction measures are required until the design strength
proves to be at least greater than 0.05 bar (corresponding to 10⁻⁴ per year) and the risk also proves to be as low
as reasonably practicable.
5.4 SUMMARY
In conclusion, a more efficient multi-level explosion risk analysis method (MLERA) is
proposed in this chapter. This method includes three levels of assessment, which are qualitative
risk screening for an FLNG facility at the first level, semi-quantitative risk classification for
sub-sections at the second level, and quantitative risk calculation for the target area with the
highest potential risks at the third level.
Since the current design standards for normal offshore platforms are not sufficient for assessing
explosion risks of super-large offshore structures, during the risk screening and risk
classification processes, safety barriers are used as extra risk indicators beyond the traditional
ones such as congestion, confinement, ventilation, etc. As mentioned, with only traditional
standards, FLNG platforms will always be defined as high risk. However, with the extra
contributors of safety barriers, the target FLNG facility is able to be defined as having relatively
low, medium, or high risks, which provides a possibility for further assessments.
For detailed quantitative risk assessment, a CFD software, FLACS, is used to model and
analyse the target FLNG platform. The results are shown as an exceedance curve, which
describes the possibilities of overpressure at the target area. Then, an ALARP method is
selected as a calibration tool to decide if the explosion loads from the exceedance curve can be
accepted or not. If the overpressure exceeds the acceptable limitation, it is necessary to install
safety barriers, and further assessments are required until the final results show that the risk is
reduced to an acceptable level and as low as is reasonably practical.
Throughout the three levels of risk assessment, the areas with the highest level of potential
risks are assigned to be assessed first and to decide if further assessment is necessary or not.
From the case study, it can be seen that only half of the subsections on the selected model
require detailed assessment using FLACS if the analysis focuses on the living quarters,
which means that a large amount of calculation time is saved.
CHAPTER 6. CONFIDENCE-BASED QUANTITATIVE RISK ANALYSIS FOR
OFFSHORE ACCIDENTAL HYDROCARBON RELEASE EVENTS
6.1 INTRODUCTION
In this chapter, in order to enable a more reliable risk evaluation, a confidence-based
quantitative risk analysis method is developed by implementing fuzzy set theory into traditional
event tree analysis. Hydrocarbon release-related risks will be the focus of this study because
hydrocarbon release plays a critical role in explosion accident risks of process facilities. To
evaluate the offshore hydrocarbon release risk, a barrier and operational risk analysis (BORA)
method (Aven et al., 2006) has been proven to be one of the most applicable and practicable
forms of QRA in the offshore oil and gas industry. Therefore, the BORA method is selected as
the basic model to demonstrate the confidence level-based method.
In order to assess the risks of offshore facilities, several methods have been widely used during
the last few decades such as hazard and operability study (HAZOP) (Kletz, 1999), preliminary
hazard analysis (PHA) (Vincoli, 2006), and failure mode and effect analysis (FMEA) (Stamatis,
2003). The concept of quantitative risk analysis (QRA) has also been increasingly widely used
to evaluate the risks in the offshore oil and gas industry. QRA is a quantitative assessment
methodology to evaluate the risks of hazardous activities systematically in order to assist the
decision-making process (Spouge, 1999). The world’s first requirement for offshore QRA was
issued by the Norwegian Petroleum Directorate (NPD) according to its “Guidelines for Safety
Evaluation of Platform Conceptual Design” in 1981 (Brandsater, 2002). After 30 years of
development, QRA has become one of the most important techniques for identifying major
offshore accident risks in accordance with worldwide regulations. For instance, under the UK
safety case regulations, QRA is one of the main methods for showing that the risks are as low
as reasonably practicable (HSE, 2006).
However, during the quantitative analysis process, uncertainties form some of the main
limitations of QRA. The uncertainties mainly come from two aspects for offshore QRA
(Spouge, 1999). First, as QRA is a relatively new technique, a large variation in study quality
will occur due to the lack of agreed approaches and poor availability of data. Second, although
QRA is assumed to be objective, subjective judgments are often involved in offshore risk
assessments due to the complex circumstances of oil and gas platforms. These subjective
judgments based on experts’ experience may lead to inaccurate risk estimates. In addition, the
extent of simplification made in the modelling of risks may also cause uncertainties (Vinnem,
2007).
Three of the most common approaches for representing and reasoning with uncertainties are
Monte-Carlo simulation (Vose, 1996), Bayesian probability theory (Bernardo, & Smith, 2009),
and fuzzy set theory (Zadeh, 1965). In this study, the uncertainties from subjective judgments
will be the main focus. Thus, the fuzzy set theory is assumed to be a proper choice due to its
suitability for decision-making with estimated values or experience-based judgments according
to imprecise information (Liu, et al., 2003). Therefore, a fuzzy set theory-based confidence
level method is proposed to deal with the uncertainties in accordance with experts’ subjective
judgments by incorporating confidence levels into the traditional QRA framework.
Since it is unrealistic to estimate the frequency of an accidental risk precisely using one definite
probability when safety experts are uncertain about the accuracy of their risk evaluation due to
uncertainties, it is assumed that the proposed confidence level method may be beneficial for
mitigating the influence of uncertainties and improving the reliability of QRA. Compared to
previous methods, this proposed method focuses on subjective judgments and divides the
expert's confidence into five levels by introducing a new form of fuzzy membership function. This
new L-R bell-shaped fuzzy number can be pictured as a group of modified fuzzy membership
curves that represent different confidence levels of the experience-based judgments.
Several existing studies have incorporated fuzzy set theory into conventional decision-
making and reasoning methods. Huang et al. (2001) provided a formal procedure for the
application of fuzzy theories to evaluate human errors and integrate them into event tree
analysis. Cho et al. (2002) introduced new forms of fuzzy membership curves in order to
represent the degree of uncertainties involved in both probabilistic parameter estimates and
subjective judgments. Dong & Yu (2005) used fuzzy fault tree analysis to assess the failure of
oil and gas transmission pipelines and a weighting factor was introduced to represent experts’
elicitations based on their different backgrounds of experience and knowledge. With regard to
the application of fuzzy concepts to the risk analysis of the oil and gas industry, Markowski et
al. (2009) developed a fuzzy set theory-based “bow-tie” model for process safety analysis (PSA)
to deal with the uncertainties of information shortages and obtain more realistically determined
results. Wang et al. (2011) proposed a hybrid causal logic model to assess the fire risks on an
offshore oil production facility by mapping a fuzzy fault tree into a Bayesian network. Recently,
Sa’idi et al. (2014) proposed a fuzzy risk-based maintenance (RBM) method for risk modelling
of process operations in oil and gas refineries. This study showed that the results of the fuzzy
model were more precisely determined in comparison to the traditional RBM model.
Rajakarunakaran et al. (2015) presented a fuzzy logic-based method for the reliability analysis
of a liquid petroleum gas (LPG) refueling station in order to model inaccuracy and uncertainty
when quantitative historical failure data is scarce or unavailable.
6.2 CONFIDENCE LEVEL-BASED BORA-RELEASE METHOD
6.2.1 Brief introduction of the BORA method
The BORA-Release method has been proposed to analyse the hydrocarbon release risks of
offshore structures from a set of hydrocarbon release scenarios based on the combined use of
event trees, barrier block diagrams, fault trees, and risk influence diagrams (Seljelid et al.,
2007). To conduct the BORA method, Aven, Sklet, & Vinnem (2006) described the process
using eight steps: (1) developing a basic risk model; (2) modelling the performance of barrier
functions; (3) assigning the industry average frequencies/probabilities to the initiating events
and basic events; (4) developing risk influence diagrams; (5) scoring risk influence factors
(RIFs); (6) weighting RIFs; (7) adjusting industry average frequencies/probabilities; and (8)
determining the platform-specific risk by recalculating the risk.
In comparison with the normal QRA method, the BORA-Release method allows risk analysis
experts to describe the specific conditions of offshore platforms from technical, human, and
operational, as well as organisational RIFs. The performance of the initial events and barriers
will be affected by the RIFs. Based on the evaluation of RIFs, a relatively more realistic
frequency/probability can be achieved because the platform specific conditions are considered.
However, there exist some uncertainties during the analysis of the BORA method. First,
uncertainties are unavoidable during the scoring and weighting process of RIFs because the
process is conducted mainly based on subjective judgments of risk analysis experts according
to their previous experience. Second, Sklet, Vinnem, & Aven (2006) pointed out that the
validity of the RIF scoring was evaluated to be low due to the limitation of the scoring methods.
Third, the imprecision and lack of data is another problem that increases the uncertainties of
the experts’ evaluation.
6.2.2 Application of the confidence level method to the BORA method
It is illustrated in this study that a confidence level-based methodology can be effectively used
to incorporate the uncertainties into the QRA model. A simple illustrative schematic capturing
the framework that needs to be followed in the implementation of the proposed method is
depicted in Figure 6.1.
[Flow: the lower-bound, revised average and higher-bound probabilities, together with a defined confidence level, form the triplet Ã = (a1, a2, a3) and the confidence factor n; these define the bell-shaped fuzzy membership function; an α-cut calculation with a defined optimism level and total integral defuzzification then yield the final probability.]
Figure 6.1 Schematic of the proposed confidence level method framework.
As mentioned in Section 6.2.1, since the RIF scoring and weighting process of the BORA method
highly depends on the experts' subjective judgments, the result may contain many uncertainties
if the data is insufficient or the scoring method is inappropriate. Thus, the proposed method
provides the experts with a measurement of their confidence levels to assist them in defining
the probability of hydrocarbon release accidents more accurately. The application of the
confidence level to the BORA model contains the following main steps:
Analysis using an L-R bell-shaped fuzzy number.
First, the adjusted results from the BORA method need to be applied to an L-R bell-shaped
fuzzy number, which can be pictured as a group of modified fuzzy membership curves to
represent different confidence levels of the experience-based judgments. The fuzzy number is
defined by a triplet Ã = (a1, a2, a3) and the membership function is shown in Equation (6.1).

$$
\mu_{\tilde{A}}(x) =
\begin{cases}
0 & \text{for } x < a_1 \\
e^{\,b\left(\frac{a_2 - x}{a_2 - a_1}\right)^{n}} & \text{for } a_1 \le x < a_2 \\
1 & \text{for } x = a_2 \\
e^{\,b\left(\frac{x - a_2}{a_3 - a_2}\right)^{n}} & \text{for } a_2 < x \le a_3 \\
0 & \text{for } x > a_3
\end{cases}
\qquad (6.1)
$$
where a2 is the center of the fuzzy membership curve, which represents the expert judgment value;
a1 and a3 are the lower and upper bound values respectively; n is the confidence factor;
and b is a boundary index used to control the boundary of the membership function so that the
membership is smaller than or equal to Δα when x = a1 or x = a3. To achieve this, the
boundary factor b needs to be equal to or smaller than ln Δα. An example of the bell-shaped curve
is depicted in Figure 6.2.
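A direct transcription of Equation (6.1) into code is given below as an illustrative sketch; the parameter values in the example are taken from Table 6.4 and the neutral confidence factor of Table 6.1, with b = −7 as used in the case study.

```python
# L-R bell-shaped membership function of Equation (6.1).
import math

def bell_membership(x, a1, a2, a3, n, b=-7.0):
    """a1/a3: lower/upper bounds, a2: expert judgment value,
    n: confidence factor, b: boundary index (b <= ln(delta_alpha))."""
    if x < a1 or x > a3:
        return 0.0
    if x < a2:
        return math.exp(b * ((a2 - x) / (a2 - a1)) ** n)
    if x == a2:
        return 1.0
    return math.exp(b * ((x - a2) / (a3 - a2)) ** n)

# Example: membership of x = 0.2 for barrier B1 with a neutral confidence factor.
print(bell_membership(0.2, a1=0.069, a2=0.37, a3=0.69, n=1))
```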
Figure 6.2 Example curve of a bell-shaped fuzzy number (membership versus probability, with a1, a2 and a3 marked).
Table 6.1 Category of confidence levels
Confidence level Description Confidence factor
1 Very confident 0.1
2 Confident 0.5
3 Neutral 1
4 Unconfident 2
5 Very unconfident 3
Figure 6.4 L-R bell-shaped fuzzy number curves with different confidence factors (n = 0.5, 1, 2, 3; membership versus probability).
In general, safety experts can define the confidence level of their judgments based on the degree of
uncertainty arising from four aspects (Cho et al., 2002): (1) the complexity of the judgmental
condition; (2) the level of education, assurance, and experience; (3) the condition of data
(sufficient/insufficient/none); and (4) the standard of the analysis method. The higher the degree
of uncertainty, the lower the confidence level.
Deciding the degree of optimism and defuzzifying the final fuzzy number
In order to match the 𝛼-cut operations and to acquire complete information, the defuzzification
method with a total integral value (Liou & Wang, 1992) is chosen and a factor, 𝛿, of the
optimism levels is used to represent the attitude of the decision-maker. Thus, for the L-R bell-
shaped fuzzy number, the total defuzzified integral value will be:
$$
I_T^{\delta}(\tilde{A}) = (1-\delta)\,I_L(\tilde{A}) + \delta\,I_R(\tilde{A}), \qquad \delta \in [0,1] \qquad (6.6)
$$

$$
I_L(\tilde{A}) = \sum_{\alpha=0}^{1} \alpha_i^{L}(\tilde{A})\,\Delta\alpha \qquad (6.7)
$$

$$
I_R(\tilde{A}) = \sum_{\alpha=0}^{1} \alpha_i^{R}(\tilde{A})\,\Delta\alpha \qquad (6.8)
$$

where I_L(Ã) and I_R(Ã) are the left and right integral values of Ã respectively, summed over the left and right bounds of the α-cut intervals; δ is the optimism factor; and I_T^δ(Ã) is the total integral value under the influence of δ.
When 𝛿 = 0, the total integral value represents the optimistic viewpoint of the decision-maker.
Alternatively, for a pessimistic or moderate decision-maker, 𝛿 equals 1 or 0.5 respectively.
The degree of optimism is also classified into five categories and the values of the optimism
factor, 𝛿, are given in Table 6.2. After determining the attitude, the safety engineers/managers
are able to find an appropriate probability of hydrocarbon release risks for their offshore
facilities. Finally, the total integral method is used to defuzzify the final result of the fuzzy
analysis and to apply the final probability to the BORA risk model.
Table 6.2 Category of optimism factors
Optimism level Description Optimism factor 𝛿
A Very optimistic 0.1
B Optimistic 0.3
C Neutral 0.5
D Pessimistic 0.7
E Very pessimistic 0.9
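As an illustrative sketch of this defuzzification step (the α-cut bounds are obtained here by inverting the two branches of Equation (6.1) numerically; this implementation detail is an assumption for illustration, not a prescription from the thesis):

```python
# Total integral value defuzzification of the bell-shaped fuzzy number,
# following Equations (6.6)-(6.8) with an alpha step of 0.01.
import math

def alpha_cut_bounds(alpha, a1, a2, a3, n, b=-7.0):
    """Left and right alpha-cut bounds from inverting the two branches of Eq. (6.1)."""
    t = (math.log(alpha) / b) ** (1.0 / n)      # relative distance from a2
    return a2 - t * (a2 - a1), a2 + t * (a3 - a2)

def total_integral_value(a1, a2, a3, n, delta=0.5, d_alpha=0.01):
    left = right = 0.0
    steps = int(round(1.0 / d_alpha))
    for i in range(1, steps + 1):
        alpha = i * d_alpha
        lo, hi = alpha_cut_bounds(alpha, a1, a2, a3, n)
        left += lo * d_alpha                    # Eq. (6.7)
        right += hi * d_alpha                   # Eq. (6.8)
    return (1 - delta) * left + delta * right   # Eq. (6.6)

# Example: initiating event B0 (triplet from Table 6.4) with a neutral attitude.
print(total_integral_value(a1=0.168, a2=1.064, a3=4.2, n=1, delta=0.5))
```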
6.2.3 Assumptions for practical implementation of the proposed method
For the practical implementation of the proposed method, the following assumptions need to
be noted:
The proposed method is assumed to be useful for dealing with uncertainties that are
related to subjective judgments.
The proposed fuzzy membership function is assumed to be formed by a triplet Ã = (a1, a2, a3).
Therefore, for the practical implementation of the proposed method, the experts may be
required to define the probabilities for the lower and higher boundaries.
The confidence and optimism levels in this study are assumed to be divided into five
levels. However, the number of the levels and the specific values of confidence and
optimism factors can be decided by the specific conditions of real projects.
The memberships of the two bounds, a1 and a3, are assumed to be 0. Therefore, to achieve
this assumption, it is suggested that Δα should be equal to or smaller than 0.01 in the
defuzzification process in order to make the membership of both bounds close to 0.
6.3 CASE STUDY
6.3.1 Application of the proposed method to the BORA model - Scenario B
For a better understanding of the application of the suggested procedure, a typical case study
is described here as an illustration, based on the case study of risk scenario B from the work of
Sklet et al. (2006). The barrier block diagram of risk scenario B is shown in Figure 6.5 and all
relevant results for scenario B (Sklet et al., 2006) are listed in Table 6.3.
Figure 6.5 Scenario B: Release due to incorrect fitting of flanges during maintenance.
Table 6.3 Scenario B: Industry probabilities/frequencies from the BORA method

Description                                                                      Event             P_ave    P_low     P_high   P_rev
Frequency of incorrect fitting of flanges or bolts after inspection per year    f(B0)^a           0.84     0.168     4.2      1.064
Probability of failure to reveal failure by self-control                        P_Failure(B1)^b   0.34     0.069     0.69     0.37
Probability of failure to reveal failure by third party control                 P_Failure(B2)^c   0.11     0.022     0.55     0.15
Probability of failure to detect release by leak test                           P_Failure(B3)^d   0.04     0.008     0.2      0.066
Total release frequency from scenario B per year                                 f_Btotal^e        0.0012   2.04E-6   0.32     0.0038

(P_ave: average probability; P_low: lower bound; P_high: higher bound; P_rev: revised probability.)
Figure 6.5 describes the basic event tree model for the incorrect fitting of flanges during
maintenance. There are three safety barriers in this scenario, and the initiating event and all
barriers are arranged in series. Thus, the only arithmetic operation required in this event tree
analysis is multiplication. From Table 6.3, it can be observed that the frequency
of the higher bound (0.32) is approximately 100 times larger than the revised average frequency
(0.0039) after the multiplication, and this huge difference represents the range of
uncertainties.
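Since the event tree reduces to a product of terms, the scenario B frequency can be reproduced directly from the revised values in Table 6.3 (a trivial illustrative calculation):

```python
# Scenario B: release frequency = initiating-event frequency x barrier failure probabilities.
f_B0 = 1.064   # revised frequency of incorrect fitting per year
P_B1 = 0.37    # failure of self-control
P_B2 = 0.15    # failure of third-party control
P_B3 = 0.066   # failure of leak test

f_B_total = f_B0 * P_B1 * P_B2 * P_B3
print(f"Total release frequency for scenario B: {f_B_total:.4f} per year")  # ~0.0039
```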
Table 6.4 shows that the values of P_low, P_rev, and P_high for the initiating event B0 and safety
barriers B1, B2, and B3 are applied to triplets B̃ = (b1, b2, b3). Equation (6.1) and the α-cut
operations are used to conduct the analysis of the fuzzy numbers with Δα = 0.01 and b = −7.
After defuzzification, the final modified probabilities based on confidence levels and optimism
levels are acquired using Equation (6.6).
Table 6.4 Values for triplets of initiating event and basic events

Event     b1      b2      b3
B̃(B0)    0.168   1.064   4.2
B̃(B1)    0.069   0.37    0.69
B̃(B2)    0.022   0.15    0.55
B̃(B3)    0.008   0.066   0.2
Table 6.5 represents the final modified industry probabilities after defuzzification according to
five confidence levels with a moderate attitude only. As can be observed, the modified
probability with the highest confidence level equals the revised frequency of the BORA method,
and as the confidence level decreases, the probability of hydrocarbon release increases.
However, since there are four components that require adjustments in the event tree analysis
of scenario B, the deviations among the frequencies modified by different confidence levels
become very large. As shown in Table 6.5, there is about a tenfold difference between
confidence level 1 (0.0039) and level 5 (0.034).
Table 6.5 Modified industry frequencies based on confidence levels only
Confidence level 1 2 3 4 5
Modified industry probabilities/frequencies 0.0039 0.0048 0.0085 0.021 0.034
6.3.2 Results discussion
From Table 6.3, it can be concluded that the BORA method considers the platform-specific
operational factors and quantifies those operational factors to modify the industrial average
value. However, according to the degree of data adequacy and the RIF scoring method standard,
the revised industrial frequencies may have certain amounts of uncertainties. Those
uncertainties will affect the result significantly when the higher bound of the industrial
probabilities is observed to be much higher than the revised frequency, for example, about 100
times higher in scenario B (0.32 for the higher bound and 0.0038 for the revised value). Therefore, it
is unrealistic to determine the likelihood of an initial event or the failure of barrier functions
using one definite value.
From Table 6.5, it can be observed that the confidence levels divide the revised industrial average
probabilities into five groups, from confidence level 1 to 5. With the highest confidence level,
the experts obtain the same probability as that acquired from the BORA method. However, for
lower levels of confidence, more uncertainties are considered, which leads to a higher frequency
of hydrocarbon release risks. Compared to the risk difference calculated by the BORA method
between the revised industrial average probability (0.0039) and the higher bound probability
(0.32), the difference has been decreased dramatically by defining the probability with the
lowest confidence level (0.034), as shown in the results. This reduction mitigates
the influence of uncertainties by about 10 times for the situation with insufficient experience and data.
Based on the five levels of confidence classification, the proposed method quantifies the
uncertainties into five ranges, and then assists the experts to estimate the risks more realistically
based on their specific confidence levels.
6.3.3 Sensitivity analysis of the optimism factor
Sensitivity analysis is performed for the optimism factor to observe the influence of different
attitudes on the final results of risk evaluations. Table 6.6 shows the results with different
optimism factors. It can be observed that the optimism factor has a greater effect
on the final frequencies when the confidence level is lower. For instance, the industrial
frequency remains the same when experts have the highest confidence level, while differences
of 0.0021, 0.0093, 0.031, and 0.053 occur when the confidence level equals 2, 3, 4, and 5
respectively.
Table 6.6 Final modified industry frequencies based on both confidence and optimism
levels
Optimism level      Confidence level:  1        2        3        4        5
A 0.0039 0.0037 0.0037 0.0051 0.0073
B 0.0039 0.0043 0.0061 0.013 0.0204
C 0.0039 0.0048 0.0085 0.021 0.034
D 0.0039 0.0053 0.011 0.028 0.047
E 0.0039 0.0058 0.013 0.036 0.060
Therefore, it can be concluded that the attitude of safety experts will have more effect on the
final risk estimates when the confidence level is lower. For example, in the risk analysis of
scenario B, if the experts are very confident about their risk assessments, it is unnecessary for
them to decide the optimism factor because the final result will remain the same, at 0.0039.
However, when the experts are very unconfident, the modified probability may change to 0.047
rather than 0.034 from the lowest confidence level in Table 6.6 if a conservative design is
favored, or a lower probability (0.0204) is also probable if it is evaluated that the platform
condition and operational standard are underestimated.
6.4 SUMMARY
In conclusion, QRA plays an increasingly important role in offshore risk analysis. However,
the accuracy and validity of QRA are significantly affected by uncertainties. This chapter
proposed a new methodology for incorporating uncertainties using confidence levels into
conventional QRA. It also introduced a new form of the bell-shaped fuzzy number, designed
to consider the degree of uncertainties that are represented by the concept of confidence levels,
since it is unrealistic to define the probability of an event by one single explicit value.
As mentioned in the results discussion, it can be seen that the influence of the uncertainties has
been reduced by approximately 10 times. Therefore, based on the results from the case study,
it is concluded that the proposed confidence level methodology can be very helpful for offshore
safety experts to improve the validity of risk evaluations by reducing the impact of subjective
judgment-related uncertainties. In addition, it also provides a useful tool for safety experts to
make more realistic and accurate risk estimates based on the confidence level evaluations of
their risk assessments.
When large-scale process systems are considered, the final risk may increase significantly if
the experts are very unconfident about their risk assessments at every step or most steps of the
large-scale process. Otherwise, if they are only unconfident about a few steps of the whole
process, the final risk estimates should not show a large difference from the initial results.
However, whichever condition occurs, the proposed method should be able to provide a more
realistic result because the differences are dominated by the uncertainties and larger quantities
of uncertainties cause larger differences.
CHAPTER 7. CONCLUSION REMARKS
7.1 MAIN FINDINGS
The thesis carries out a comprehensive study on the risk analysis of explosion accidents at oil
and gas facilities. An advanced Bayesian network–based quantitative explosion risk analysis
method is proposed to model risks from initial release to consequent explosions and human
losses because of the ability of the Bayesian network to reveal complicated mechanisms with
complex interrelationships between explosion risk factors. Another major concern of the
present study is gas explosion risk analysis for process facilities close to residential areas,
which may lead to catastrophic consequences. A grid-based risk-mapping method is developed
to enable efficient and reliable explosion risk analysis for large areas under complicated
circumstances. A multi-level explosion risk analysis method (MLERA) is developed for
assessing explosion risks of super-large oil and gas facilities with highly congested
environments. Finally, a confidence level–based approach is proposed for incorporating
uncertainties into conventional risk analysis using the concept of fuzzy theory.
A more accurate BN-based quantitative risk analysis method is developed for explosion and
fire accidents at petrol stations. Two case studies show that petrol leaks may lead to human
loss such as death or injury, with probabilities of approximately 3% and 2% when an explosion
or a fire occurs, respectively. The case studies prove that the BN can deal with complicated
interrelationships and provide a more accurate risk analysis. Sensitivity studies are conducted
in this research, and the results indicate that smoking is the most dangerous ignition source,
overfill is the most probable cause of release while hose rupture may cause the most
catastrophic consequences, refueling jobs are better conducted at nighttime, and tanker
explosions and fires at petrol stations increase human losses.
A more detailed grid-based risk-mapping method is established for explosion events. This
method uses a Bayesian network (BN) as a risk analysis tool to estimate multiple consequences
and related probabilities for each grid. Based on the results of all the grids, 3-D bar charts are
formed to describe the risks of explosion loads, building damage, and human loss. A case study
is conducted to demonstrate the applicability of the proposed method. From the case study, it
can be concluded that the method provides a more detailed risk analysis of a large site with
complex conditions. Meanwhile, the results of 3-D risk-mapping charts offer a clear view of
potential risks, which is useful for risk and safety management during the planning,
construction, and operation stages. A mesh convergence study was also conducted, and a grid
size of 50 × 50 m was found to be most appropriate over a domain range of 2 × 2 km.
A more efficient multi-level explosion risk analysis method (MLERA) is developed for
massive oil and gas facilities. This method includes three levels of assessment, which are
qualitative risk screening for an FLNG facility at the first level, semi-quantitative risk
classification for subsections at the second level, and quantitative risk calculation for the target
area with the highest potential risks at the third level. Throughout the three levels of risk
assessment, the areas with the highest level of potential risks are assigned to be assessed first
and to decide if further assessment is necessary or not. From the case study, only half of the
subsections on the selected model require detailed assessment using FLACS if the analysis
focuses on the living quarters, which means that a large amount of calculation time is saved.
A more reliable risk analysis method for incorporating uncertainties using confidence levels
into conventional QRA is proposed. It also introduced a new form of the bell-shaped fuzzy
number, designed to consider the degree of uncertainties that are represented by the concept of
confidence levels. From the case study, the influence of the uncertainties has been reduced by
approximately 10 times. Therefore, the proposed confidence level methodology can be very
helpful for safety experts to improve the reliability and validity of risk evaluations by reducing
the impact of subjective judgment–related uncertainties.
7.2 RECOMMENDATIONS FOR FUTURE WORK
Further investigation could be made in future studies as follows.
First, the application of BN modelling can be widely extended to other scenarios of risk
analysis, and if required, more consequences such as environmental concerns and business
losses, or more risk factors such as human factors and safety barriers, can also be easily added
to the current BN to enable a more detailed explosion risk assessment. Meanwhile, the
explosion risk assessment method proposed in Chapter 3 can be validated if detailed
information on the environments of oil and gas facilities and the damage incurred in explosion accidents
becomes available. In addition, BN modelling can be further improved by reducing the
uncertainties, which may significantly affect the reliability of the BN. Therefore, future studies will
focus on methods that can mitigate uncertainties caused by data shortage, subjective
judgments, and modelling errors and, consequently, improve the accuracy and reliability of the BN.
Second, for explosion risk analysis of large oil and gas facilities with complex structures and
environments, it is suggested that the grid-based risk analysis method provides a more detailed
and accurate risk analysis compared with traditional QRAs by simplifying the complicated
conditions through the gridding process. A computer program was written to conduct BN
calculations automatically for all the grids to ensure the efficiency of the proposed model.
However, the efficiency of the present method can be further improved by image identification
technologies, because the current assessment of building damage and human loss is handled
manually. Meanwhile, since the GIS output of PHAST results can only be depicted as circled
regions, the overpressure at each grid has to be defined manually as well. Therefore, future
work in this part aims at developing a completely automatic grid-based risk analysis method
by implementing other advanced image-processing technologies.
Third, in terms of the multi-level method, it has been successfully applied to a cylindrical
offshore FLNG platform. However, since the selected platform is newly designed and
information for explosion risk screening and classification is not sufficient, the result of the
current case study is quite conservative; only half of the calculation time can be saved. It can
be further improved when more detailed information is acquired. In addition, it is probable for
the proposed method to be applied to other large offshore or onshore process facilities.
Nevertheless, weights and scores of explosion risk contributors and barriers should be adjusted
based on the specific conditions of real projects.
Furthermore, since the confidence-based method can effectively deal with uncertainties related
to experts' judgments for the BORA method, it is expected that the proposed methodology also
has the ability to be successfully applied to various types of QRAs with only minor
modifications for the specific characteristics of each QRA. However, it needs to be clarified that
the confidence level factors presented are only an illustrative example; they barely consider any
practical conditions of actual offshore hydrocarbon release risk analysis. Thus, future work will
focus on the establishment of a detailed confidence level category in accordance with real
situations for hydrocarbon release QRA to provide the risk analyst with a more applicable and
operable methodology.
Meanwhile, beyond the possible future improvements to the work presented in this thesis, some
broader issues and challenges in the related research area are also discussed here.
First, in order to enable a reliable risk analysis, accurate prediction of gas explosion
overpressures is very important. However, the current numerical methods used to simulate gas
explosions are not able to reach a high level of accuracy. Empirical models are simple models
derived from experimental results and can barely consider the influence of nearby structures.
CFD methods are more advanced than empirical methods; a wide range of geometrical
arrangements and conditions in the gas cloud can be considered by discretizing the solution
domain in both space and time. However, CFD methods are not yet fully developed. For example,
the far-field transport of the fluid flow caused by the initial gas explosion cannot be calculated.
Therefore, further development of numerical methods may improve explosion risk analysis
significantly.
Second, data is a big issue for the current study of risk analysis. As gas explosion events occur
rarely compared to other extreme events such as fires or earthquakes, the data that can be
collected is limited from the outset. Meanwhile, records of past explosion accidents are hard to
find. Throughout this PhD study of gas explosion risk analysis, data shortage has been one of
the most critical issues. Therefore, developing a publicly accessible database of explosion
events would be very valuable for future risk analysis.
Third, the current risk analysis of gas explosions requires a high level of practical experience.
As the explosion processes at oil and gas facilities are quite complicated, familiarity with
oil and gas structures and equipment is critical to a reliable gas explosion risk analysis.
However, due to the fast development of oil and gas facilities, there may not be enough
experienced engineers or technicians to conduct risk analysis for all of the facilities. Therefore,
developing risk analysis methods that can be easily understood and applied will be another
important task for future study.
Department of Consumer and Employment Protection (2005), Dangerous Good Incidents
Logs 2005. Government of Western Australia, Australia.
Department of Consumer and Employment Protection (2006), Dangerous Good Incidents
Logs 2006. Government of Western Australia, Australia.
Department of Consumer and Employment Protection (2007), Dangerous Good Incidents
Logs 2007. Government of Western Australia, Australia.
Department of Minerals and Energy (1996), Explosives and Dangerous Goods Act 1961
Summary of Accident Reports 1996. Government of Western Australia, Australia.
Department of Minerals and Energy (1997), Explosives and Dangerous Goods Act 1961
Summary of Accident Reports 1997. Government of Western Australia, Australia.
Department of Minerals and Energy (1998), Explosives and Dangerous Goods Act 1961
Summary of Accident Reports 1998. Government of Western Australia, Australia.
Department of Minerals and Energy (1999), Explosives and Dangerous Goods Act 1961
Summary of Accident Reports 1999. Government of Western Australia, Australia.
Department of Minerals and Energy (2000), Explosives and Dangerous Goods Act 1961
Summary of Accident Reports 2000. Government of Western Australia, Australia.
Department of Mines and Petroleum (2008), Overview of dangerous goods incident reports
2008, Government of Western Australia, Australia.
Department of Mines and Petroleum (2009a), Overview of dangerous goods incident reports
2009, Government of Western Australia, Australia.
Department of Mines and Petroleum (2009b), Fuel tanker fire at Maddington 15 May 2009,
Government of Western Australia, Australia.
Department of Mines and Petroleum (2010), Overview of dangerous goods incident reports
2010, Government of Western Australia, Australia.
Department of Mines and Petroleum (2011), Overview of dangerous goods reportable
situations and incidents 2011. Government of Western Australia, Australia.
Department of Mines and Petroleum (2012), Overview of dangerous goods reportable
situations and incidents 2012. Government of Western Australia, Australia.
Department of Mines and Petroleum (2013), Overview of dangerous goods reportable
situations and incidents 2013. Government of Western Australia, Australia.
Department of Mines and Petroleum (2014), Overview of dangerous goods reportable
situations and incidents 2014. Government of Western Australia, Australia.
Thesis declaration
I, William Robert Johnson, certify that:
• This thesis has been substantially accomplished during enrolment in the degree.
• This thesis does not contain material which has been submitted for the award of any other degree
or diploma in my name, in any university or other tertiary institution.
• No part of this work will, in the future, be used in a submission in my name, for any other
degree or diploma in any university or other tertiary institution without the prior approval of
The University of Western Australia and where applicable, any partner institution responsible
for the joint-award of this degree.
• This thesis does not contain any material previously published or written by another person,
except where due reference has been made in the text and, where relevant, in the Authorship
Declaration that follows.
• The work(s) are not in any way a violation or infringement of any copyright, trademark, patent,
or other rights whatsoever of any person.
• The research involving human data reported in this thesis was assessed and approved by The
University of Western Australia Human Research Ethics Committee (exemption RA/4/1/8415, approval
RA/4/1/2593); Curtin University Human Research Ethics Committee (approval HRE2016-0472); and
Liverpool John Moores University Research Ethics (approval 17/SPS/043).
• This thesis contains published work and/or work prepared for publication, some of which has
been co-authored.
Candidate signature:
William Robert Johnson 14-Oct-2019
Student 21761839
LIST OF ABBREVIATIONS
Abbreviation Definition
BTK Biomechanical ToolKit
Caffe Convolutional Architecture for Fast Feature Embedding
CAST Calibrated anatomical systems technique
CIFS Common Internet File-System
CNN Convolutional neural network
CPU Central processing unit
CUDA NVIDIA Compute Unified Device Architecture
cuDNN NVIDIA CUDA Deep Neural Network library
DLT Digital linear transformation
FFT Fast Fourier Transform
FL Flat, sagittal foot orientation
GPS Global positioning system
GPU Graphical processing unit
GRF, F, −x/y/z/mean Ground reaction forces, 3D components & mean
GRM, M, −x/y/z/mean Ground reaction moments, 3D components & mean
HDF5 Hierarchical data format version 5
ID Inverse dynamics
IEEE Institute of Electrical and Electronics Engineers
IMU Inertial measurement unit
IR Infrared
IRDS Institutional Research Data Store
ISB International Society of Biomechanics
ISBS International Society of Biomechanics in Sports
IVSLRC ImageNet Large Scale Visual Recognition Challenge
KJM, L/R, −x/y/z/mean Knee joint moments, left/right, 3D components & mean
LJMU Liverpool John Moores University
LSTM Long Short-Term Memory
MARG Magnetic angular rate gravity device
MEMS Micro-electro-mechanical systems
MIMU Magnetic and inertial measurement unit
MTU Maximum transmission unit
NAS Network attached storage
Chapter 1
Introducing the virtual force plate
1.1 Introduction
Arguably, one of the strongest criticisms of sport biomechanics is that in-competition measurements
which may be invaluable to a coach and athlete are only available in controlled laboratory environments.
Despite being intrusive to the participant, methods such as optical motion capture have traditionally
prevailed, not only due to the speed and complexity of human movement but also due to the dogma of
participant-specificity [Chiari et al., 2005; Winter, 2009]. Research examining less invasive methods
of deriving kinematics, such as non-optical motion capture (e.g. inertial measurement units, IMU),
has primarily focused on clinical applications comprising movements with fewer degrees of freedom
and limited capture environments. Such research has also published conclusions that are primarily
qualitative in nature and make limited statistical comparison against gold standard marker-based
motion capture systems [Abrams et al., 2014; Biasi et al., 2015; Bolink et al., 2015; Lenar et al., 2013;
Slyper and Hodgins, 2008; Vlasic et al., 2007; Wong et al., 2007]. While development and adoption of
new wearable technologies has expanded at a rapid rate, the current level of fidelity and resolution of the
data provided by such systems renders them relatively unsuitable for on-field advanced biomechanical
analyses [Bolink et al., 2015]. In the complementary domain of computer science, progress has been
accelerating in the realm of artificial intelligence (AI), defined as where the computer program improves
itself over time [Langley, 1986]. A computational sub-branch of AI, machine learning, which uses a
large number of samples to construct layers of non-deterministic hierarchy is deep learning, a technique
which has recently achieved step-change improvement in use-cases from face recognition to self-guided
flight [Abbeel et al., 2010; Krizhevsky et al., 2012; Russakovsky et al., 2015; Taigman et al., 2014].
Today, the general increase in computing power, the miniaturization of components, and the growing
maturity of deep learning offers exciting new opportunities to move biomechanics from the laboratory
to the field. Usefully, the sparse nature of marker-based motion capture data (e.g. limited gray-scale
information) is well suited to a deep learning approach and the historical motion capture archives
of The University of Western Australia (UWA) and its partner organizations present an untapped
opportunity for this purpose [Bartlett, 2006; Krizhevsky et al., 2012; Russakovsky et al., 2015; Wong
et al., 2007].
1.2. RESEARCH OVERVIEW
1.2 Research overview
The research consists of four cascading studies. Study one tested the relationship between marker-based
motion capture and ground reaction forces and moments (GRF/M) using conventional statistical
methods. GRF/M was estimated during walking, running and sidestepping trials and compared against
experimental ground truth data collected concurrently from a force plate during the motion trials (i.e.
collected at the same time as the 3D trajectories used as the model input). Study two tested a number
of competing convolutional neural network (CNN) models to investigate which performed the strongest
with the data cohort. In study three, a CNN model was developed to predict knee joint moments that
historically would be calculated using inverse dynamics techniques, thereby increasing the portfolio of
deep learning models which can be used in lieu of labor intensive and time consuming biomechanical
instrumentation and modeling protocols. The final translation study four, changed the input driver to
wearable sensor accelerations, demonstrating the potential to free the analysis from the laboratory,
and highlighting the ultimate on-field application of the research.
The thesis is presented as a series of papers prepared for publication (Table 1.1) as recommended
by the graduate research school at UWA, and with each chapter commencing with an explanatory
preamble to link the research studies. Having been submitted as standalone papers, however, the
reader may encounter inevitable repetition in methods. North American spelling conventions have
been adopted throughout because of the dominance of work published in this region.
This research project is one of the first to report the use of deep learning models in lieu of
biomechanical instrumentation and analyses, while retaining multidimensional fidelity, accuracy, and
validity to ground truth. The application of machine learning to interpret the data stream from motion
capture marker trajectories and accelerations, to estimate associated GRF/M and inverse dynamics,
using a non-invasive and potential on-field approach, is a paradigm shift in the sports biomechanics
domain. The methodological workbench outlined has broad practical application for the monitoring of
human movement outside of traditional laboratory-based settings.
1.3 Research aims & hypotheses
1.3.1 Study one
• Chapter 2: Predicting athlete ground reaction forces and moments from motion capture.
This first investigation seeks to determine whether the interpretation of mass and acceleration
(via motion capture marker data) and force (recorded from a force plate) is complete enough that
partial least squares (PLS) regression can establish a strong relationship between these variables. A target correlation r > 0.80
using conventional linear statistics is the underlying goal for this investigation. A secondary hypothesis is that sparse
PLS will return the strongest predictor (motion capture plus mass, sex and height) to output
(GRF/M) response.
1.4 Limitations & delimitations
• Population. Participants used for training data were limited to a young healthy athletic
population (amateur to professional, male 62.7 %, female 37.3 %, height 1.764 ± 0.100 m, and
mass 73.3 ± 20.4 kg) saved to the UWA motion capture archive over an 18–year period from
2000–2017.
• Marker/sensor location. The aim of the marker (and sensor) placement was to capture
physical movement completely enough for the model to build the relationship between the input
and force plate output within systematic constraints e.g. skin motion artefact. Prototypes were
tested with different numbers and locations of markers before eight were settled upon (studies
one/two/three). For study four, inter-laboratory test data was used as an opportunity to evaluate
the workbench under more adversarial conditions, and in this case, the locations of the five
sensors were determined by the host institution, using its own data capture procedures. The use
of preferred sensor locations would be expected to improve model performance.
• Multiple data interpolation procedures. Depending on the study investigation, different
approaches to data representation required multiple combinations of interpolation procedures.
All studies time-normalized input to the stance phase of the walking, running or sidestepping
task (i.e. where contact-driven GRF/M occurs). The fine-tuning process from the architecture
of the selected CNN also required interpolation of frames and marker/sensor locations into 2D
images. In study four, it was theorized that the down-sampling of accelerometer data (to fit the
input data structure) was a likely contributor to a combined smoothing effect most notably from
foot-strike.
The following delimitation conditions were actively imposed on the studies.
• Foot-strike and sidestepping technique. Early prototypes achieved excellent correlations
with no regard for foot-strike technique (heel or forefoot strike), or whether the sidestep was
planned, unplanned, footing crossover or regular. This agnostic approach was maintained,
although study three did estimate sagittal plane foot orientation (heel down, flat, and toe down)
for information purposes, and because without shank marker locations from which to calculate the
ankle joint angle, the biomechanically accepted kinematic definitions of heel strike and forefoot
strike were unavailable. More advanced data representation allowed studies three and four to
remove sidestepping crossover trials prior to model training.
• Force plate events. In order to validate the models by comparing calculated and predicted
GRF/M and KJM over standardized definitions of stance phase, it was necessary for the training
and test sets to include gait event information derived from the force plate, thus violating the aim
of on-field operation. This requirement would not extend to real-world use, where it is envisaged
that the output would be predicted from a moving window taken from the continuous stream of
input kinematics. The operation would occur in real-time, delayed only by the first pass, and
assuming the CNN prediction takes less time than the length of the moving window. In the case
of Study four, driven by wearable sensor accelerometers, mounting the IMUs more distally on the
foot segment will enable foot-strike determination from the acceleration data, an approach
1.5 Literature review
1.5.1 Background
The pursuit of understanding the forces acting on a body (kinetics) begins with the measurement of
the rotation and translation of the joints (kinematics) [Bartlett, 2007; Manal and Buchanan, 2002;
Moeslund et al., 2006]. Artists and philosophers have long realized the shortcomings of the human
eye when attempting to understand high-speed motion in nature [Allard et al., 1997]. Even what
we might consider today as simple movement patterns have been subjected to vigorous debate and
misinterpretation and it was not until the late 19th century that technology began to address these
aspirations with the invention of photography, and the ability to record the moving image. This capture
of motion was the enabler for a new science of biomechanics, which sought to unlock the secrets of why
the body moves, via recording how the body moves.
Biomechanics underpins many of the disciplines of sport science (e.g. strength and conditioning,
physiology, and sport psychology), and contributes to movement optimization and performance
enhancement, reducing the risk of injury, and improving rehabilitation outcomes [Bartlett, 2007;
Corazza et al., 2010; Winter, 2009]. As passive optical marker-based technology became the recognized
approach for 3D motion capture, embedded force plates were likewise adopted for the simultaneous
recording of ground reaction forces and moments (GRF/M) [Chiari et al., 2005; Winter, 2009], and
despite the recent commercialization and volume deployment of wearable sensor devices, laboratory-
based systems remain the gold standard for biomechanical analyses.
However, in the last seven years there has been a new revolution, this time in data science, enabled
by improvements in processing capacity and a new scale of data sets. With the potential to build
learning models and virtualize the force plate, this revolution could enable biomechanics to finally
escape the laboratory [LeCun et al., 2015].
This literature review tells the story of each of these three components, the motion capture input, the
force plate output, and the possibility of a machine learning the relationship between these fundamental
biomechanical variables.
1.5.2 History of optical motion capture
In 1836, Wilhelm and Eduard Weber used a mechanical apparatus to record human gait characteristics
in a single plane of motion. Etienne-Jules Marey achieved the same with the wing of a dove in 1868.
Eadweard Muybridge was then credited with the recording of movement using photography, first in 1872 and subsequently in 1878, to prove a horse in trot has all its feet off the ground at the same time. In 1879, he went on to develop and demonstrate synchronized triggering of multiple cameras (what we refer to today as ‘genlock’), facilitating synchronous capture of a subject from multiple different angles (Figure 1.1).
Figure 1.1: Multiple exposures via synchronized shutters [Muybridge, 1887].
In 1885, Marey extended Muybridge’s work with the first documented use of marker-based motion capture and background segmentation, using high-contrast clothing and markers of reflective strips of material and buttons (Figure 1.2) to produce animations of running avatars. Marey developed a method of using a mirror to scan successive frames onto a photographic plate and then to move light-sensitive film through the back of a camera gun, before the more famous invention of cinema by the Lumière brothers in 1895 [Allard et al., 1997; Andriacchi and Alexander, 2000; Braun, 2010; Chiari et al., 2005; Winter, 2009].
By the 1970s, analog film and television were commonplace but required laborious effort to digitize points of interest. However, progress in digital motion capture techniques followed the increases in processing speed and storage capacity of computer systems, and today optical sensors have frame rates capable of capturing high-speed action with direct mapping of high resolutions [Payton and Bartlett, 2008].
Figure 1.2: High-contrast clothing used for early motion capture [Braun, 1994]. “Demeny dressed in black in preparation for geometric chronophotography, costume of 1884, Album A, plate 12, Beaune.”
Although they remain the recognized gold standard, marker-based systems are expensive, require expert skills to set up and operate, and have particular environmental constraints, all of which have contributed to their dominant use in clinical and academic research settings rather than in the field [Bartlett, 2007; Bolink et al., 2015; Chiari et al., 2005; Elliott and Alderson, 2007; Wong et al., 2007]. Two major problems are frequently reported: difficulty in placing markers and the error introduced by soft tissue
artefact. Identification and application of markers require anatomical knowledge and are prone to human error, even when strict guidelines are in place [Besier et al., 2003; Gorton et al., 2009; Leardini et al., 2005]. The movement of skin and soft tissue between the marker and the underlying skeleton creates noise that is difficult to filter because of its similarity with the target kinematic measurements [Leardini et al., 2005; Mündermann et al., 2006]. Techniques have been developed to try and minimize marker location and associated skin artefact error, including the calibrated anatomical systems technique (CAST), which improves the identification of anatomical landmarks by using a pointer approach [Besier et al., 2003; Cappozzo et al., 1995; Elliott and Alderson, 2007; Leardini et al., 2005].
Modern optical motion capture methods still impose many limitations on the local environment for optimal system performance (Table 1.2).
Table 1.2: Comparison of motion capture use-cases. The middle three columns are use-cases (task conditions).
Motion capture method | Laboratory | Reduced field of play | In-competition | Source kinematic fidelity
Optical marker, passive retro-reflective | Yes | Yes¹ | No | High
Optical markerless | Yes | Yes | Yes | Medium
Inertial measurement unit (IMU: accelerometer, gyrometer, magnetometer) | Yes | Yes | Yes | Medium
Global positioning system (GPS)² | No | Yes | Yes | Medium
Radio-frequency identification (RFID) | Yes | Yes | Yes | Low
¹ With outdoor infrared cameras. ² Requires satellite line-of-sight and/or indoor location beacons.
How close this artificial setup comes to reproducing game or event conditions is considered the ‘ecological validity’ [Bartlett, 2007; Elliott and Alderson, 2007; Mündermann et al., 2006]. Controls include constraints on the field of play to ensure cameras maintain
the athlete in frame (the three-dimensional, 3D, volume space), the specific type of lighting, indoor
location (to avoid infrared light, saturation from sunlight, and inclement weather) and simplifying the
background with a neutral or blue screen [Bolink et al., 2015; Elliott and Alderson, 2007; Hong and
Bartlett, 2008; Lloyd et al., 2000]. Water-based sports have a unique set of challenges including the
refractive difference between liquid and air and the influence of the markers on the fluid dynamics
[Callaway et al., 2009].
Although the gold standard passive optical systems are less invasive than active wired methods,
the adverse psychological effects such as fear and distraction experienced by an athlete attempting to
reproduce a world-class performance in an unfamiliar environment should not be underestimated [Birrer
and Morgan, 2010; Corazza et al., 2006; Elliott et al., 2014; Hong and Bartlett, 2008; Mündermann et al., 2006]. While being recorded in the laboratory, clinical participants have been observed adopting a different gait, and athletes recorded with reduced muscular force [Mündermann et al., 2006; Shin et al.,
2011]. The requirement to attach markers to the body appears to disturb the athlete’s preparation routine and performance, not least because of the time it takes to attach a large number of markers – sometimes more than 100 for a full 3D body model – but also because markers inhibit movement, are easily obscured and are susceptible to displacing and detaching, all of which increases the time required of the athlete, the laboratory technician, the sport scientist attaching the markers and the facility itself [Andriacchi and Alexander, 2000; Biasi et al., 2015; Corazza et al., 2006; Wong et al., 2007].
A motion capture system which is cost-effective, fast, and user friendly without impacting on the
game proper, represents the holy grail of sports biomechanics. Emerging optical markerless methods
which use multiple 2D video cameras, cite increasing in-competition practicality, with fewer drawbacks
than a marker-based approach (restricted volume space, initialization posture, movement training, single
subject) [El-Sallam et al., 2013, 2015; Elliott and Alderson, 2007; Joo et al., 2018; Lloyd et al., 2000;
Mehta et al., 2017; Ramakrishna et al., 2012]. However, questions remain about the accuracy, fidelity
and resolution of the kinematics derived using markerless motion capture, which for biomechanics at
least, is yet to displace the incumbent gold standard.
1.5.3 Alternatives to optical motion capture
Moore [1965] predicted computer processing power would double every two years and this increase
of capability and the corresponding decrease in component size has brought about unprecedented
opportunities for non-optical sport motion capture. This miniaturization, and the commoditization of
wearable sensors (or micro-electro-mechanical systems, MEMS), have created new opportunities for
non-invasive devices [Andriacchi and Alexander, 2000; Camomilla et al., 2018; Manal and Buchanan,
2002; Winter, 2009; Wong et al., 2007]. 3D laser and video scans, strain gauges, linear potentiometers,
radar (trackmangolf.com), ultrasound (dynamic-eleiko.com), and radio-frequency identification
(RFID, zebra.com) are all employed by professional teams today; however, a combined GPS and
inertial measurement unit (IMU) is now the most common device in many sports [Andriacchi and
Alexander, 2000; Bolink et al., 2015; Callaway et al., 2009; Camomilla et al., 2018; Krigslund et al.,
2013; Manal and Buchanan, 2002; Mengüç et al.; Mündermann et al., 2006; Winter, 2009].
GPS units deliver velocity (speed and direction) and positional information via line of sight to
medium Earth orbit satellites or indoor beacons. The term GPS is used colloquially to describe systems
which connect to the constellation of United States GPS satellites (gps.gov), the Russian ‘Glonass’
network (glonass-iac.ru), China’s ‘BeiDou’ (beidou.gov.cn), and the planned European Union
‘Galileo’ project (gsa.europa.eu). GPS-only systems are cited for poor validity with high-intensity change-of-direction movements due to their low capture frequency (typically ≤ 10 Hz), and for their limitations indoors (or with the roof closed), although the latter seems more due to a lack of adoption of terrestrial beacon and ultra-wide band (UWB) geolocation technology to facilitate or enhance satellite line-of-sight [Buchheit et al., 2014; Camomilla et al., 2018; Hennessy and Jeffreys, 2018; Nicolella et al., 2018; Zhang et al., 2009].
The generic term IMU is commonly used to describe a wearable device usually containing all
three of the following sensors: accelerometer; gyrometer; and magnetometer. This combination is also
sometimes referred to as a magnetic and inertial measurement unit (MIMU); or magnetic angular rate
gravity device (MARG). On their own, the three components of an IMU give only relative information,
each with specific features and limitations. The accelerometer provides the rate of change in linear velocity and the gyrometer gives angular velocity, each according to its own independent coordinate system, and both are susceptible to positional drift over time. The magnetometer assists with orientation information according to Earth’s magnetic field but is affected by the proximity of ferromagnetic materials, which can be a problem when considering other laboratory equipment. These issues with IMUs are compounded when multiple devices are required per participant, which must be synchronized, and which often share bandwidth to a Bluetooth (bluetooth.com) or Wi-Fi bridge (ieee802.org).
Manufacturers use proprietary algorithms based on linear quadratic estimation, or Kalman filters
[Luinge and Veltink, 2005], to merge the output of the three sensors and convert the independent
relative measures into absolute coordinates with respect to a single global origin. Strongest results
appear to be achieved when the algorithm is tuned according to the movement characteristics of the
specific sporting task being recorded, e.g. throwing activities [Camomilla et al., 2018; Karatsidis et al.,
2016].
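As a toy illustration of the sensor-fusion idea only (the manufacturers' Kalman-filter implementations are proprietary and considerably more sophisticated), the following Python sketch blends a drift-prone gyrometer integration with a noisy accelerometer-derived angle using a simple complementary filter; the sampling rate, blending coefficient and synthetic signals are assumptions for the example.

```python
import numpy as np

def complementary_filter(gyro_pitch_rate, accel_pitch, fs=100.0, alpha=0.98):
    """Fuse gyrometer and accelerometer estimates of a single pitch angle.

    gyro_pitch_rate : angular velocity about the pitch axis (rad/s), per sample
    accel_pitch     : pitch angle inferred from the gravity vector (rad), per sample
    alpha           : weighting of the integrated gyro term (smooth but drift-prone)
                      versus the accelerometer term (noisy but drift-free)
    """
    dt = 1.0 / fs
    pitch = np.zeros_like(gyro_pitch_rate)
    pitch[0] = accel_pitch[0]                      # initialize from gravity
    for i in range(1, len(pitch)):
        gyro_estimate = pitch[i - 1] + gyro_pitch_rate[i] * dt
        pitch[i] = alpha * gyro_estimate + (1.0 - alpha) * accel_pitch[i]
    return pitch

# Synthetic one-second example: constant 0.1 rad/s rotation with a noisy accelerometer.
t = np.arange(0, 1, 0.01)
gyro = np.full_like(t, 0.1)
accel = 0.1 * t + np.random.normal(0, 0.02, t.shape)
print(complementary_filter(gyro, accel)[-1])       # approaches 0.1 rad after 1 s
```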
1.5.4 Adoption of new motion capture methods
Figure 1.3: Bodysuit embedded with seventeen inertial measurement units (IMUs) [Karatsidis et al., 2016].
Markerless optical or IMU motion capture has the potential to address many of the drawbacks of existing marker-based systems in a live game environment, however, there are few examples of accurate and valid markerless methods being used in the field. Typical of a number of studies which seek to improve the accuracy and robustness of optical markerless systems by focusing the point cloud, Biasi et al. [2015] had their participant wear a garment printed with a color-coded pattern while Lenar et al. [2013] projected a structured light pattern into the volume space. Both examples were limited to the laboratory and their methods applied to planar flexion/extension movements or gait related movement patterns. Abrams et al. [2014] analyzed three types of tennis serve and is notable for being one of the few studies to focus on the practical sport application as opposed to the markerless method employed. Carried out under “match-like” conditions the field of view was limited due to the placement of video cameras on the court, and results were qualitatively reported as “comparable to marker-based”. Vlasic et al. [2007] reported a blend of IMU
and ultrasonic devices to capture a variety of sport movements
in the field, albeit with single participants in-frame and limited comparison against a marker-based
system; and Beshara et al. [2016] successfully blended IMU with Kinect (Microsoft, Redmond, WA)
data for measurement of shoulder range of movement. As devices get smaller, a growing area of research
is that of IMU devices embedded in clothing. Bolink et al. [2015] demonstrated such a system in a
laboratory dual data-capture which achieved “good agreement” with a marker-based system tracking
pelvis movement in the frontal and sagittal planes. Slyper and Hodgins [2008] reported garment-based
tracking using five IMUs attached to a shirt, for simple predetermined sport movements, however the
qualitative results lacked comparison with a marker-based reference. Karatsidis et al. [2016] used a
body suit embedded with seventeen IMUs to estimate GRF/M for walking gait and reported mean
correlations of GRF 0.9397 and GRM 0.8230 respectively (Figure 1.3).
On the field, coaches continue to seek new ways of measuring and tracking in order to maximize
player and team performance and availability; for recruitment and talent identification; to minimize
injury risk; and to enhance rehabilitation, e.g. by comparison with a pre-season baseline [Bradley and
Ade, 2018; Buchheit et al., 2014; Coutts et al., 2007; Rossi et al., 2018]. The consumer-led adoption of
wearable devices can cause problems for the sport biomechanist, because one of the main drawbacks
with current wearable sensor IMUs is the recording only of low-fidelity or unidimensional data (e.g.
velocity of a sensor positioned on the upper back; vertical GRF F_z; or simply step frequency counts).
There is a risk that surrogate assumptions are being used to make gross estimations of workload
exposure, with little regard to the underlying biomechanical validity [Camomilla et al., 2018; Giddens
et al., 2017; Thiel et al., 2018].
1.5.5 Recording of ground reaction forces & moments
The ground reaction force (GRF) is the equal and opposite force which acts orthogonally in three dimensions at the foot during walking, running, and sidestepping [Manal and Buchanan, 2002; Winter, 2009]. Together with the associated rotation ground reaction moments (GRMs) and center of pressure information, GRFs are derived from transducers built into the four corners of a force plate or platform [AMTI, 2004; Kistler, 2004].
Figure 1.4: Schematic of force plate ground reaction forces and moments (GRF/M, biomch-l.isbweb.org).
Analysis of the telemetry provided by force plates (Figure 1.4), the understanding of the external forces acting on the body, can be invaluable to
clinicians and coaches, for quantitative gait and sport technique performance monitoring, and for
tracking orthopedic pathology or injury rehabilitation progress [Manal and Buchanan, 2002; Winter,
2009].
The two most common types of force transducer are the strain gauge (e.g. as used by AMTI,
Advanced Mechanical Technology Inc., Watertown, MA) and piezo-electric quartz crystal (e.g. from
Kistler Holding AG, Winterthur, Switzerland). Both types of transducer respond to applied force with
a corresponding electrical voltage, which is amplified, digitized, recorded and usually synchronized
with motion capture data for subsequent combined processing. The output voltage is proportional
to a change in the transducer’s electrical resistance, however with the strain gauge this is due to
mechanical distortion, but in the piezo-electric type a change in the crystal’s atomic structure. Different
manufacturers have independent design approaches to the characteristics of the transducer output
channels (and force plate coordinate system), but the resultant GRF/M is a standard measurement
[AMTI, 2004; Kistler, 2004; Manal and Buchanan, 2002; Winter, 2009].
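As a simplified sketch of that conversion only (the channel ordering and the sensitivity values below are placeholder assumptions rather than a published calibration; real plates ship with a manufacturer- and installation-specific calibration matrix), the mapping from amplified channel voltages to the six GRF/M components reduces to a single matrix multiplication:

```python
import numpy as np

# Hypothetical 6x6 sensitivity (calibration) matrix: it maps the six amplified
# channel voltages to [Fx, Fy, Fz, Mx, My, Mz]. Real values come from the
# manufacturer's calibration certificate for a specific plate and amplifier gain.
S = np.diag([500.0, 500.0, 1000.0, 300.0, 300.0, 150.0])   # N/V or N·m/V

def volts_to_grfm(volts):
    """Convert a (n_samples, 6) array of channel voltages to GRF/M."""
    return volts @ S.T

volts = np.array([[0.01, -0.02, 0.85, 0.001, 0.002, -0.001]])  # one sample
print(volts_to_grfm(volts))   # -> [[5., -10., 850., 0.3, 0.6, -0.15]]
```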
Force plates, along with the different types of force transducers, can be affected by a variety of
systematic errors. It is essential that installation of the platform is carried out in such a manner as
to minimize vibration and to respect the frequency and absolute force magnitude of the intended
movement. Green-field building construction is the ideal scenario for optimal force recordings, whereby
the platform is mounted flush with the concrete pad of the ground floor. However, this is not always
feasible, makes the force plate challenging to move or install in outdoor sporting environments, and
errors can still be propagated from failures in maintenance, calibration, and operation [AMTI, 2004;
Collins et al., 2009; Kistler, 2004; Psycharakis and Miller, 2006].
1.5.6 Alternative GRF/M recording methods & downstream joint kinetics
Studies have attempted to measure GRF in the field through a variety of pressure sensitive sensors
or instrumented attachments [Burns et al., 2018; Choi et al., 2018; Faber et al., 2010; Liedtke et al.,
2007; Liu et al., 2010; Manal and Buchanan, 2002; Oerbekke et al., 2017; Renner et al., 2019; Seiberl
et al., 2018; Winter, 2009] (Figures 1.5 & 1.6). In-shoe devices measure points of contact or pressure
distributions (rather than center of pressure), and early examples suffered from being cumbersome
to the athlete (which can affect performance, e.g. tecgihan.co.jp). All but Liu et al. report
unidimensional vertical ground reaction force (vGRF), and differences are cited when compared
with embedded force plate data specifically in peak vGRF [Faber et al., 2010; Liedtke et al., 2007].
Figure 1.5: Instrumented shoe prototype. Triaxial force sensors being used to estimate GRF [Liu et al., 2010].
Figure 1.6: OpenGo insole pressure sensors [Oerbekke et al., 2017]. Left, from above; center, from below; right, sensor distribution.
The one example found of machine learning processing of pressure insole data achieves mean correlations to ground truth of 0.94 for vGRF [Choi et al., 2018]. Two recent commercial products, Loadsol (Novel GmbH, Munich, Germany) and OpenGo (Moticon ReGo AG, Munich, Germany), are reported [Burns et al., 2018; Oerbekke et al., 2017; Renner et al., 2019; Seiberl et al., 2018]. Less invasive than previous solutions, the Loadsol studies extend the comparison of insole performance to include running gait, whereas the OpenGo is used for calculating gait events rather than GRF. Unfortunately, the analyses of the Loadsol insoles suffer from recording vGRF only, from a variety of running speeds (a combined range of 2.22 m/s to 3.33 m/s compared with the literature walk-to-run definition of 2.16 m/s [Segers et al., 2007]), and low sampling rates affecting rate of force prediction at foot-strike and toe-off.
Figure 1.7: Instrumented baseball pitching mound [Yanai et al., 2017].
In attempts to maintain ecological validity, practitioners have proposed alternatives to (a) bring the field into the laboratory by mounting turf on the surface of the force plate [Jones et al., 2009], or (b) embed force plates into the field of play, in this case, a baseball pitching mound [Yanai et al., 2017] (Figure 1.7). However, the limited accuracy of such methods has so far restricted their usability to monitoring of simple gait (e.g. walking), or estimation of a single force component (primarily vertical F_z). Portable, inexpensive platforms such as the Nintendo Wii Balance
Board (Nintendo, Kyoto, Japan) have also become popular, and are cited as a valid tool but one
limited to standing balance assessment [Clark et al., 2010]. For accurate and valid multidimensional
ground and joint kinetics of dynamic movements, the force plate continues to prevail, and together
with the dominant use of optical marker-based motion capture, these two systems are what ties the
majority of biomechanical analysis to the laboratory. An accurate and valid model for predicting
multidimensional GRF/M away from the laboratory would provide a welcome solution to many human
movement measurement use-cases.
A common downstream analysis of force plate outputs is to calculate knee joint moments (KJM)
via inverse dynamics (ID) [Manal and Buchanan, 2002; Yamaguchi and Zajac, 1989]. These are
calculations which rely on custom upper and lower body biomechanical models, examples of which have
been developed at UWA over the past twenty years, with the aim of providing repeatable kinematic
and kinetic data outputs [Besier et al., 2003; Chin et al., 2010]. The reason an understanding of
KJMs is of interest to many researchers is the potential link between high external knee moments
(abduction-adduction and internal-external rotation) during sidestepping movement activities, and
load related knee injury, particularly that of the anterior cruciate ligament (ACL) [Besier et al., 2001;
Dempsey et al., 2007]. The analysis to calculate ID can be time-consuming and therefore difficult to
compress into real-time using traditional methods. If the ACL injury mechanism is determined to
be an extreme load-related event, inherently linked to KJMs, then the ability to predict KJMs away
from the laboratory and in real-time may be an invaluable contributor to an on-field early warning
system for prevention of non-contact knee trauma. The ideal ensemble of real-time welfare monitoring
includes multidimensional GRF/M and joint moments [Calvert et al., 1976; Gabbett, 2018; Matijevich
et al., 2019; Vanrenterghem et al., 2017], but the problem is both the traditional motion capture inputs
and force plate outputs remain captive to the laboratory environment.
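For orientation only, a heavily simplified, quasi-static sketch of the lever-arm idea behind such joint moment calculations is given below; it ignores the segment mass, inertial and linked-segment terms that full inverse dynamics (and the UWA models cited above) include, so it is an illustration of the concept rather than an inverse dynamics implementation, and all numerical values are placeholders.

```python
import numpy as np

def external_joint_moment(grf, cop, joint_center):
    """Quasi-static external moment of the GRF about a joint center (N·m).

    grf          : (3,) ground reaction force vector [Fx, Fy, Fz] in N
    cop          : (3,) center of pressure location in m (lab frame)
    joint_center : (3,) joint center location in m (lab frame)

    Ignores segment weight and inertial (m·a, I·alpha) terms, so this only
    illustrates the lever-arm idea, not a replacement for inverse dynamics.
    """
    lever_arm = cop - joint_center
    return np.cross(lever_arm, grf)

grf = np.array([50.0, -30.0, 800.0])          # N, a stance-phase force
cop = np.array([0.10, 0.05, 0.0])             # m
knee = np.array([0.12, 0.08, 0.45])           # m
print(external_joint_moment(grf, cop, knee))  # external knee moment estimate
```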
1.5.7 History of deep learning
In 1950, Alan Turing posed his famous challenge that “if a human is unable to tell – via a keyboard and
screen interface – whether they are communicating with another human or a machine does this mean
the computer is thinking?” [Turing, 1950], and in the first edition of the journal Machine Learning in
1986, the editorial considered “can a machine show improvement over time?” [Langley, 1986]. Thirty
years ago, the theory behind neural networks, and the potential of back-propagation algorithms for
building self-learning non-linear machine models, rather than just task-specific algorithms, was being
widely investigated but was limited by the lack of computer processing capacity of the time [Ashford
and Johnson, 1989; Kosko, 1988; LeCun et al., 2015; Moore, 1965].
This branch of machine learning, based on the neural network model of the human brain, and
which uses a number of hidden internal layers, came to be known as ‘deep learning’ [LeCun et al.,
2015]. It had been established that training deep learning models from scratch would require vast
amounts of raw data, usually described as ‘big data’, a term which evolved from informal Silicon Valley
meetings and seminars in the late 1990’s [Mashey, 1998]. Traditional relational database management
systems were increasingly struggling to cope with massive data sets, due to the scale of volume, velocity,
or variety (the “three V’s” of big data) [Laney, 2001]. But it was also already understood that it
was precisely this scale of big data, more than any other factor, which was necessary to improve the
performance of deep learning models [Goodfellow et al., 2016; LeCun et al., 2015].
Deeper networks have more learning capacity because they have more levels of hierarchical repre-
sentation and more levels of non-linearity. More learning capacity means that the network is capable of
learning more complex non-linear relationships within the data. The convolutions (sliding windows)
in convolutional neural network (CNN) architectures help by making the network deeper while reducing
the number of parameters to be learned, hence keeping the resource requirements under control.
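A back-of-the-envelope comparison, with layer sizes chosen arbitrarily for illustration, shows why weight sharing in convolutions keeps the parameter count manageable as networks get deeper:

```python
# Hypothetical layer sizes chosen only to illustrate the scaling argument.
h, w, c_in, c_out, k = 224, 224, 3, 64, 3

# Fully connected: every input pixel-channel connects to every output unit.
dense_params = (h * w * c_in) * (h * w * c_out)

# Convolution: one k x k kernel per input/output channel pair (weights shared
# across all spatial positions), plus one bias per output channel.
conv_params = (k * k * c_in) * c_out + c_out

print(f"dense: {dense_params:,} weights vs conv: {conv_params:,} weights")
# dense: roughly 483 billion weights vs conv: 1,792 weights
```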
But it wasn’t until the release of the large ImageNet dataset [Krizhevsky et al., 2012; Russakovsky
et al., 2015] containing millions of labeled images, that the CNN model called ‘AlexNet’ achieved
a breakthrough in accuracy, winning the 2012 ImageNet Large Scale Visual Recognition Challenge
(ILSVRC, image-net.org, Figure 1.8). This success was fueled by the new technique of fine-tuning
(transfer learning) whereby re-training occurs only for discrete higher-level components of an existing
related model, itself selected for its relevance to the new data-set [Szegedy et al., 2014]. In addition, the
transfer of model weights that have been learned from a more relevant task into the fine-tuning process
(double-cascade), one which has been tuned on a larger dataset, was found to result in better accuracy
[Zamir et al., 2018]. Capitalizing on existing CNN models via fine-tuning methods allows for training
from fewer samples (i.e. thousands instead of millions) with concomitant reductions in CPU and GPU
processing cost. The renewed momentum for research into deep learning led to the development of new
application frameworks, Caffe (Convolutional Architecture for Fast Feature Embedding), for example,
which was released by the AlexNet team, and maintained by Berkeley AI Research (BAIR) [Jia et al.,
2014].
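Purely as an illustrative sketch of the freeze-and-retrain idea (not the specific pipeline used in the thesis studies), the workflow can be expressed with the TensorFlow Keras API; the input resolution and the hypothetical 6 × 100-sample regression output below are assumptions for the example.

```python
import tensorflow as tf

# Load a CNN pre-trained on ImageNet and discard its classification head.
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False            # freeze the pre-trained convolutional layers

# New head: multivariate regression of a hypothetical 6 x 100-sample waveform.
outputs = tf.keras.layers.Dense(600, activation="linear")(base.output)
model = tf.keras.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
model.summary()
# model.fit(images, waveforms, epochs=10)  # training data not shown
```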
In 2015, a new model architecture called ‘ResNet’ was the ILSVRC challenge winner [He et al., 2015].
With up to 152 layers, ResNet is much deeper than the eight layers of AlexNet, uses more convolutions
to simplify the depth, and includes residual connections to stabilize gradient back-propagation and
hence learning. Greater model depth provides an increase in accuracy and resilience to noise. The new
architectures reward greater raw detail with higher learning capacity because they retain the original
size and granularity of the input image through a much longer sequence of convolutions. Paired with
new deep learning frameworks like TensorFlow [Abadi et al., 2016], the potential for using ResNet to
drive new predictive analytics use-cases for biomechanics is particularly disruptive.
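A minimal sketch of the residual idea, with illustrative filter counts rather than ResNet's published configuration, shows how the identity 'shortcut' is added back onto the convolutional path:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """Two convolutions plus an identity shortcut, as in a basic ResNet block."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])   # the residual connection
    return layers.ReLU()(y)

inputs = tf.keras.Input(shape=(32, 32, 64))
outputs = residual_block(inputs)
tf.keras.Model(inputs, outputs).summary()
```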
1.5.8 Practical applications of deep learning in biomechanics
Most practical AI applications today are supervised learning, e.g. classification (logistic) or linear
regression, whereby given an input predictor a model is trained on a discrete output label response.
Figure 1.8: Convolutional neural network architecture (adapted from au.mathworks.com).
In other words, the model prediction, to previously unseen input, is one of the object categories it
knows about, the label which it considers the strongest. Other use-cases, perhaps to model analog
waveforms, for example, require the CNN output layer to be modified for multivariate regression
(mlyearning.org, cs231n.github.io), and unsupervised learning, by comparison, is where the model
identifies clustering and association relationships and is used where the ground truth outcome is not
known.
The potential for biomechanics is the leveraging of its existing repositories of large-scale data,
which are traditionally isolated post-hypothesis inside individual laboratories or research institutions.
Machine learning offers new ways to extract value from this latent raw data, and even to raise the
bar for justification of new data capture (collection and analysis) when so much already exists. The
problem is that the increasingly constant stream of output produced by always-on sensors (satisfying
the big data “three V’s” [Laney, 2001]) has the potential to overwhelm practitioners, many of whom
will admit to reducing the number of components in the data stream (by combining or removing
feature columns) ahead of traditional offline linear statistical processing. This manual selection and
manipulation of input features, by the vendor or the client, risks throwing away information which
may have improved model training, mistaking overfitting for personalization, and reducing the model
generalization (imeasureu.com, kitmanlabs.com, vxsport.com). The opportunity is that, because of its origins in image-based models, deep learning with CNN networks would appear well-suited to the scale
and 3D time-based data structures found in biomechanics [Cust et al., 2018; Krizhevsky et al., 2012].
And while deep learning requires larger amounts of training data, once the model is trained it can be
applied on any size data, even a single sample.
In recent years, the prediction of GRF/M from kinematics has moved away from linear statistics and towards machine learning models, to capitalize on the ability of this approach to model non-linearity whilst being more robust to a lower signal-to-noise ratio [Cust et al., 2018; Fluit et al., 2014; Halilaj
et al., 2018; Jung et al., 2016; Yang et al., 2013]. Oh et al. trained a single hidden layer neural network
using 48 participants (one trial per participant) each walking at a self-selected pace and reported
mean correlations of GRF 0.9647 and GRM 0.8987 [Oh et al., 2013]. Early examples of a machine
learning approach attempted to train basic models (e.g. support vector machines rather than deep
learning) from scratch, and with small sample sizes (tens or hundreds of movement trials, rather than
thousands or millions) [Choi et al., 2013, 2018; Halilaj et al., 2018; Oh et al., 2013; Richter et al., 2018;
Rossi et al., 2018; Sim et al., 2015]. Meanwhile, reports of predicting GRF/Ms using computer vision
techniques lacked validation to a biomechanics gold standard or criterion reference [Soo Park and Shi,
2016; Wei and Chai, 2010] or relevance to sporting tasks [Chen et al., 2014].
More recently, researchers have begun to move towards more modern deep learning models like
TensorFlow [Abadi et al., 2016], but the sample sizes (obtained via tens of participants) remain much
smaller than expected (especially for training models from scratch rather than fine-tuning), which
risks sub-optimal network performance when the number of samples is less than the number of input
features [Goodfellow et al., 2016; Halilaj et al., 2018; Hu et al., 2018; LeCun et al., 2015; Zimmermann
et al., 2018]. Most of the literature seems to prefer the use of canned toolbox functions rather than
delving into the architecture and hyperparameter optimization instructions (‘prototxt’ files) of native
deep learning frameworks like Caffe and TensorFlow [Ancillao et al., 2018; Choi et al., 2018; Cust
et al., 2018; Jacobs and Ferris, 2015; Jie-Han et al., 2018; Wouda et al., 2018]. In their review of sport
movement classification (rather than biomechanical output systems), Cust et al. highlight the enhanced
opportunities provided by deep learning and CNN models [Cust et al., 2018], which are dependent on
a combined multidisciplinary data science and sport science research approach [Halilaj et al., 2018].
Utilized by two studies in this example set [Hu et al., 2018; Zimmermann et al., 2018], the convention for dealing with spatio-temporal data is to incorporate the complexity of Long Short-Term Memory (LSTM) layers [Goodfellow et al., 2016] into the network architecture. By matching the input
feature topology, the representation of spatio-temporal data as static 2D images (flattened) would
potentially allow for biomechanical analyses to re-use the performance of existing models, avoiding the
LSTM architecture problem altogether [Du et al., 2015; Ke et al., 2017]. This fine-tuning strategy,
to re-train existing models pre-trained on big data, would facilitate deep learning from relatively
small sample sizes (hundreds or thousands of trials) and seems to be an untapped opportunity for
biomechanists.
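A minimal sketch of that flattening idea is shown below; the marker count, frame counts and min-max scaling are assumptions for illustration rather than the specific image representation used in the thesis studies.

```python
import numpy as np

def trajectories_to_image(markers, out_frames=224):
    """Arrange (n_markers, n_frames, 3) trajectories as a static 2D array.

    Rows are marker/axis combinations (e.g. 8 markers x XYZ = 24 rows) and
    columns are frames resampled to a fixed width by linear interpolation.
    Values are min-max scaled to [0, 1] so they can be treated like pixels.
    """
    n_markers, n_frames, _ = markers.shape
    flat = markers.transpose(0, 2, 1).reshape(n_markers * 3, n_frames)
    old_t = np.linspace(0, 1, n_frames)
    new_t = np.linspace(0, 1, out_frames)
    resampled = np.vstack([np.interp(new_t, old_t, row) for row in flat])
    lo, hi = resampled.min(), resampled.max()
    return (resampled - lo) / (hi - lo + 1e-9)

image = trajectories_to_image(np.random.rand(8, 125, 3))
print(image.shape)   # (24, 224)
```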
1.6 Summary
Biomechanical laboratory equipment is generally unusable in the field. The use of surrogate linear
output models tends to oversimplify biomechanical parameters to such an extent that their relevance to the biomechanist, athlete, and coach is lost. A new approach would be to use deep
learning models to estimate GRF/M and KJM from motion capture in lieu of captive force plate
instrumentation and analyses. Such technology would not only challenge the normal science of research
methods, but the practical application would potentially bring the benefits of in-competition (and
on-site) quantitative kinetics to the athlete and clinical patient in terms of monitoring performance
and skill development, reducing injury risk, and improving rehabilitation outcomes.
1.7 References
M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean,
and M. Devin. TensorFlow: Large-scale machine learning on heterogeneous distributed systems.
arXiv:1603.04467, 2016.
P. Abbeel, A. Coates, and A. Y. Ng. Autonomous helicopter aerobatics through apprenticeship learning.
The International Journal of Robotics Research, 29(13):1608–1639, 2010. ISSN 0278-3649.
G. D. Abrams, A. H. Harris, T. P. Andriacchi, and M. R. Safran. Biomechanical analysis of three
tennis serve types using a markerless system. British Journal of Sports Medicine, 48(4):339–342,
2014. ISSN 1473-0480.
P. Allard, A. Cappozzo, A. Lundberg, and C. Vaughan. Three-dimensional analysis of human locomotion.
John Wiley & Sons, Chichester, West Sussex, England, 1997. ISBN 0471969494.
AMTI. Biomechanics platform instruction manual. Advanced Mechanical Technology Inc, Watertown,
MA, 2004. ISBN 1134298730.
Predicting athlete ground reaction forces and moments from motion capture
2.1 Abstract
An understanding of athlete ground reaction forces and moments (GRF/Ms) facilitates the biomecha-
nist’s downstream calculation of net joint forces and moments, and associated injury risk. Historically,
force platforms used to collect kinetic data are housed within laboratory settings and are not suitable
for field-based installation. Given that Newton’s Second Law clearly describes the relationship between
a body’s mass, acceleration and resultant force, is it possible that marker-based motion capture can
represent these parameters sufficiently enough to estimate GRF/Ms, and thereby minimize our reliance
on surface embedded force platforms? Specifically, can we successfully use Partial Least Squares (PLS)
regression to learn the relationship between motion capture and GRF/Ms data? In total, we analyzed
eleven PLS methods and achieved average correlation coefficients of 0.9804 for GRFs and 0.9143 for
GRMs. OurresultsdemonstratethefeasibilityofpredictingaccurateGRF/Msfromrawmotioncapture
trajectories in real-time, overcoming what has been a significant barrier to non-invasive collection of
such data. In applied biomechanics research, this outcome has the potential to revolutionize athlete
performance enhancement and injury prevention.
Keywords: Action recognition · Wearable sensors · Computer simulation.
2.2 Introduction
One of the strongest criticisms of sports biomechanics is that measurements of GRF/Ms, necessary for
the estimation of internal and external musculoskeletal loads and associated injury risk, can only be
collected in controlled research laboratory environments using external force transducers. Subsequently,
the sport biomechanist is forced to trade ecological validity of the more desirable field-based data
collection for laboratory-based methods in order to record higher fidelity data outputs (Figure 2.1)
[Bartlett, 2007; Chiari et al., 2005; Elliott and Alderson, 2007; Mündermann et al., 2006].
Knee anterior cruciate ligament (ACL) injury can be a season or career-ending event for a professional
athlete and increases the risk of later osteoarthritis pathology [Dallalana et al., 2007; Filbay et al., 2014].
The majority of ACL injuries (51 to 80 %) occurring in team sports such as Australian Rules Football,
basketball and hockey are non-contact in nature, with more than 80 % suffered during a sidestep
maneuver or single-leg landing [Donnelly et al., 2016 (in press); Shimokochi and Shultz, 2008]. In-silico,
in-vitro and laboratory studies have identified an increase in knee joint moments as indicators of ACL
injury risk [Dempsey et al., 2007; Donnelly et al., 2012; Hashemi et al., 2011] and an understanding
of on-field GRF/Ms constitutes the first step towards the development of a monitoring system that
estimates knee joint moments, thereby providing an early warning system for ACL injury risk. The
ability to monitor real-time ACL injury risk enables the development of counter-measure preventative
strategies including new biofeedback measures.
Figure 2.1: Laboratory motion and force plate data capture overlay. The force plate is highlighted blue, markers used are shown artificially enlarged and colored red/orange/green, those not used have been reduced and grayed (real and virtual/modelled markers).
Previous studies have attempted to improve the ecological validity of laboratory based GRF/Ms data collections, with Müller et al. [2010] investigating properties of artificial turf using varying shoe stud configurations. Samples of turf were mounted to the surface of a force plate and 50 m2 of the surrounding area. Similarly, Jones et al. [2009] tested the effects of different artificial turf types on landing and knee biomechanics by mounting samples in a tray fixed above the force plate. Others have attempted to measure GRF/Ms in the field through a variety of in-shoe pressure-sensitive sensors or attachments [Liu et al., 2010; Manal and Buchanan, 2002; Sim et al., 2015; Winter, 2009], however, such devices suffer from being cumbersome to the athlete and measure points of contact or pressure distributions (rather than center of pressure). Importantly, the reported values differ significantly from those derived directly from force plates, although Sim et al. [2015] did cite improvements via the use of neural networks (NNs). Researchers have derived
GRF/Ms from kinematics using linear statistics, or again from NNs [Jung et al., 2016; Oh et al., 2013],
with these studies conducted indoors using gait trials. Jung et al. [2016] tested ten participants at
speeds up to 3.0 m/s while Oh et al. [2013] trained a single hidden layer NN using 48 participants
(one trial per participant) each walking at a self-selected pace. Efforts to predict GRF/Ms using
non-invasive computer vision techniques show promise but either lack validation to a gold standard or
criterion reference [Soo Park and Shi, 2016; Wei and Chai, 2010] or relevance to sporting tasks [Chen
et al., 2014]. This paper proposes a novel approach, where the scale of historically collected big data is
used to predict GRF/Ms using the input variables: (1) eight marker motion capture trajectories, and
(2) participant mass, sex and height [Alderson and Johnson, 2016].
The School of Human Sciences at The University of Western Australia (UWA) was one of the
first to establish a Sport Science/Human Movement university degree in the southern hemisphere
and houses one of the largest sports related marker-based movement data repositories in the world
[Bloomfield, 2012]. This study capitalizes on this data by employing PLS [Mevik and Wehrens, 2007]
and its kernel variants to learn linear and nonlinear models whereby, given a new sample of motion
capture data (marker-based data) we can estimate a participant’s GRF/Ms in the absence of a force
plate. The accuracy and validity of this approach is confirmed by reporting the mean correlations
between GRF/Ms traditionally derived, and those predicted by the PLS methods. We aim first to test
the hypothesis that our interpretation of mass and acceleration (via motion capture marker data) and
force (recorded from a force plate) is complete enough that PLS can establish a strong relationship
between these variables.
2.3 Background
For over thirty years, Vicon (Oxford Metrics, Oxford, UK) has been developing motion capture
technology, and the company is considered the world leading gold standard manufacturer of passive
marker-based motion analysis systems. High-speed video cameras together with near-infrared (IR)
light strobes are used to illuminate small spherical retro-reflective markers attached to the body [Chiari
et al., 2005; Lloyd et al., 2000], with Carse et al. [2013] citing the reconstruction error of such optical
systems at less than 0.3 mm [Elliott and Alderson, 2007].
Often captured concurrently with motion data, force platforms/plates are used to measure the
forces and moments applied to its top surface as a participant stands, steps (walk/run), jumps from
or lands on it. Three orthogonal force (axes) and three moment components are measured when a
participant is in contact with the plate including F and F representing the horizontal (shear) forces
x y
and F the vertical force, and M , M and M the three rotation moments around the corresponding
z x y z
x, y and z force axes respectively. Force platforms used to record this data may utilize a wide variety
of force transducer types (e.g. piezo-resistive, piezo-electric) which are generally located in each of the
four corners of the platform. Installation of force plates must be carried out in such a manner as to
minimize vibration, and with regard to the frequency and absolute force of the intended movement to
be captured. For this reason, specialized force plate mounting, directly inside a concrete pad during
laboratory construction, produces the best ongoing results [AMTI, 2004] but which makes the platform
difficult to move or install in sporting environments. GRF/Ms are fundamental to the calculation of
joint kinetics, the forces that lead to movement [Winter, 2009], and consequently this information is
critical for all research that seeks to gain an understanding of the mechanism behind performance,
injury and disease.
PLS is a class of supervised multivariate regression techniques which projects data to a lower
dimensional space where the covariance between predictor and response variables is maximized [De Bie
et al., 2005]. This generally leads to a more accurate regression model compared with, for example,
Principal Component Regression (PCR) which maximizes the variance of the predictor variables
without taking into account the response variables. PLS is generally referred to as a multilinear
regression (MLR) technique, however, it is able to perform nonlinear regression by projecting the
data to a higher dimensional nonlinear space where the relationship between the two variable types
is linear [Boulesteix, 2004]. First developed in the 1960s, PLS performs well with many predictor variables but few examples, a characteristic found to be a good fit for statistical problems in
the natural sciences [Chun and Keleş, 2010; Mevik and Wehrens, 2007].
Figure 2.2: Study overall design.
Figure 2.3: UWA custom in-house marker set. The eight markers used by this study have been highlighted.
More recently, sparse PLS
techniques have emerged which can better deal with multivariate responses when some of the predictor
variables are noisy. Because of the economic nature of marker-based motion capture representation
(compared with video for example) a secondary hypothesis for this study is that sparse PLS will return
the strongest predictor (motion capture plus mass, sex and height) to output (GRF/Ms) response.
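The PLS implementations evaluated in this study were MATLAB's PLS Toolbox and the R pls and spls packages (see Methods); purely as an illustrative Python sketch, using scikit-learn's linear PLS with small synthetic arrays standing in for the real predictor and response matrices, the fit-predict-correlate workflow with 10-fold cross-validation over the number of components looks like this:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

# Small synthetic stand-in for the real arrays (441 samples in this chapter,
# with 3,003 predictor columns and a multivariate GRF/M response).
rng = np.random.default_rng(0)
X = rng.normal(size=(441, 300))
Y = X @ rng.normal(size=(300, 60)) + rng.normal(scale=0.5, size=(441, 60))

def mean_correlation(y_true, y_pred):
    """Mean Pearson r between true and predicted response vectors, per sample."""
    rs = [np.corrcoef(y_true[i], y_pred[i])[0, 1] for i in range(len(y_true))]
    return float(np.mean(rs))

for n_components in (1, 6, 11, 21):          # search over the PLS tuning parameter
    scores = []
    for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        model = PLSRegression(n_components=n_components).fit(X[train], Y[train])
        scores.append(mean_correlation(Y[test], model.predict(X[test])))
    print(n_components, round(np.mean(scores), 4))
```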
2.4 Methods
2.4.1 Design & setup
The methodological design schematic of this study is shown in Figure 2.2. Original setup and data
capture was carried out at one of the two UWA Sports Biomechanics Laboratories (Figure 2.4) over a
15–year period (2000–2015). All participants used in the archive studies were from a young healthy
athletic population (male and female, amateur to professional) as opposed to any medical or clinical
cohort. Dynamic movement trials included a wide variety of generic movement patterns such as
walking and running, but also sport-specific movements such as football kicking and baseball pitching.
UWA employs a custom, repeatable and well published upper and lower limb marker set comprising
67 full body retro-reflective markers [Besier et al., 2003; Chin et al., 2010; Dempsey et al., 2007].
This includes markers placed arbitrarily on body segments and markers positioned on anatomically
relevant landmarks used to define the joint centers and axes required for anatomical coordinate system
definition (e.g. pelvis anterior superior iliac spines, lateral ankle malleoli). Given that the marker
set has evolved considerably over the 15–year period a subset of markers was identified that were
consistently and reliably present across all static and dynamic trials of the motion data repository.
With the goal of describing movement completely enough that PLS can establish the motion–force
relationship, and following earlier pilot testing with larger and smaller marker subsets, the following
eight anatomically-relevant markers were selected for inclusion in the present study (Figure 2.3):
C7, SACR sacrum (automatically constructed between LPSI and RPSI – posterior superior
iliac spine left and right), LMT1 left hallux (big toe), LCAL left calcaneus (heel), LLMAL
left lateral ankle malleolus (outer ankle), and likewise for the right foot RMT1, RCAL, and
RLMAL.
Between 12–20 Vicon near-infrared cameras across a combination of model types (MCam2, MX13 and
T40S) were mounted on tripods and wall-brackets and aimed at the desired reconstruction volume space
(Figure 2.4). Camera calibration (static and dynamic) for all data collection sessions was conducted
in accordance with manufacturer recommendations. An AMTI force plate (Advanced Mechanical
Technology Inc, Watertown, MA, USA) measuring 1,200 × 1,200 mm, operating at 2,000 Hz and
installed flush with the floor was used to record the six GRF/Ms: F_x, F_y, F_z, M_x, M_y and M_z. The
biomechanics laboratory is a controlled space which utilizes lights and wall paint with reduced IR
properties. The floor surface coverings have varied over the 15–year data collection period ranging
from short-pile wool carpet squares to artificial turf, both laid on the force plate surface and the
wood parquetry surrounding the platform. The relevant proprietary motion capture software that was
distributed by the system hardware manufacturer at the time of data collection was used to record and
reconstruct the marker trajectories. Irrespective of hardware and software configuration at the time of
data collection all reconstructed marker data was compiled and stored in the industry standard c3d file
format for motion trajectory and analog data (‘coordinate 3D’, Motion Lab Systems, Baton Rouge, LA).
2.4.2 Data mining phase
Over the past two decades, much attention has been paid to identifying the biomechanical precursors
to ACL injury and consequently the analysis of change of direction (sidestep) maneuvers has been a
strong research theme of the biomechanics group at UWA and their collaborators. Given this long data
collection history and the subsequent likelihood of a large number of sidestepping motion trials within
the legacy motion capture repository, this paper focuses on establishing the motion–force relationship
of a single motion trial type: sidestep maneuvers to the left that are performed off the right limb (i.e.
right foot plant, Figures 2.5 and 2.6). Data mining of the department’s motion/force plate capture
repository was carried out under UWA ethics exemption RA/4/1/8415. Contrary to the traditional
scientific method approach of the sport sciences, the philosophy of this study was one of scale, with a
mandate to use data capture from as many different sessions as possible (intra-laboratory, multiple
testers), and to avoid manual editing of source c3d files. Data mining was conducted using MATLAB
R2016b (MathWorks, Natick, MA) in conjunction with the Biomechanical ToolKit v0.3 [Barre and
Armand, 2014], both running on Ubuntu v14.04 (Canonical, London, UK), a development environment well-suited to the prototype nature of the study. Hardware employed was a desktop PC, Core i7
4GHz CPU, with 32GB RAM.
From a given top-level folder, the file-system was scanned for motion capture standard c3d files, to
which several pre-processing steps were applied to confirm the integrity of the marker trajectories and
force plate data before a trial was deemed acceptable and added to the overall data-set. First, the data
mining relied only on trials with contiguous markers being labeled and present in the trial and was
agnostic to any post-processing artefact associated with filtering or biomechanical modeling (i.e. we
only utilized the labeled trajectories of eight real markers). Mass was considered a mandatory input
feature but it was theorized that sex (female = 1, male = 0) and height may also have an important
contribution, so they were added to the predictor (input) variable set. These participant specific values
(mass, sex and height) were retrieved from the c3d file or the associated mp file (mp is a proprietary
extensible markup language XML file format used by Vicon for session and anthropometric data). At
this time, children were excluded by rejecting trials where the participant height was less than 1,500
mm (two standard deviations below the average Australian adult female height 1,644 ± 72 mm, age
19–25 years [Ward, 2011]).
The foot-strike event was automatically determined by detecting vertical force F_z greater than a
threshold (20 N) over a defined period (0.025 s) [Milner and Paquette, 2015]. Compared with trials
where the foot-strike event was previously visually identified by the biomechanist undertaking the
original data collection, the mean correspondence of the automatic method was ±0.0054 s. Analog force
plate data sampled at frequencies lower than 2,000 Hz and motion capture lower than 250 Hz were
time normalized using piecewise cubic spline interpolation. The lead-in period before the foot-strike
was deemed to be more important for the predictor movement, and therefore the marker data was
trimmed around the foot-strike event from -0.20 to +0.30 s (125 frames f), and force plate data from
-0.05 to +0.30 s (700 frames f).
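A minimal sketch of this event detection is given below; the 20 N threshold and 0.025 s window follow the text, while the synthetic force trace and sampling rate handling are placeholders for illustration.

```python
import numpy as np

def detect_foot_strike(fz, fs=2000, threshold=20.0, hold_time=0.025):
    """Return the first sample index where vertical force Fz exceeds the
    threshold continuously for hold_time seconds, or None if not found."""
    hold_samples = int(round(hold_time * fs))
    above = fz > threshold
    for i in range(len(fz) - hold_samples + 1):
        if above[i:i + hold_samples].all():
            return i
    return None

# Synthetic trace: 0.1 s of baseline noise, then a rising stance-phase force.
fs = 2000
fz = np.concatenate([np.random.normal(0, 2, int(0.1 * fs)),
                     np.linspace(0, 800, int(0.3 * fs))])
strike = detect_foot_strike(fz, fs)
print(strike, strike / fs)   # sample index and time of foot-strike
```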
A number of consistency checks were performed to consider the overall integrity of the laboratory
equipment setup and calibration. Trials where the participant appeared to move backward, where the
vertical height of markers was unexpected, where all marker coordinates dropped to zero (i.e. missing
data), where the start and end vertical force value was unexpected, or the foot-strike was incomplete,
were rejected. Templates were used to automatically classify the range of indoor movements found into
one of six types:
Static (still), walk, run, run and jump, sidestep left and sidestep right (regardless of whether
the sidestep was planned or unplanned, crossover or regular, or foot-strike technique).
If the motion capture and force plate data passed these checks for quality, it was reassembled into the
data-set arrays X (predictor samples × input features) and y (response samples × output features)
typical of the format used by multiple regression [Mevik and Wehrens, 2007], Figure 2.7. Trials with
duplicate X data were rejected, therefore avoiding the situation where the same motion capture input
referred to multiple pre and post-filtered analog force plate data.
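For illustration, one accepted trial could then be flattened into a row of X and y as in the NumPy sketch below (an assumed equivalent of the MATLAB array handling); 8 markers × 3 axes × 125 frames plus mass, sex and height gives the 3,003 predictor features referred to later, and the six force plate channels × 700 frames form the response row.

```python
import numpy as np

def build_sample(markers, grfm, mass, sex, height):
    """Flatten one trial into a predictor row and a response row.

    markers : (125, 8, 3) trimmed marker trajectories (x, y, z per marker)
    grfm    : (700, 6) trimmed force plate channels (Fx, Fy, Fz, Mx, My, Mz)
    """
    x_row = np.concatenate([[mass, sex, height], markers.ravel()])  # 3 + 3000 = 3003 features
    y_row = grfm.ravel()                                            # 6 x 700 response features
    return x_row, y_row

def reject_duplicate_predictors(X, y):
    """Drop rows with duplicate predictor data so the same motion capture input
    cannot refer to multiple pre and post-filtered force plate outputs."""
    _, keep = np.unique(X, axis=0, return_index=True)
    keep = np.sort(keep)
    return X[keep], y[keep]
```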
Ethics approval was based on the only personal information collected (that of mass, sex and height)
being de-identified and acknowledged that the new data science techniques being employed by the
current investigation are within the scope of the original studies and would have been included had
they existed at the time. In terms of intellectual property of the motion capture pipeline, only the first
step of labeling and gap-fill is required by this study; later analysis, including modeling, filtering and
classification by meta-data, is disregarded.
2.4.3 Training phase
We performed 10-fold cross-validation using a number of PLS methods to test whether our description
of movement and force was sufficient, the goal being a strong correlation coefficient. The data-set was
randomly shuffled and split into ten training-sets (353 samples = 80 %, illustrated for each of the eight
Figure 2.8: Sidestep left eight marker trajectories shown by MATLAB, for one training-
set (353 examples = 80 %). The physical location of the markers is given in Figure 2.3.
markers in Figure 2.8) and corresponding test-sets (88 samples = 20 %), then for each PLS method,
the predicted GRF/Ms were compared with the actual recorded force plate analog output. The use of
10-fold experiments decreased the risk of overfitting [Domingos, 2012]. A total of eleven PLS methods
were compared, three from PLS Toolbox v8.1.1 (EVRI Eigenvector Research, Inc., Manson, WA, USA),
four from the R-pls package [Mevik and Wehrens, 2007], and four from the R-spls Sparse PLS package
[Chun and Keleş, 2010]. PLS Toolbox runs directly in MATLAB while pls and spls functions were
executed using system calls from MATLAB to R [R Core Team, 2016]. Handshake protocols were
used between MATLAB and R to ensure success/fail conditions were exchanged. Within this mix of
three proprietary and open source PLS packages, different fit algorithms were investigated for their
prediction power, performance, and in the case of sparse implementations, variable selection, for the
given multivariate data-set where the number of predictor variables (3003) was much greater than
the number of training samples (353). Model training and prediction times were used to illuminate
differences between methods such as Kernel and Orthogonal Scores PLS which produce the same results.
Overall, PLS methods were selected for relevance to (a) perceived state of the art, (b) anticipated
benefits of including non-linear kernel methods to match non-linearity in the source data, and (c)
sparse methods to capitalize on the ranking importance of predictor input markers rather than the
traditional PLS approach of simply maximizing the covariance between predictors and response.
The primary tuning parameter for PLS is the number of hidden internal components, n_c. For every
sample in the test-set, the mean correlation coefficient r was calculated by comparing the six vectors
F_x, F_y, F_z, M_x, M_y and M_z of ground truth force plate data with that predicted by the specified
PLS method. A range of n_c from 1 to 81 (in steps of 5) was used to select n_c via the corresponding
maximum r by GRF/Ms for the subsequent 10-fold experiments. This range was arrived at empirically
using the root mean squared error of prediction (RMSEP) function in R-pls; use of the mean squared
prediction error (MSPE) in R-spls; and by noting the maximum value at which MATLAB exhausted
system memory. MSPE was also used to determine the sparsity tuning parameter eta of 0.9. Although
this granular approach increased the risk of missing the precise optimal value of n_c, meaningful results
were observed. The average n_c over all GRF/Ms for each PLS method gave a range of training times
from 00:00:10.534 (hh:mm:ss.sss) for R-pls Wide Kernel PLS to 00:18:28.552 for R-spls Orthogonal
Scores PLS (mean timing over ten iterations).
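The component sweep and 10-fold experiments can be sketched with scikit-learn's PLSRegression standing in for the MATLAB and R packages actually compared (an assumed re-implementation; the 1–81 range in steps of 5 and the per-waveform mean correlation follow the description above):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def mean_r(Y_true, Y_pred, n_frames=700, n_channels=6):
    """Mean Pearson r over the six GRF/M waveforms, averaged across test samples."""
    rs = []
    for t, p in zip(Y_true, Y_pred):
        t, p = t.reshape(n_frames, n_channels), p.reshape(n_frames, n_channels)
        rs.append(np.mean([np.corrcoef(t[:, c], p[:, c])[0, 1] for c in range(n_channels)]))
    return float(np.mean(rs))

def sweep_components(X, Y, components=range(1, 82, 5), n_folds=10, seed=0):
    """Return the cross-validated mean r for each candidate number of components n_c."""
    results = {}
    for n_c in components:
        fold_rs = []
        for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
            model = PLSRegression(n_components=n_c).fit(X[train], Y[train])
            fold_rs.append(mean_r(Y[test], model.predict(X[test])))
        results[n_c] = float(np.mean(fold_rs))
    return results
```

With 3,003 predictors and only 353 training samples per fold, this is the many-predictors/few-samples regime that motivated the choice of PLS in the first place.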
2.5 Results & discussion
A high-potential subset of the entire historical archive containing 20,066 c3d files was scanned, and
after quality assurance and automatic categorization of movement type, a total of 441 sidestep
left-directed motion trials were identified. The original data capture for these trials was carried out
between February 7, 2007 and November 12, 2013 using a range of Vicon proprietary software (from
Workstation v5.2 to Nexus v2.2).
The mean correlation coefficient r between the estimated and actual GRF/Ms was calculated using the
n_c derived by the earlier cost analysis, for which the prediction times ranged from 00:00:00.064
(hh:mm:ss.sss, mean timing over ten iterations) for EVRI-pls Direct Scores PLS to 00:00:00.403 for
R-pls Kernel PLS. The Mean ± SD between each of the ten folds, and prediction times, by PLS method
and by GRF/Ms are given in Table 2.1 (and illustrated by animation Online Resource 1), in which the
best values of r by GRF/Ms are shown in bold, as are r(F_mean) and r(M_mean) for the strongest
package overall.

Figure 2.9: R-spls SIMPLS performance against the data-set over the range of n_c from 1 to 81.
The highest correlation was seen in the vertical r(F_z), explained by the influence of mass in this axis
and the corresponding greater variation for PLS to associate with. R-spls SIMPLS was identified as the
strongest method overall, with average r of 0.9804 for GRFs and 0.9143 for GRMs. These high
correlation coefficients supported the hypothesis that force, mass and acceleration, as interpreted by the
abstract methods of marker-based motion capture, were sufficient to establish a strong relationship with
the analog force plate output. The combined Mean ± SD results r(F_mean) 0.9796 ± 0.0004 and
r(M_mean) 0.9113 ± 0.0036 illustrate the proximity of all the PLS methods investigated.
Table 2.2: Comparison of PLS to single hidden layer NN, r by GRF/Ms.
Method                       Movement  Samples  r(Fx)   r(Fy)   r(Fz)   r(Mx)   r(My)   r(Mz)   r(Fmean)  r(Mmean)
Oh et al. [2013], maximum r  Walking   48       0.9180  0.9850  0.9910  0.9870  0.8410  0.8680  0.9647    0.8987
R-spls SIMPLS, maximum r     Sidestep  441      0.9985  0.9981  0.9994  0.9762  0.9956  0.9877  0.9987    0.9865
R-spls SIMPLS, Table 2.1     Sidestep  441      0.9669  0.9847  0.9898  0.8807  0.9405  0.9216  0.9804    0.9143
Table 2.3: Relative influence (RI) of inputs on GRF/Ms output determined by R-spls SIMPLS.‡
Input  m     s     h     RMT1  RCAL  RLMAL  C7   LCAL  SACR  LMT1  LLMAL
RI     100%  100%  100%  65%   58%   57%    41%  39%   31%   24%   7%
‡ To score 100%, all three axes of a marker (x/y/z) must be selected by the PLS method in all motion capture frames.
Figure 2.9 illustrates the performance of R-spls SIMPLS for r(F_mean) and r(M_mean) over the range
of n_c from 1 to 81. Ahead of the n_c of 55 selected by the cost analysis for this PLS method, the high
r(F_mean) offsets the gradual decline in r(M_mean). At greater n_c this relationship breaks down as
r(M_mean) is increasingly affected by noise.
With R-spls SIMPLS outperforming other methods, the second hypothesis that a sparse PLS method
would prevail was also supported. The individual sample with the highest r(F_mean) was identified for
R-spls SIMPLS and Figure 2.10 shows the predictions for this sample by the SIMPLS implementation
of each of the three packages.
The mean R-spls SIMPLS results exceed the maximum correlation coefficients r for the six vectors
as reported by Oh et al. [2013] and shown in Table 2.2. Using PLS, rather than a single hidden layer
NN, with a data-set an order of magnitude greater (441 versus 48 samples), our study demonstrated
greater correlations for a more complex movement pattern (sidestep versus walking gait), and the
importance of data scale for NNs.
Figure 2.10: Ground truth GRF/Ms (blue ticks) and predicted (red), plotted as F_x, F_y, F_z,
M_x, M_y and M_z versus force plate frame for the same sample using each of the strongest
PLS methods by package: EVRI-pls SIMPLS, R-pls SIMPLS and R-spls SIMPLS. The
sample was selected for having the highest r(F_mean) with R-spls SIMPLS.
Sparse PLS methods by nature retain the input features useful for prediction, and therefore R-spls
SIMPLS can be used to illustrate the relative influence of markers and mass/sex/height. Using fold
one of the training-set/test-set split, the movement type is confirmed as sidestep left by virtue of the
greater emphasis on the markers of the right stance foot (RMT1, RCAL and RLMAL) at the expense
of those on the swing limb on the left (Table 2.3).
2.6 Conclusions
To the best of our knowledge, this is the first study which mines big data to predict GRF/Ms of a
complex movement pattern from marker-based motion capture (and using a reduced marker set). We
investigated the connection between PLS and the relationship of marker-based motion capture to
force plate output. Using historical movement and force data (441 sidestep samples), and eleven PLS
methods, we observed average correlation coefficients between ground truth and predicted of 0.9804 for
GRFs and 0.9143 for GRMs thus proving our first hypothesis. This strongest response was predicted
by the R-spls SIMPLS sparse method in support of our second hypothesis.
Our results using PLS methods against a complex sidestep movement pattern improved on those
reported using a single hidden layer NN and a simple gait pattern by Oh et al. [2013] illustrating
the relevance of big data. We intend to extend this work through greater intra and inter-laboratory
historical data, to analyze other movement patterns, validate in real-time with a dual data capture in
the laboratory, then ultimately test in the field of play with outdoor cameras and less invasive methods
of motion capture. The information provided by R-spls allows for fine-tuning of motion and force
temporal input parameters, and an investigation of the relative importance of markers and the discrete
features mass/sex/height. The success of PLS methods suggests this data is a candidate for deep
learning. This study begins to address the significant barrier to non-invasive collection of real-time
on-field kinetic data to inform athlete performance enhancement and injury prevention.
2.7 Acknowledgements
This project was partially supported by the ARC Discovery Grant DP160101458 and an Australian
Government Research Training Program Scholarship. We gratefully acknowledge NVIDIA for providing
a GPU through its Hardware Grant Program, and Eigenvector Research for the loan licence of
PLS Toolbox. Portions of data included in this study have been funded by NHMRC grant 400937.
2.8 References
J. Alderson and W. Johnson. The personalised ‘Digital Athlete’: An evolving vision for the capture,
modelling and simulation, of on-field athletic performance. In 34th International Conference on
Biomechanics in Sports, 2016.
AMTI. Biomechanics Platform Instruction Manual. Advanced Mechanical Technology Inc, Watertown,
MA, 2004. ISBN 1134298730.
L. Mu¨ndermann, S. Corazza, and T. P. Andriacchi. The evolution of methods for the capture of
human movement leading to markerless motion capture for biomechanical applications. Journal of
NeuroEngineering and Rehabilitation, 3(6):j1–11, 2006. ISSN 1743-0003.
S. E. Oh, A. Choi, and J. H. Mun. Prediction of ground reaction forces during gait based on kinematics
and a neural network model. Journal of Biomechanics, 46(14):2372–2380, 2013. ISSN 0021-9290.
R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical
Computing, Vienna, Austria, 2016. URL www.R-project.org.
Y. Shimokochi and S. J. Shultz. Mechanisms of non-contact anterior cruciate ligament injury. Journal
of Athletic Training, 43(4):396–408, 2008. ISSN 1062-6050.
T. Sim, H. Kwon, S. E. Oh, S.-B. Joo, A. Choi, H. M. Heo, K. Kim, and J. H. Mun. Predicting
complete ground reaction forces and moments during gait with insole plantar pressure information
using a wavelet neural network. Journal of Biomechanical Engineering, 137(9):091001:1–9, 2015.
ISSN 0148-0731.
H. Soo Park and J. Shi. Force from motion: Decoding physical sensation in a first person video.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages
3834–3842, 2016.
S. Ward. Anthropometric data and australian populations – do they fit? In HFESA 47th Annual
Conference 2011, 2011.
X. Wei and J. Chai. Videomocap: Modeling physically realistic human motion from monocular video
sequences. In ACM Transactions on Graphics (TOG), volume 29, page 42. ACM, 2010. ISBN
1450302106.
D. A. Winter. Biomechanics and motor control of human movement. John Wiley & Sons, Hoboken,
NJ, 2009. ISBN 0470398183.
Predicting athlete ground reaction forces and moments from spatio-temporal driven CNN models
3.1 Abstract
The accurate prediction of 3D ground reaction forces and moments (GRF/Ms) outside the laboratory
setting would represent a watershed for on-field biomechanical analysis. To extricate the biomechanist’s
reliance on ground embedded force plates, this study sought to improve on an earlier Partial Least
Squares (PLS) approach by using deep learning to predict 3D GRF/Ms from legacy marker-based
motion capture sidestepping trials, ranking multivariate regression of GRF/Ms from five convolutional
neural network (CNN) models. In a possible first for biomechanics, tactical feature engineering
techniques were used to compress space-time and facilitate fine-tuning from three pre-trained CNNs,
from which a model derivative of ImageNet called ‘CaffeNet’ achieved the strongest average correlation
to ground truth GRF/Ms r(F_mean) 0.9881 and r(M_mean) 0.9715 (rRMSE 4.31 and 7.04 %). These
results demonstrate the power of CNN models to facilitate real-world multivariate regression with
practical application for spatio-temporal sports analytics.
Keywords: Biomechanics · Supervised learning · Image motion analysis · Pattern analysis.
3.2 Introduction
Conventional methods to generate GRF/Ms data required for the accurate estimation of joint forces
and loads are confined to biomechanics laboratories far removed from the sporting field of play. This
has been an ongoing frustration for sports biomechanists, who must forego the ecological validity
of field-based data collections and manage the constraints of synthetic laboratory environments to
accurately model musculoskeletal loading parameters (Figure 3.1) [Chiari et al., 2005; Elliott and
Alderson, 2007].
In the laboratory, biomechanists commonly rely on gold standard marker-based passive retro-
reflective systems which utilize high-speed modified video cameras (up to 2,000 Hz) that project
strobes of infrared (IR) light onto small spherical retro-reflective markers attached to a participant’s
body [Chiari et al., 2005; Lloyd et al., 2000]. The typical error of such systems for dynamic sporting
movement is cited at < 2 mm [Merriaux et al., 2017]. Alongside motion data, it is common practice to
capture synchronized GRF/Ms (shear forces F_x and F_y, vertical force F_z, and their corresponding
rotation moments M_x, M_y and M_z) as measured by transducers located in the four corners of the force
platform (Advanced Mechanical Technology Inc., Watertown, MA, USA).
An increasingly popular approach to on-field data capture is to instrument the player with wearable
sensors, with early examples of basic in-shoe pressure sensors and instrumented force shoes [Burns et al.,
2017; Faber et al., 2010; Liu et al., 2010] evolving to current athletes wearing inertial sensors on multiple
body segments [Karatsidis et al., 2016; Pouliot-Laforte et al., 2014; Wundersitz et al., 2013]. However,
the linear modeling applied to the sensor outputs from current wearable devices tends to overfit a
particular movement (e.g. simple gait), or favor the vertical force (F_z) component only [Camomilla
et al., 2018].
From a sports injury perspective, rupture of the anterior cruciate ligament (ACL) can be one of the
most serious for the community and professional athlete [Dallalana et al., 2007], with up to a reported
80 % incidence rate being non-contact in nature, and 80 % of these events occurring during a sidestep
or single-leg landing maneuver [Donnelly et al., 2016; Shimokochi and Shultz, 2008]. Studies cite
elevated knee joint moments (KJMs) as a primary indicator of ACL injury risk [Besier et al., 2001;
Dempsey et al., 2007], and given that the most common approach to estimating KJMs is via inverse
dynamics methods that require GRF/Ms input, data of this type are an obvious candidate for further
investigation.

Figure 3.1: Laboratory motion and force plate data capture overlay. The eight labeled markers used
are shown artificially colored and enlarged, and visible through the body. The force plate is highlighted
blue, and the ground reaction forces and moments depicted.
Previous computer vision and data science researchers have attempted to estimate GRF/Ms,
however studies suffer from poor validation to ground truth data, are not sports related [Chen et al.,
2014; Soo Park and Shi, 2016; Wei and Chai, 2010], require a full body modeling protocol with multiple
inputs [Fluit et al., 2014], or as before, predict only unidirectional GRF components (e.g. vertical F_z)
[Yang et al., 2013]. Further, machine learning approaches in biomechanics appear to be limited to
simple models and small sample sizes [Choi et al., 2013; Oh et al., 2013; Richter et al., 2018; Sim et al.,
2015].
Prior to this study, our team developed a prototype using a class of supervised multivariate
regression, PLS [De Bie et al., 2005; Johnson et al., 2018], chosen because of its characteristic to
perform well with many predictor variables but limited samples. PLS works by projecting to a lower
dimensional space where the covariance between predictor and response variables is maximized, and
more recently, sparse PLS techniques have emerged which can better deal with multivariate responses
Figure 3.2: Study overall design.
Figure 3.3: Training-set eight marker trajectories. Sidestep left movement type (off right limb),
combined 1,884 predictor samples.
when some of the predictor variables are noisy [Chun and Keleş, 2010].
The success by Johnson et al. [2018] of the initial PLS study encourages further investigation
employing deep learning techniques, in particular CNN variants under frameworks such as Caffe
(Convolutional Architecture for Fast Feature Embedding), TensorFlow and Torch [Abadi et al., 2016;
Collobert et al., 2011; Jia et al., 2014]. The origins of many current deep learning approaches can be
traced to recent ImageNet Large Scale Visual Recognition Challenges, and hence their strength in
computer vision feature classification [Ichinose et al., 2017; Krizhevsky et al., 2012]. Despite the fact
motion capture is, by definition, spatio-temporal in nature, fine-tuning of existing image-based models
such as AlexNet, CaffeNet, and GoogLeNet provides an opportunity to leverage training at scale, only
needing to re-train selected higher-level components [Szegedy et al., 2014].
The contribution of this study is to investigate if pre-trained CNN models can be transferred
to improve 3D GRF/M predictions beyond what was achieved by PLS, using a subset of marker
trajectories extracted from legacy motion capture sidestepping trials. The accuracy and validity of the
approach was assessed by comparing mean alignment between GRF/Ms derived from ground truth
force plate data against those predicted by a series of new models, ranked by CNN method. It was
hypothesized that fine-tuning of a pre-trained CNN model would outperform previous PLS accuracy
results, particularly in the case of GRMs which demonstrate greater noise and non-linearity.
3.3 Methods
3.3.1 Design & setup
This study was made possible by access to the motion capture trial archive of The University of
Western Australia (UWA), in which data capture sessions were carried out in multiple laboratories over
a 17–year period from 2001–2017 (Figure 3.2). No new data capture was undertaken. Most research in
the laboratory is sport-based, therefore all data included in this study was drawn from a young healthy
athletic population aged between 16–35 years. Over this period, the number and type of infrared video
cameras have changed (12–20 Vicon type MCam2, MX13, and T40S cameras; Oxford Metrics, Oxford,
UK), as has the software to drive them (Workstation v4.6 to Nexus v2.5). However, selecting force
platforms by specific types, as defined by the ‘coordinate 3D’ c3d file format (Motion Lab Systems,
Baton Rouge, LA), meant the imported analog GRF/M data structures remained consistent. The force
platforms in these laboratories are subject to annual calibration testing [Collins et al., 2009], therefore
the source ground truth data was expected to contain a systematic error of the order GRF 1.4 % and
GRM 1.6 %. The customized full-body UWA marker set has also evolved during this time to range
from 24–67 markers. Pilot research revealed that a subset of only eight of these passive retro-reflective
markers (C7; sacrum SACR; plus hallux MT1, calcaneus CAL, and lateral ankle malleolus LMAL of
each foot) were required to maximize trial inclusion, relevance to the sidestep movement input, and
GRF/M data output (Figure 3.1) [Besier et al., 2003].
3.3.2 Data preparation
The total c3d file archive at the time of this study contained 433,186 trials. Under UWA ethics
approval RA/4/1/8415, processing of this data was conducted using MATLAB R2017b (MathWorks,
Natick, MA) in conjunction with the Biomechanical ToolKit 0.3 [Barre and Armand, 2014], Python 2.7
(Python Software Foundation, Beaverton, OR) and R 3.4.3 [R Core Team, 2016], running on Ubuntu
v16.04 (Canonical, London, UK). Desktop PC hardware included a Core i7 4GHz CPU, 32GB RAM,
and NVIDIA TITAN X GPU (NVIDIA Corporation, Santa Clara, CA).
The data preparation phase was designed to avoid gaps or errors from the motion capture marker
trajectories and force plate analog channels contaminating CNN model training [Collins et al., 2009;
Merriaux et al., 2017]. The eight nominated marker trajectories were required to be contiguous and
labeled (Figure 3.3 and the supplementary figure), and force plate channel data fully present, both for one complete
stance phase that was defined by foot-strike (FS) to toe-off (TO) of the right stance limb. FS and TO
were automatically determined in accordance with previously published biomechanical methods [Milner
and Paquette, 2015; O'Connor et al., 2007; Tirosh and Sparrow, 2003] (calibrated F_z rising above
20 N for 0.025 s, and subsequently falling below 10 N), which facilitated stance phase normalization
by cubic spline interpolation. The PLS prototype was used to inform specific inclusion time-bases.
Consequently, trajectory data was scaled from minus 66 % before FS to TO (125 samples) and force
plate data minus 16 % before FS to TO (700 samples). Despite its smaller time-base, the larger number
of force plate scaled samples reflected its higher relative data capture frequency compared with marker
trajectories. Experimentally derived filters were used to eject duplicate and invalid capture samples
(based upon marker trajectories z, x for relevant calcaneus markers; GRF/Ms F_z and M_z), regardless
of earlier filtering (raw or processed analog outputs), whether the movement was planned or unplanned,
FS technique, stepping crossover or regular. Movement trials passing all tests were admitted to the
data-set using predictor X and response y array formats typical of multivariate regression [Chun and
Keleş, 2010]. For information only, the final proportions of data-set participants were male 59.1 %,
female 40.9 %, height 1.77 ± 0.098 m, and mass 73.9 ± 14.5 kg.
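The stance-phase normalization described above might be implemented along the following lines (a SciPy sketch, not the study's MATLAB code; it assumes the trial contains enough frames before foot-strike to supply the lead-in):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def normalize_stance(signal, fs_frame, to_frame, lead_fraction, n_out):
    """Resample `signal` (frames x channels) onto `n_out` points spanning
    FS minus `lead_fraction` of stance through to TO using a cubic spline.

    Marker trajectories used lead_fraction=0.66 and n_out=125; force plate
    channels used lead_fraction=0.16 and n_out=700.
    """
    stance_len = to_frame - fs_frame
    start = fs_frame - lead_fraction * stance_len
    src_t = np.arange(signal.shape[0])
    out_t = np.linspace(start, to_frame, n_out)
    return CubicSpline(src_t, signal, axis=0)(out_t)
```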
3.3.3 Feature engineering & model training
A number of CNN regression models were trained from a random shuffle and split of the data-set using
a single fold (training:test). An 80:20 split was used for all methods except GoogLeNet for which a
90:10 ratio was required due to resource limits. Within time constraints, but to check for overfitting
[Domingos, 2012], one model was tested over 5-folds whereby each member of the data-set was a
participant of each of the five test-sets only once. The single movement type sidestep left was selected,
both for its task complexity (i.e. non-planar) but also due to its relevance to knee injury. The total
number of sidestep left trials from the archive which were successfully admitted to the data-set was
2,355, which translated to a training-set of 1,884 and a test-set of 471 samples. The large drop-off
from the original archive was dominated by incomplete marker labels (85.5 %).
A prototype had earlier been developed using a number of PLS variants, in which R-spls Sparse
SIMPLS was demonstrated as the strongest PLS method for this data and is included here for
comparison [Chun and Keleş, 2010; Johnson et al., 2018].
CNN network learning speed is increased when the data structures follow a funnel topology, with
more input predictor features than those of the output response. Due to its analog technology and
higher sensitivity requirements, force plate data is captured at a higher frequency than motion trajectory
data, which resulted in more output response features than predictor inputs. Therefore, a variety of
dimensionality reduction protocols were assessed including Principal Component Analysis (PCA), Fast
Fourier Transform (FFT), and PLS [Heideman et al., 1984; Pearson, 1901]. PCA with tuning threshold
t 0.999 was selected for its compression accuracy with this data type, which for example with CaffeNet
translated to 113 internal components and a ceiling of r(F_mean) 0.9997 and r(M_mean) 0.9979.
To be presented to the CNN for fine-tuning, the training-set input marker trajectories were converted
from their native spatio-temporal state to a corresponding set of static images [Du et al., 2015; Ke
et al., 2017] (Figure 3.4). The coordinates of each marker (x, y, z) were mapped to the image additive
color model (R, G, B), the eight markers to the image width, and the 125 samples to the image height.
CaffeNet for example requires input images to have dimensions 227 × 227 pixels, and the resultant
8 × 125 image was warped to suit the particular CNN model using cubic spline interpolation. By
freezing space and time, fine-tuning could be achieved from CNN models pre-trained on image data
at scale, which means the new models were able to learn despite the relatively low sample sizes and
without the complexity of Long Short-Term Memory (LSTM) layers. Fine-tuning networks were selected
based on their performance and proximity of original training to the current investigation, therefore
those trained on the ImageNet classification problem such as AlexNet, CaffeNet and GoogLeNet were
apparent candidates (e.g. CaffeNet having been trained on 1.3 million ImageNet images and 1,000
object classifications). Because of its deeper layers relative to available hardware resources, it was
necessary to throttle GoogLeNet via smaller batch sizes, and a 90:10 training:test split, processing
limitations which prohibited investigation of additional dense networks. Waveform output (rather than
the more common object classification) was achieved by replacing the last SoftMax loss with Euclidean
loss thereby turning the classifier into a multivariate regression network [MathWorks, 2018; Niu et al.,
2016]. All fine-tuning was carried out with the network output reduced via PCA. Additionally, for
CaffeNet, the training-set output GRF/M data was deinterlaced into its component waveforms (F_x,
F_y, F_z, M_x, M_y, M_z), and the training process executed six times.
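The space-time 'freezing' step and the output deinterlacing can be sketched as below (a NumPy/SciPy illustration; how coordinates are scaled into the 8-bit image range is an assumption not specified in the text):

```python
import numpy as np
from scipy.ndimage import zoom

def trial_to_image(markers, out_size=227):
    """Encode one trial (125 frames x 8 markers x 3 coords) as an RGB image.

    x, y, z map to the R, G, B channels, markers to image width and time samples
    to image height; the 125 x 8 array is warped to out_size x out_size with
    cubic interpolation to suit the pre-trained CNN input layer.
    """
    lo, hi = markers.min(), markers.max()
    img = (markers - lo) / (hi - lo + 1e-9) * 255.0            # assumed per-trial scaling
    factors = (out_size / img.shape[0], out_size / img.shape[1], 1)
    return zoom(img, factors, order=3).astype(np.uint8)         # order=3: cubic spline

def deinterlace_outputs(Y, n_frames=700, n_channels=6):
    """Split interlaced GRF/M rows into six per-channel arrays so that a separate
    model can be trained for each of Fx, Fy, Fz, Mx, My and Mz."""
    Y = Y.reshape(len(Y), n_frames, n_channels)
    return [Y[:, :, c] for c in range(n_channels)]
```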
3.4 Results & discussion
The validity of this approach was tested by comparing the ground truth 3D GRF/Ms recorded from the
force plate against those predicted by each CNN regression model. To allow comparisons with the
literature, two methods of agreement were used, that of correlation coefficient r and relative root mean
squared error rRMSE [Ren et al., 2008]. The mean of the average correlations (or errors) for F_x, F_y
and F_z (F_mean), and similarly for M_x, M_y and M_z (M_mean), allowed for ranking of CNN
methods using pairs of numbers (Table 3.1).
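The two agreement measures can be sketched as follows; the correlation is a plain Pearson r per waveform, while the rRMSE normalization is assumed here to be the RMSE expressed as a percentage of the mean peak-to-peak range of the measured and predicted waveforms (readers should defer to Ren et al. [2008] for the exact definition used):

```python
import numpy as np

def waveform_r(y_true, y_pred):
    """Pearson correlation coefficient between a measured and a predicted waveform."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])

def waveform_rrmse(y_true, y_pred):
    """Relative RMSE (%), normalized by the mean peak-to-peak range of the two
    waveforms (an assumed reading of Ren et al. [2008])."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    norm = 0.5 * (np.ptp(y_true) + np.ptp(y_pred))
    return float(100.0 * rmse / norm)
```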
The prototype using R-spls Sparse SIMPLS [Johnson et al., 2018] had achieved an average of
r(F_mean) 0.9772 and r(M_mean) 0.9213 (rRMSE 6.88 and 12.79 %). The closest maximum in the
literature using a comparable approach is that of Oh et al. [2013] who reported r(F_mean) 0.9647 and
r(M_mean) 0.8987 using a single hidden layer neural network and 48 samples, but for a simpler
movement (gait analysis).
Fine-tuning of the selected pre-trained CNN models was investigated, first by training from six output
vectors presented as interlaced waveforms and the single resultant array reduced by PCA. Here,
CaffeNet proved the most successful with r(F_mean) 0.9873 and r(M_mean) 0.9704 (shown bolded in
Table 3.1, rRMSE 4.60 and 7.73 %), with AlexNet in close proximity, r(F_mean) 0.9864 and
r(M_mean) 0.9685 (rRMSE 4.78, 8.11 %). This ranking was expected given CaffeNet was designed to
improve on AlexNet (by its reversal of the pooling and normalization layers). GoogLeNet came third in
this analysis, r(F_mean) 0.9733, r(M_mean) 0.9367 (rRMSE 7.77, 11.70 %); however, if unthrottled
with additional GPU resources, it may return improved results on those presented here. All the
fine-tune models achieved significant improvement on the ground reaction moments r(M_mean) when
compared with R-spls Sparse SIMPLS (CaffeNet +5.4 %), illustrating the characteristic of CNNs to
achieve stronger relationships in data with greater noise and non-linearity compared with PLS and
thus supporting the hypothesis.

Figure 3.5: Learning curve (training loss). CaffeNet, 6-way deinterlaced, six models F_x, F_y, F_z,
M_x, M_y, and M_z.
The CaffeNet network was further analyzed. Deinterlacing the six output GRF/M waveforms and
running PCA (with the same threshold t) and models for six iterations resulted in further improved
average correlations r(F_mean) 0.9880 and r(M_mean) 0.9715 (shown bolded, rRMSE 4.31 and
7.04 %). Each model was run for 5,000 iterations although the learning curve illustrates the majority of
training loss is removed by 2,500 iterations (Figure 3.5). The training-set, test-set mean, and test-set
sample with the corresponding maximum r(F_mean) are shown (Figure 3.6). Finally, the CaffeNet PCA
deinterlaced method was cross-validated over five k-folds. The similarity of the 5-fold results r(F_mean)
0.9849, r(M_mean) 0.9720 (rRMSE 4.65, 7.00 %) with the earlier single-fold experiment indicated
overfitting had been avoided.
Figure 3.6: Ground truth versus predicted response. Left, training-set force plate output F_x, F_y,
F_z, M_x, M_y, and M_z, 1,884 samples versus stance phase. Middle and right, test-set ground truth
GRF/Ms (blue, ticks) and predicted response (red), using CaffeNet, 6-way deinterlaced. Middle,
min/max range and mean predicted response; right, individual sample with the strongest r(F_mean).
A limitation of this approach is that despite data preparation best efforts, the method relied on the
integrity of the original motion capture and force plate calibration and data capture in order to train an
accurate CNN model. In all models investigated, PLS included, the strongest agreement with ground
truth was demonstrated in the vertical (F_z), explained by the influence of mass and the corresponding
greater deviation on which to associate.
3.5 Conclusions
Using the Caffe deep learning framework, multivariate regression of marker-based motion capture to
3D ground reaction forces and moments was compared via five CNN pre-trained models, each tested
with sidestep left, this movement type being the most prevalent in the investigation’s 17–year data
archive. Tactical feature engineering techniques were used to compress spatio-temporal data and thus
allow fine-tuning from three pre-trained CNNs (CaffeNet, AlexNet, and GoogLeNet). By leveraging
the big data of the ImageNet database, the authors were able to deliver ground-breaking results from
only 2,355 original data capture trials. This should encourage researchers who may think they do not
have enough data for deep learning. For this movement-to-GRF/M application, this magnitude of
trials could readily be captured with one sports team over a single training season.
The CaffeNet pre-trained CNN model, using dimensionality reduced and deinterlaced outputs,
achieved the strongest average correlation to ground truth GRF/Ms with r(F_mean) 0.9881 and r(M_mean)
0.9715 (rRMSE 4.31 and 7.04 %). The success of using CNN models to predict GRF/M output for
a dynamically complex task advances the project beyond the earlier limitations of PLS and already
offers the possibility of use-cases for biomechanical analysis where a force plate is undesirable (or
unavailable). Driving the CNN model using marker-based motion capture is simply a necessary first
step. Subsequent research using kinematics derived from wearable sensors (or even 2D video) will
finally unlock the potential to liberate biomechanists from laboratory constraints and make accurate
multidimensional in-game player analyses a reality.
3.6 Acknowledgments
This project was partially supported by the ARC Discovery Grant DP160101458 and an Australian
Government Research Training Program Scholarship. We gratefully acknowledge NVIDIA Corporation
for the GPU provided through its Hardware Grant Program, and Eigenvector Research for the loan
licence of PLS Toolbox. Portions of data included in this study have been funded by NHMRC grant
400937.
3.7 References
M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean,
and M. Devin. TensorFlow: Large-scale machine learning on heterogeneous distributed systems.
arXiv:1603.04467, 2016.
A. Barre and S. Armand. Biomechanical ToolKit: Open-source framework to visualize and process
biomechanical data. Computer Methods and Programs in Biomedicine, 114(1):80–87, 2014. ISSN
0169-2607.
T. F. Besier, D. G. Lloyd, J. L. Cochrane, and T. R. Ackland. External loading of the knee joint during
running and cutting maneuvers. Medicine & Science in Sports & Exercise, 33(7):1168–1175, 2001.
ISSN 0195-9131.
T. F. Besier, D. L. Sturnieks, J. A. Alderson, and D. G. Lloyd. Repeatability of gait data using a
functional hip joint centre and a mean helical knee axis. Journal of Biomechanics, 36(8):1159–1168,
2003. ISSN 0021-9290.
G. T. Burns, J. D. Zendler, and R. F. Zernicke. Wireless insoles to measure ground reaction forces:
Step-by-step validity in hopping, walking, and running. ISBS Proceedings Archive, 35(1):295–298,
2017.
V. Camomilla, E. Bergamini, S. Fantozzi, and G. Vannozzi. Trends supporting the in-field use of
wearable inertial sensors for sport performance evaluation: A systematic review. Sensors, 18(3):
873–922, 2018.
N. Chen, S. Urban, C. Osendorfer, J. Bayer, and P. Van Der Smagt. Estimating finger grip force from
an image of the hand using convolutional neural networks and gaussian processes. In 2014 IEEE
C. M. O’Connor, S. K. Thorpe, M. J. O’Malley, and C. L. Vaughan. Automatic detection of gait events
using kinematic data. Gait & Posture, 25(3):469–474, 2007. ISSN 0966-6362.
S. E. Oh, A. Choi, and J. H. Mun. Prediction of ground reaction forces during gait based on kinematics
and a neural network model. Journal of Biomechanics, 46(14):2372–2380, 2013. ISSN 0021-9290.
K. Pearson. On lines and planes of closest fit to systems of points in space. The London, Edinburgh,
and Dublin Philosophical Magazine and Journal of Science, 2(11):559–572, 1901. ISSN 1941-5982.
A. Pouliot-Laforte, L. Veilleux, F. Rauch, and M. Lemay. Validity of an accelerometer as a vertical
ground reaction force measuring device in healthy children and adolescents and in children
and adolescents with osteogenesis imperfecta type i. Journal of Musculoskeletal and Neuronal
Interactions, 14(2):155–161, 2014.
R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical
Computing, Vienna, Austria, 2016. URL www.R-project.org.
L. Ren, R. K. Jones, and D. Howard. Whole body inverse dynamics over a complete gait cycle based
only on measured kinematics. Journal of Biomechanics, 41(12):2750–2759, 2008. ISSN 0021-9290.
C. Richter, E. King, E. Falvey, and A. Franklyn-Miller. Supervised learning techniques and their ability
to classify a change of direction task strategy using kinematic and kinetic features. Journal of
Biomechanics, 66:1–9, 2018. ISSN 0021-9290.
Y. Shimokochi and S. J. Shultz. Mechanisms of non-contact anterior cruciate ligament injury. Journal
of Athletic Training, 43(4):396–408, 2008. ISSN 1062-6050.
T. Sim, H. Kwon, S. E. Oh, S.-B. Joo, A. Choi, H. M. Heo, K. Kim, and J. H. Mun. Predicting
complete ground reaction forces and moments during gait with insole plantar pressure information
using a wavelet neural network. Journal of Biomechanical Engineering, 137(9):091001:1–9, 2015.
ISSN 0148-0731.
H. Soo Park and J. Shi. Force from motion: Decoding physical sensation in a first person video.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages
3834–3842, 2016.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and
A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, pages 1–9, 2014.
O. Tirosh and W. Sparrow. Identifying heel contact and toe-off using forceplate thresholds with a
range of digital-filter cutoff frequencies. Journal of Applied Biomechanics, 19(2):178–184, 2003.
ISSN 1065-8483.
X. Wei and J. Chai. Videomocap: Modeling physically realistic human motion from monocular video
sequences. In ACM Transactions on Graphics (TOG), volume 29, page 42. ACM, 2010. ISBN
1450302106.
On-field player workload exposure and knee injury risk monitoring via deep learning
4.1 Abstract
In sports analytics, an understanding of accurate on-field 3D knee joint moments (KJM) could provide
an early warning system for athlete workload exposure and knee injury risk. Traditionally, this analysis
has relied on captive laboratory force plates and associated downstream biomechanical modeling, and
many researchers have approached the problem of portability by extrapolating models built on linear
statistics. An alternative approach would be to capitalize on recent advances in deep learning. In
this study, using the pre-trained CaffeNet convolutional neural network (CNN) model, multivariate
regression of marker-based motion capture to 3D KJM for three sports-related movement types were
compared. The strongest overall mean correlation to source modeling of 0.8895 was achieved over the
initial 33 % of stance phase for sidestepping. The accuracy of these mean predictions of the three
critical KJM associated with anterior cruciate ligament (ACL) injury demonstrate the feasibility of
on-field knee injury assessment using deep learning in lieu of laboratory embedded force plates. This
multidisciplinary research approach significantly advances machine representation of real-world physical
models with practical application for both community and professional level athletes.
Keywords: Biomechanics · Wearable sensors · Computer vision · Motion capture · Sports
analytics.
4.2 Introduction
It is currently not possible to accurately estimate joint loads on the sporting field as the process
to estimate these forces and moments generally requires high-fidelity multidimensional force plate
inputs and complex biomechanical modeling procedures, traditionally available only in biomechanics
laboratories. The sports biomechanist must instead trade the ecological validity of field-based data
captureandmanagethelimitationsoftheartificiallaboratoryenvironmenttoaccuratelymodelinternal
and external musculoskeletal loads [Chiari et al., 2005; Elliott and Alderson, 2007].
One of the most devastating sport injuries is the rupture of the anterior cruciate ligament (ACL)
which can be a season or career-ending event for the professional athlete [Dallalana et al., 2007].
Most ACL incidents in team sports such as basketball and hockey are non-contact events (51 to
80 %), with more than 80 % reportedly occurring during sidestepping or single-leg landing maneuvers
[Donnelly et al., 2016; Shimokochi and Shultz, 2008]. These statistics highlight that the ACL injury
mechanism should be generally regarded as an excessive load-related event, resulting specifically from
an individual’s neuromuscular strategy and associated motion, which by definition is preventable.
Alongside technique factors, studies have identified increased knee joint moments (KJM), specifically
high external knee abduction moments during unplanned sidestepping tasks, as a strong indicator of
ACL injury risk [Besier et al., 2001; Dempsey et al., 2007], and as such, real time accurate estimates of
ground reaction forces and moments (GRF/M) and knee joint loads could be used as an early warning
system to prevent on-field non-contact knee trauma.
The orthodox approach to calculating joint loads requires directly recording forces applied to
the athlete by either (1) modifying the laboratory environment to better mimic the specific sport
requirements, or (2) instrumenting the athlete directly with force transducers or other surrogate
wearable sensors to estimate these forces. In soccer, Jones et al. [2009] brought the field into the
laboratory by mounting turf on the surface of the force plate. Conversely, Yanai et al. [2017] employed
thereverse approach by embeddingforce platesdirectly into thebaseball pitching mound. Alternatively,
two main types of wearable sensor technologies have also been used to estimate GRF/M, first, in-shoe
pressure sensors [Burns et al., 2017; Liu et al., 2010], and more recently, body mounted inertial sensors
[Karatsidis et al., 2016; Pouliot-Laforte et al., 2014; Wundersitz et al., 2013]. Unfortunately, the
accuracy of these methods is restricted to simple gait motion (e.g. walking), whereby they estimate only
a single force component (primarily vertical F_z), or the sensor itself (location or added mass) adversely
affects performance [Burns et al., 2017; Karatsidis et al., 2016; Liu et al., 2010; Pouliot-Laforte et al.,
2014; Wundersitz et al., 2013]. The current generation of wearable sensors are limited by low-fidelity,
low resolution, or uni-dimensional data analysis (e.g. velocity) based on gross assumptions of linear
regression, which overfit to a simple movement pattern or participant cohort [Camomilla et al., 2018],
however, researchers have reported success deriving kinematics from these devices for movement
classification [Pham et al., 2018; Watari et al., 2018]. To improve on these methods, a number of
research teams have sought to leverage computer vision and data science techniques, and while initial
results appear promising, to date they lack validation to ground truth data, or relevance to specific
sporting related tasks [Chen et al., 2014; Soo Park and Shi, 2016; Wei and Chai, 2010]. For example,
Fluit et al. [2014] and Yang et al. [2013] derive GRF/M from motion capture. However, the former
requires the complexity of a full body musculoskeletal model, while the latter again predicts only F_z.
These examples of data science and machine learning solutions have relied on basic neural networks and
small sample sizes, which means perhaps the biggest untapped opportunity for biomechanics research
and practical application is to approach these problems by building on the success of more recent deep
learning techniques, which are better suited to exploit large amounts of historical biomechanical data
[Aljaaf et al., 2016; Choi et al., 2013; Oh et al., 2013; Richter et al., 2018; Sim et al., 2015].
In the biomechanics laboratory, retro-reflective motion capture is considered the gold standard in
marker-based motion analysis, utilizing high-speed video cameras (up to 2,000 Hz) with built-in strobes
of infrared (IR) light to illuminate small spherical retro-reflective passive markers attached to the body
[Chiari et al., 2005; Lloyd et al., 2000]. Often captured concurrently with the motion data are analog
outputs from force plates providing synchronized GRF/M. The three orthogonal force and moment
components recorded are: horizontal (shear) forces F_x and F_y, the vertical force F_z, and the three
rotation moments M_x, M_y and M_z about the corresponding force axes. Force plates can be affected by
a variety of systematic errors and installation must be carried out in such a manner as to minimize
vibration, and with regard to the frequency and absolute force magnitude of the captured movement.
This means that mounting the plate flush with a concrete floor pad during laboratory construction
produces optimal force recordings. However, this makes the force plate difficult to move or install
in outdoor sporting environments, and errors can also be propagated from failures in maintenance,
calibration, and operation [Collins et al., 2009; Psycharakis and Miller, 2006].
Motion and force plate data are conventionally used as inputs to derive the corresponding joint
forces and moments via inverse dynamics analysis [Manal and Buchanan, 2002; Yamaguchi and Zajac,
1989]. Over the past twenty years at The University of Western Australia (UWA), upper and lower
body biomechanical models have been developed in the scripting language BodyBuilder (Oxford
Metrics, Oxford, UK), with the aim of providing repeatable kinematic and kinetic data outputs (e.g.
KJM) between the three on campus biomechanics laboratories and external research partners [Besier
et al., 2003; Chin et al., 2010]. This paper aims to leverage this legacy UWA data collection by using
non-linear data science techniques to accurately predict KJM directly from motion capture alone.
Deep learning is a branch of machine learning based on the neural network model of the human
brain and which uses a number of hidden internal layers [LeCun et al., 2015]. Enabled by recent
increases in computing power, the technique has gained popularity as a powerful new tool in computer
vision and natural language processing, and one potentially well-suited to the 3D time-based data
structures found in biomechanics [Krizhevsky et al., 2012]. Caffe (Convolutional Architecture for Fast
Feature Embedding), maintained by Berkeley AI Research (BAIR), is one of a growing number of
open-source deep learning frameworks [Jia et al., 2014], alongside others including TensorFlow and
Torch [Abadi et al., 2016; Collobert et al., 2011]. Caffe originated from the ImageNet Large Scale Visual
Recognition Challenge (2012), is optimized for both CPU and GPU operation, and allows models to be
constructed from a library of modules, including convolution (convolutional neural network, CNN) and
pooling layers, which facilitates a variety of deep learning approaches [Ichinose et al., 2017; Krizhevsky
et al., 2012]. Training a deep learning model from scratch can require a large number of data samples,
processing power and time. Fine-tuning (transfer learning) is a technique commonly employed to take
an existing related model and only re-train certain higher level components, thus needing relatively less
data, time and computational resources. Pre-trained models such as AlexNet, CaffeNet and GoogLeNet
are selected according to their relevance to the higher-level data-set. CaffeNet, for example, was trained
on 1.3 million ImageNet images and 1,000 object classes [Szegedy et al., 2015].
Contrary to the traditionally isolated data capture methods in the sport sciences, what made this
investigation possible was access to the UWA data archive. Using this pooled historical data, the aim
of the study was to accurately predict extension/flexion, abduction/adduction and internal/external
rotation KJM from marker-based motion capture. Although this would negate the requirement for
embedded force plates and the inverse dynamics modeling process, it is still tied to the laboratory.
However, if successful this work would provide the necessary information to facilitate the next phase of
the project, which is to drive multivariate regression models (not just classification) from low-fidelity
Figure 4.1: Study overall design.
wearable sensor input, trained from high-fidelity laboratory data, for eventual outdoor use. It was
hypothesized that by mimicking the physics behind inverse dynamics the strongest correlations would
be achieved via the double-cascade technique from CaffeNet models which had been pre-trained in the
relationship between marker-based motion capture and GRF/M.
4.3 Methods
4.3.1 Design & setup
The data used in this new study was captured in multiple biomechanics laboratories over a 17–year
period from 2001–2017 (the overall design of the study is shown in Figure 4.1). The primary UWA
biomechanics laboratory was a controlled space which utilized lights and wall paint with reduced IR
properties. Over this period, the floor surface coverings have varied from short-pile wool carpet squares
to artificial turf laid on, and around, the force plate surface. While not directly tested, all selected
surface coverings during this time had negligible underlay or cushioning component in an effort to
minimize any dampening characteristics. It is important to note that the variety of surfaces were spread
amongst both training and test sets, enabling the prototype to proceed without surface covering
calibration. However, further calibration of the model would be required for future outdoor use with
variant surfaces (e.g. grass). Trials were collected from a young healthy athletic population
(male and female, amateur to professional), and with pathological or clinical cohort samples excluded.

Figure 4.2: Laboratory motion and force plate data capture overlay. The eight labeled markers used
are shown artificially colored and enlarged, and the force plate highlighted blue. An internal knee
adduction joint moment is depicted.
Sample trials were collected using 12–20 cameras (Vicon model types MCam2, MX13 and T40S;
Oxford Metrics, Oxford, UK) mounted on tripods and wall-brackets that were aimed at the desired
3D reconstruction volume. AMTI force plates were used to record the six vector GRF/M (Advanced
Mechanical Technology Inc., Watertown, MA). Equipment setup and calibration was conducted to
manufacturer specifications using the proprietary software at the time of collection (Workstation v4.6
to Nexus v2.5), with motion and analog data output in the public-domain ‘coordinate 3D’ c3d binary
file format (maintained by Motion Lab Systems, Baton Rouge, LA). Since the full UWA marker set
has evolved to comprise between 24–67 markers, a subset of eight passive retro-reflective markers
(cervical vertebra C7; sacrum SACR; plus bilateral hallux MT1, calcaneus CAL, and lateral ankle
malleolus LMAL) were selected for the present investigation to maximize trial inclusion, relevance to
the movement type, and downstream KJM output (Figure 4.2) [Besier et al., 2003; Dempsey et al.,
2007; Johnson et al., 2018].
4.3.2 Data preparation
Data mining the archive of 458,372 motion capture files was approved by the UWA human research
ethics committee (RA/4/1/8415) and no new data capture was undertaken. The data-set contributions
by sex, height, and mass were male 62.8 %, female 37.2 %, height 1.766 ± 0.097 m, and mass
74.5 ± 12.2 kg respectively; and the depersonalized nature of the samples meant participant-trial
membership (the number of participants in relation to the number of trials) was not considered. Data
processing was conducted using MATLAB R2017b (MathWorks, Natick, MA) in conjunction with
the Biomechanical ToolKit 0.3 [Barre and Armand, 2014], Python 2.7 (Python Software Foundation,
Beaverton, OR) and R 3.4.3 [R Core Team, 2016], running on Ubuntu v16.04 (Canonical, London, UK).
Hardware used was a desktop PC, Core i7 4GHz CPU, with 32GB RAM and NVIDIA multi-GPU
configuration (TITAN X & TITAN Xp; NVIDIA Corporation, Santa Clara, CA).
The preparation phase was necessary to ensure the integrity of the marker trajectories and force
plate analog channels, and to limit the impact of manual errors that may have propagated through
the original data capture pipeline [Merriaux et al., 2017; Psycharakis and Miller, 2006]. First, the
data mining relied on trials with the eight required marker trajectories being contiguous and labeled,
together with associated KJM, and force plate channel data present throughout the entire stance phase.
To validate the model by comparing calculated and predicted KJM over a standardized stance phase,
it was necessary for both the training and test sets to include gait event information defined from a
common source, which in this instance was the force plate. This requirement would not extend to
real-world use, where it is envisaged that event data would be predicted from a continuous stream
of input kinematics from wearable sensors located on the feet [Falbriard et al., 2018]. For this study,
foot-strike (FS) was automatically detected by foot position being within the force plate corners, and
if calibrated F_z was continuously above the threshold (20 N) for a defined period (0.025 s); toe-off
(TO) by F_z falling below a second threshold (10 N) [Milner and Paquette, 2015; O'Connor et al.,
2007; Tirosh and Sparrow, 2003]. As the eight chosen markers did not include any on the shank, the
traditional kinematic definitions of heel strike and forefoot strike were unavailable. A custom approach
was adopted to demonstrate the spread and variety of different foot segment orientations at foot-strike.
This ‘sagittal foot orientation’ was reported by comparing the vertical height of forefoot and primary
rearfoot markers in the phase defined by FS and the mean 25–30 % of stance (within ± 1 % tolerance),
to categorize general foot orientation at contact as heel down (HD), flat (FL), or toe down (TD)
(Table 4.1). The determination of sagittal foot orientation was used to illustrate the variety of running
patterns present in the data, and no judgment of model performance according to foot orientation was
made.
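The sagittal foot orientation labelling can be sketched as below; the comparison of forefoot (MT1) and rearfoot (CAL) marker heights between foot-strike and the mean of 25–30 % of stance follows the text, while the height tolerance and the exact classification rule are illustrative assumptions:

```python
import numpy as np

def sagittal_foot_orientation(mt1_z, cal_z, fs_idx, stance_len, tol_mm=10.0):
    """Classify foot orientation at contact as heel down 'HD', flat 'FL' or toe down 'TD'.

    Heights of the forefoot (MT1) and rearfoot (CAL) markers at foot-strike are
    compared with their own means over 25-30 % of stance; tol_mm is an
    illustrative threshold, not a value given in the text.
    """
    flat = slice(fs_idx + int(0.25 * stance_len), fs_idx + int(0.30 * stance_len) + 1)
    mt1_lift = mt1_z[fs_idx] - np.mean(mt1_z[flat])   # forefoot elevation at contact
    cal_lift = cal_z[fs_idx] - np.mean(cal_z[flat])   # rearfoot elevation at contact
    if mt1_lift - cal_lift > tol_mm:
        return "HD"   # forefoot still raised: heel contacts first
    if cal_lift - mt1_lift > tol_mm:
        return "TD"   # rearfoot still raised: toe contacts first
    return "FL"
```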
Cubic spline interpolation was used to time-normalize the marker trajectories from FS minus 66 %
of stance (total 125 samples, typically 0.5 sec at 250 Hz) and force plate channels from FS minus 16 %
of stance (700 samples, correspondingly 0.35 sec at 2,000 Hz), both until TO. Normalization allowed
the model to be agnostic to the original motion capture and force plate data capture frequencies.
Furthermore, our earlier study [Johnson et al., 2018] demonstrated the importance of an additional
lead-in period for marker trajectories, and the use of different sample sizes to reflect the relative ratio
of the original capture frequencies. Duplicate and invalid capture samples were automatically removed,
with no regard for earlier filtering, and in the case of sidestepping, whether it was performed in a
planned or unplanned manner. If the motion capture and force plate data survived these hygiene tests
it was reassembled into the data-set arrays X (predictor samples × input features) and y (response
samples × output features) typical of the format used by multivariate regression [Chun and Keleş,
2010].
Kinematic templates were used to categorize the three movement types, selected for their progressive
complexity and relevance in sports: walking, running, and sidestepping (opposite foot to direction of
travel); the threshold for walking to running used was 2.16 m/s [Segers et al., 2007]. A small proportion
of sidestepping trials used crossover technique and these were removed to avoid contaminating the
model. KJM were considered only for the stance limb, and the majority of participants reported being
right leg dominant (Table 4.1).
4.3.3 Feature engineering & model training
The Caffe deep learning framework was used to fine-tune the native CaffeNet model (Figure 4.3).
However, to be presented to CaffeNet and benefit from the model’s pre-training on the large-scale
ImageNet database, the study training-set input marker trajectories needed to be converted from
their native spatio-temporal state to a set of static color images [Du et al., 2015; Ke et al., 2017]
(Figures 4.4 & 4.5). This was achieved by mapping the coordinates of each marker (x, y, z) to the
image additive color model (R, G, B), the eight markers to the image width, and the 125 samples to
the image height. The resultant 8 × 125 pixel image was warped to 227 × 227 pixels to suit CaffeNet
using cubic spline interpolation (input: predictor samples × 227 × 227).
A common technique in CNN architecture to improve performance is to minimize the number of
output features. The training-set output KJM data was deinterlaced into its six component waveforms
Table 4.1: Data-set characteristics, sagittal foot orientation, by movement type and stance limb.
Movement type  Stance limb  Data-set samples  Heel down (HD)  Flat (FL)    Toe down (TD)
Walk           L            570               570 (100.0%)    0 (0.0%)     0 (0.0%)
Walk           R            646               646 (100.0%)    0 (0.0%)     0 (0.0%)
Run            L            233               209 (89.7%)     23 (9.9%)    1 (0.4%)
Run            R            884               811 (91.7%)     71 (8.0%)    2 (0.2%)
Sidestep       L            566               457 (80.7%)     88 (15.5%)   21 (3.7%)
Sidestep       R            1527              1162 (76.1%)    325 (21.3%)  40 (2.6%)