Virginia Tech | Chapter 4 - Site Selection Criteria
Figure 4.6: UCG application on multiple seams
4.3.3 Depth of Coal Seam
UCG trials have been conducted at varying depths; for example, Russian and U.S.
experiments ranged from 30 m to 350 m deep, whereas in Europe trials were carried out
at much greater depths (600-1200 m) [Shafirovich and Varma 2009]. However, the practical
depth at which UCG can be applied effectively is a function of hydrostatic pressure in the
reactor cavity, potential for subsidence and depth of potable aquifers in the region. Burton
et al. recommend 12 m as the minimum required depth of a coal seam, with a preference
for seams deeper than 150 m for better control of UCG operations [Burton, Friedmann et al.
2006].
In shallower seams, the risk of gas leakage and the likelihood of intersecting potable
aquifers increase, and the burn cavity gives rise to potential for collapse and
subsidence. On the other hand, the decreased hydrostatic pressure at shallower depths
favors water inflow into the cavity, reducing the chance of water contamination by outflow
of contaminants under increased cavity pressures. Cavity pressure control at shallow
depths is therefore essential, because even a transient increase in burn cavity pressure
will force gas leakage against the minimal hydrostatic pressure [Couch 2009].
On the other hand, deeper coal seams avoid the problem of contaminating potable aquifers,
as most aquifers at such depths are already saline and not classified as potable [Couch
2009]. Secondly, due to the increased hydrostatic pressure, maintaining a steady balance
between strata and burn cavity pressures is relatively easy, which gives increased process
control. However, the increased pressure at greater depths tends to decrease the
permeability of the coal, making the linkage between injection and production wells
difficult [Sury, Kirton et al. 2004].
Drilling at greater depths may increase operating costs, but recent developments in
drilling technology have made it possible to operate at greater depths without technical
or operational difficulties. New technologies and design solutions have promoted
development of deep coal seams, increased control over rock pressures, reduced well
requirements (thus decreasing drilling costs) and application of UCG in abandoned mines
[Zorya, JSC Gazprom et al. 2009]. In addition to the increases in hydrostatic pressure
and geo-mechanical stresses, temperature also increases at greater depths, but this
increase has no known severe impacts on UCG operations.
The depth of different coal seams in the central Appalachian region ranges from outcrops to
more than 800 m. The Pocahontas No. 3 coal seam ranges in depth from outcrop along the
northeastern edge of the basin to about 762 m and the depth of Pocahontas No. 4 is similar
to that of No. 3, with No. 4 overlying the No. 3 seam by roughly 9 to 30 m (30 to 100 ft.)
[EPA. 2004]. The Fire Creek/Lower Horsepen varies in depth from 152 m (500 ft.) to a
maximum of 457 m (1,500 ft.), whereas the Beckley/War Creek coalbed reaches a maximum
depth of 610 m (2,000 ft.). The Sewell/Lower Seaboard coalbed is fairly shallow, with less
than 150 m (~500 ft.) of cover over almost half of its area, and the depth of the
Iaeger/Jawbone coal seam is similarly less than 150 m (~500 ft.) [EPA. 2004].
4.3.4 Seam Inclination
Seam inclination or dip is not a restraining factor for UCG site selection criteria [Shafirovich
and Varma 2009]. In Russian and some U.S. trials, steeply dipping seams (> 50°) were used
successfully for UCG. UCG is preferable for exploiting steeply dipping coal seams because
these seams are usually considered less economical and more technically difficult for
conventional mining techniques than horizontal seams [Lamb 1977]. Secondly, gasifying a
steeply dipping coal seam is relatively simple and economically more attractive than
mining the coal [Lamb 1977]. The drilling requirements for steeply dipping seams are less
than those for horizontal beds [Bialecka 2009].
Burton et al. and Sury et al. prefer shallowly dipping seams to steeply dipping seams
because of difficulties in process control and associated problems such as chimney
formation and damage to the down-dip production well as a result of strata movements
[Sury, Kirton et al. 2004; Burton, Friedmann et al. 2006].
The regional dip of coal bearing strata in the Central Appalachian Basin is to the
northwest at a rate of 75 ft. per mile. Generally, the dips of coal seams in the
Pocahontas and Lee formations are gentle, usually ranging from 1.2 ft. per 1000 ft. to
1.4 ft. per 1000 ft. [SECARB. 2007].
4.3.5 Seam Structure
A hard rock overlying a coal seam may decrease the risk of subsidence and caving, though
it may pose problems in the drilling of wells. An impermeable rock cover may provide a
shield preventing gas losses but may limit the water supply [Couch 2009]. The presence of
joints, faults, cleats and slips in the target seam, other seams or the confining strata
may provide potential gas leakage paths [Sury, Kirton et al. 2004]. Similarly, permeable
rock matrices, mining/caving induced features, fissures and abandoned boreholes may
provide paths for fluid inflow and outflow [Sury, Kirton et al. 2004] and can result in
cavity flooding or groundwater pollution.
Similarly, if there is a series of seams at different depths in the area, it is important
to classify them according to their potential for mining, gasification or methane
extraction. The sequencing in the use of different technologies is very important in this
case [Couch 2009]. If the topmost seam is mined first, the lower seams usually remain
undisturbed and can be used for future exploitation; however, it is very common that only
a few of the seams are economically minable [Couch 2009]. Mining the lower seams first
may result in strata relaxation, producing and/or expanding existing fissures and cracks,
thus providing fluid flow paths.
The major formations in the Central Appalachian Basin are the Pocahontas, New River/Lee
and Kanawha/Norton. The Pocahontas formation consists of massively bedded, medium
grained subgraywacke, which can be locally conglomeratic [EPA. 2004]. Gray siltstones
and shales are interbedded with sandstone, and coal seams usually make up about two
percent of the formation thickness. The New River/Lee formation overlies the Pocahontas
formation conformably in northeastern portions but has an unconformity in the east
central portion [EPA. 2004]. The coalbeds in this formation thin and pinch out towards
the south and west. The Kanawha/Norton formation is composed of irregular, thin to
massively-bedded subgraywackes interbedded with shale and contains over 40 multi-
bedded coalbeds. The Central Appalachian Basin is characterized structurally by broad,
open, northeast-southwest trending folds that typically dip less than five degrees; the
faults and folds associated with this 25 mile-wide and 125 mile-long structural feature
are more intense in some locations, as evidenced by overturned beds and brecciated zones.
Two dominant joint patterns run within the coals [EPA. 2004].
4.3.6 Permeability and Porosity
Permeability of coal plays an important role in the linking of injection and production
wells. High-rank coals and deep-seated seams generally have low permeability [Couch
2009] and exhibit difficulty in flow path linkage. Permeability also affects the burn
cavity width and gasifier growth, and the approach of a low permeability zone at the
production well indicates the possible end of the gasifier's life [Creedy, Garner et al.
2001].
Ghose and Paul prefer the development of in-seam channels for gasification over long
distances because, in their opinion, the natural permeability of coal seams is not
sufficient to move the gases to and from the reaction zones [Ghose and Paul 2007].
Ray et al. propose the
use of hydraulic fracturing to enhance the natural permeability of coal [Ray, Panigrahi et al.
2010].
The permeability of the overlying strata is an important consideration. Permeable rocks
will allow water to flow into the cavity; they will also allow reaction products to flow
into the strata, which can result in pollution or contaminant movement at some distance
from the reaction zone [Creedy, Garner et al. 2001]. However, Sury et al. suggest that
due to short-
lived gas escapes, the effect of rock matrix permeability of adjacent strata on the gas
leakage is not important except where there are large joints or fissures or very high matrix
permeability [Sury, Kirton et al. 2004].
For the Central Appalachian Basin, Hunt and Steels suggest a minimum permeability of 0.1
to 0.5 md with the Pocahontas No. 3 coalbed having a high average permeability of 5 to 27
md [Lyons 2003]. Hunt and Steels also state that coalbeds in the Appalachian Basin are
underpressured due to their geological history, extensive coal mining and the many oil and
gas wells in the vicinity [Lyons 2003]. According to Tony Scales (Virginia Department of Mines,
Minerals and Energy), the most permeable layers in the geologic subsurface of Virginia are
coal seams [EPA. 2004]. SECARB reports suggest the following average permeability values
for different fields in the region: Frying Pan field, 11 md; Sourwood field, 10 md; Lick
Creek field, 7.5 md; Buck Knob, 10 md; and South Oakwood field, 7.5 md [SECARB. 2011]. As
coalbed methane production matures in these fields over tens of years, the permeability of
the coal will increase.
4.3.7 Moisture Contents
The amount of water present in the seam affects the UCG process in two ways. Firstly, an
excessive amount of water makes the ignition of seams difficult, and inrushes of water
through fissures, faults and joints occasionally quench the fire. On the other hand, the
presence of a certain amount of water is helpful once the reaction has started, as it
supports the water-gas reaction [Thompson, Mann et al. 1976]. In the reduction zone of the
gasification channel, the major reactions take place when H2O(g) and CO2 react with the
incandescent coal seam and are reduced to H2 and CO at high temperature [Yang, Zhang et
al. 2008]:

C + CO2 → 2CO (-162.4 MJ/kmol)
C + H2O(g) → CO + H2 (-131.5 MJ/kmol)
Thus, the presence of water is beneficial to the reaction and can increase the amount of
hydrogen in the syngas. Secondly, the water present in the seam acts as an efficient gas
seal [Thompson, Mann et al. 1976], helping to reduce the escape of reaction gases and
contaminants out of the burn cavity.
4.3.8 Hydrogeology and Groundwater Issues
The problems at Hoe Creek and in Williams County, Wyoming, USA have highlighted the
importance of site characterization, especially in relation to the presence of
groundwater resources. These trials not only contaminated the local potable aquifer, they
also created a great hindrance to future UCG research in the U.S. The DOE sponsored these
projects, in which the migration of organic compounds (e.g., benzene, toluene,
ethylbenzene, and xylene) contaminated a coal seam aquifer located at a depth of about
55 m below the surface [Burton, Friedmann et al. 2006].
The knowledge gained from the trials at Hoe Creek is valuable and suggests that sites
surrounded by potable aquifers should be downgraded in UCG site rankings. The second
important lesson is to maintain cavity pressures at a level lower than the hydrostatic
pressure in order to promote a controlled inrush of water into the cavity and to avoid
outflow of contaminants or reaction gases. This requires maintaining the hydrostatic
gradient towards the cavity by pumping water from the cavities to facilitate groundwater
inflow towards the gasifier chamber [Sury, White et al. 2004].
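The pressure rule described above can be illustrated with a simple hydrostatic calculation; the fresh-water density and the 10% operating margin below are illustrative assumptions, not values taken from the cited sources.

```python
# Hydrostatic pressure at seam depth, and a cavity operating limit kept
# below it so that groundwater flows inward (illustrative 10% margin).
RHO_WATER = 1000.0  # kg/m^3, assumed fresh water
G = 9.81            # m/s^2

def cavity_pressure_limit(depth_m, margin=0.10):
    """Return (hydrostatic pressure, suggested cavity ceiling) in MPa."""
    p_hydro = RHO_WATER * G * depth_m / 1e6  # Pa -> MPa
    return p_hydro, (1.0 - margin) * p_hydro

p_hydro, p_max = cavity_pressure_limit(200.0)
# At a 200 m deep seam: hydrostatic ~1.96 MPa, cavity kept below ~1.77 MPa
```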
Hydrogeological mapping of the area is very important to avoid such incidents; it should
include detailed information about lithology, fractures (faults, joints, fissures, etc.),
folds, and aquifer extent and thickness [Creedy, Garner et al. 2001].
The potable water resources in the Central Appalachian region are usually at shallow
depths; deeper aquifers are mostly saline. As reported by the EPA, water wells in the
Pennsylvanian aquifer in the Kentucky portion of the basin are typically 75 to 100 feet
deep and
produce one to five gallons of water per minute. In the Virginia region, the primary
aquifer is the Appalachian Plateau aquifer, with wells typically 50 to 200 feet deep
producing one to 50 gallons of water per minute [EPA. 2004]. In the West Virginia region,
the primary aquifer is the Lower Pennsylvanian aquifer, with wells commonly 50 to 300 feet
deep producing one to 100 gallons per minute [EPA. 2004]. Produced water volumes
from coal seams in the basin are relatively small, typically several barrels or less per day
per well, with total dissolved solids (TDS) greater than 30,000 milligrams per liter (mg/L).
In Virginia, the depth to the base of fresh water is approximately 300 feet, and in West
Virginia it is estimated to be between 280 and 730 feet [EPA. 2004]. Thus, deep coal seams
will typically avoid the potable aquifers and may not pose threats to drinking water
supplies.
4.3.9 Quantity of Resources
The quantity of resources is an economic and profitability criterion that is essential for
funding decisions. Three types of resources can be considered for UCG development:
developed or reserve deposits, undeveloped or prospective deposits, and deposits in
abandoned or closed coal mines [Bialecka 2009]. For commercial development of UCG, a
resource of sufficient quantity is required to offset expenses and ensure profitability
and a long economic life for the project. Generally, the utilization of the syngas
determines how much coal is required for a specific project. For example, smaller power
generation units serving local needs may be fed by a smaller resource; on the other
hand, larger industrial units like chemical plants require large amounts of coal. Shafirovich
et al. state that to feed a 300 MW UCG-based combined cycle power plant with an efficiency
of 50% and running for 20 years, 75.6×10⁹ Nm³ of syngas with a heating value of 5.0
MJ/m³ is required. This requires gasification of a coal deposit of about 33×10⁶ metric
tons [Shafirovich, Varma et al. 2009].
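The figures quoted above can be checked with a back-of-envelope calculation; the sketch below assumes continuous full-load operation (a capacity factor of 1), which is an idealization.

```python
# Verify the syngas volume implied by a 300 MW(e) plant at 50% efficiency
# running 20 years on syngas with a heating value of 5.0 MJ/Nm^3.
SECONDS_PER_YEAR = 365 * 24 * 3600

power_out_mw = 300.0   # electrical output, MW
efficiency = 0.50      # combined-cycle efficiency
years = 20
heating_value = 5.0    # MJ per normal cubic metre of syngas

thermal_input_mj = (power_out_mw / efficiency) * SECONDS_PER_YEAR * years
syngas_nm3 = thermal_input_mj / heating_value

print(f"{syngas_nm3:.3g} Nm^3")  # ~7.57e+10, matching the quoted 75.6e9
```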
The Central Appalachian region has produced more than 17 billion tons of coal, with peak
production in the 1990s at approximately 275 million tons/year, which has since dropped to
almost 240 million tons/year [Mark 2006]. The recoverable reserves estimated by the EIA on
a sulfur content basis are approximately 27,000 million tons [Milici and Dennen 2009].
This has been one of the most productive coalfields in the USA, and there are sufficient
reserves of unminable coal for development of large-scale UCG operations.
4.3.10 Availability of Infrastructure
Another major aspect of site selection is the availability of infrastructure, including
roads, electricity, utility lines and gas transmission lines. An ideal location is one
that is close to a major transportation/road network, has existing gas pipelines in close
vicinity, and has land available for commercial/industrial units or power plants that feed
upon the product gases.
The Central Appalachian Basin is one of the most productive and mature coalfields in the
U.S. It is a center of coal and coalbed methane production, and numerous mines and CBM
production wells are located throughout the region. Due to the maturity of the region,
infrastructure is available in this area.
4.3.11 Presence of Coalbed Methane
Although the effects of the presence of coalbed methane on UCG are not yet extensively
known and very little literature is available in this regard, the general idea is that if
methane is not present in a commercially recoverable quantity in the seam, its presence
may enhance the heating value of the product gas and may aid the burning process. However,
if commercially recoverable quantities of coalbed methane are present, then there is
dispute over the sequence of energy recovery from coal seams [Couch 2009]. The in-seam
drilling techniques established to facilitate methane recovery can be helpful in UCG
applications, but it is very important to avoid extracting CBM in such a way that
subsequent application of UCG becomes practically impossible [Couch 2009]. Although
coalbed methane is a more mature technology, especially in Australia and the U.S., it
recovers much less energy [Couch 2009]. As stated by Carbon Energy, UCG recovers more than
20 times the energy recovered by coalbed methane drainage methods [Meany and Maynard
2009]. However, further research is needed to establish the synergies between UCG and
coalbed methane.
The Central Appalachian Basin is one of the most important CBM fields in the U.S.
Production of CBM started in 1988 in the Nora Field in Dickenson County, Virginia, which is
the most productive field, followed by the Oakwood Field in Buchanan County, Virginia.
Since then more than 4600 wells have been drilled in southwest Virginia [Ripepi 2009]. At
the end of 2006, estimated production from the Central Appalachian Basin was about 777 Bcf,
with Virginia producing 90% of the CBM [Ripepi 2009]. These production wells are usually
hydraulically fractured to enhance CBM recovery; a typical fracture extends 300 to 600
feet from the well in either direction, but can range from 150 feet
to 1,500 feet, with fracture widths ranging from one-eighth inch to almost one and a half
inches [EPA. 2004]. Thus, a thorough research study is needed to establish the synergies
between CBM development and subsequent application of UCG.
4.4 Chapter Conclusions
Table 4.1 shows the parameters, in order of importance, for proper UCG site selection.
Table 4.1: Site Selection Criteria
Parameter: Requirement
Seam thickness: Preferably >1 m, ideally 5-10 m
Seam depth: >150 m, ideally >200 m
Coal rank/type: Sub-bituminous or lower rank, ideally non-coking, non-swelling coals
Seam dip/inclination: Any, but steeper is preferred as it may be technically difficult to mine through conventional methods
Moisture contents: Controlled inflow of water or high moisture contents are desirable, especially after initiation of burning
Groundwater: Avoid potable aquifers and large water bodies
Permeability and porosity: The more permeable the seam, the easier it is to link the injection and production wells; the more permeable the strata, the greater the chance of gas leakage and contaminant movement
Seam/strata structure: Avoid excessively fractured, faulted and broken rocks as they may cause water inrush or product gas and contaminant leakage
Coal quantity: Dependent upon gas utilization and profitability
Infrastructure availability: Roads, electricity and power transmission lines
Presence of CBM: Depends upon the economics or commercial value of the CBM deposit and its interoperability with UCG
The Central Appalachian Basin is composed of several coal seams ranging in thickness from
less than one meter to about two to three meters in places. These coal seams are at a
varying depth of a few meters to more than 800 m. The average seam inclination is
normally flat rather than steep. Potable aquifers are found at depths varying from 25 m to
more than 100 m in some places, but overall the drinking water is at shallower depths. The
rank of the coal is generally bituminous, available in appreciable quantity, making the
basin a potential site for UCG targeting any major thick seam or composite of seams with
an average thickness of more than 2 m.
This basin has had significant production of CBM with more wells being drilled regularly.
The coal seams and strata have been subjected to hydraulic fracturing to enhance CBM
recovery. This infrastructure and network of wells can be an important economic benefit
for UCG if they can be utilized as injection and/or production wells for gasification. The
increased fracturing can be helpful in linking the injection and production wells but can
also pose problems of cavity control and contaminant migration. This gap advocates a
strong need for a research study to establish the synergy between CBM and UCG operations
in the basin.
Virginia Tech | Chapter 5 - GIS Model for Selection of Suitable Sites for UCG
5.1 Introduction
Proper site selection is one of the most important factors in the success or failure of
underground coal gasification (UCG) projects. A properly selected site helps in realizing
the full environmental and economic potential of this technology [Sury, Kirton et al.
2004], whereas poor site selection may result in project failure or serious environmental
consequences.
This is evident from the pilots at Hoe Creek and in Carbon County, Wyoming, where poor
site selection is cited as one of the major factors resulting in the contamination of
potable groundwater resources and nearby aquifers [Clean Air Task Force 2009]. This
highlights the significance of the site selection stage for the UCG project.
This chapter describes the development of a GIS model that assists in the selection of
suitable sites for UCG based on the criteria listed in the previous chapter. The model uses
powerful features of two GIS software packages, ESRI's ArcGIS [ESRI. 2012] and Clark Labs'
IDRISI [Clark Labs 2012], and develops a general process flow chart applicable to any
site. In this
chapter, all the steps involved in the development and use of this model are explained in
detail. The model is applied to the Frying Pan, Sourwood, Lick Creek and South Oakwood
fields in Virginia; however, this model is not site specific and can be applied to any site
provided the input data is available for that site. This chapter also describes the data
required for this model, different data sources, preparation of data in the required formats
and creation of data layers for use in the software.
5.2 Data Required
This model uses the site selection criteria established in the previous chapter, and most
of the data required for the model is based on those criteria. However, the model is
flexible enough to let users utilize data that is not part of those criteria, provided it
is in the right format; for example, cost of land, labor availability, regional
population, vicinity of schools, colleges, recreational facilities and hospitals, other
utilities, and climatic data of the area.
5.3 Data Format
For use in the GIS, the initial data was mostly in vector-format shapefiles of polygons,
lines or points. For example, the boundaries of coalfields were polygon shapefiles. The
forests, coal isopachs and coal quality parameters were also vector data in the form of
shapefiles. The coal quality data was added as attributes to the attribute table of the
field boundaries shapefile and later displayed as data layers in ArcGIS and IDRISI. The
roads, railway lines, power lines and streams were line shapefiles, whereas elevation and
land cover were raster datasets. The data imported into ArcGIS was in shapefiles, whereas
the data layers for modeling in IDRISI were in raster format because raster datasets are
more easily configured in IDRISI.
5.4 Data Sources
Various sources provide vector data in the form of shapefiles for roads, forests, boundaries,
water features, addresses, counties, political boundaries and demographic data. The U.S.
Census Bureau provides boundaries, roads and water features in the form of TIGER
shapefiles [U.S. Census Bureau 2012]. Demographic and census data is available from
American FactFinder, a website managed by the U.S. Census Bureau [American FactFinder
2012]. Data on the railroad network is available from the Center for Transportation
Analysis (CTA) transportation networks website; CTA provides data for the North American
railroad system in the form of downloadable shapefiles [CTA. 2011]. However,
for a specific site, the rail network in the area has to be extracted after displaying in the
appropriate software. The United States Geological Survey (USGS) provides topographic,
elevation, land cover and water resources data [USGS 2011]. The data is available in
shapefile, GeoTiff and GeoPDF formats and can be downloaded for specific areas in the
required format using the interactive tools of "The National Map Viewer" [USGS 2012].
The data related to coal rank, seam thickness, seam dip, coal quality (ash, moisture, sulfur,
carbon contents etc.) is generally site specific, and in most cases, the companies interested
in the gasification acquire or generate this data internally.
5.5 Data Reliability
Data reliability depends upon the source from which the data is extracted. Data from the
USGS websites and TIGER data is generally very accurate; however, data available from
private vendors, data clearinghouse websites and generally available online sources has
varying levels of accuracy. Data generated by companies for their own use is typically
more accurate than data disseminated through web-based databases.
For the particular area used in the model, data was collected from several sources,
including federal and state providers such as the USGS, Census Bureau and TIGER database
as well as various online databases; therefore, the accuracy of some of the data was not
very high. Secondly, the coal quality data was taken from a study of this basin conducted
by SECARB to determine the potential for carbon sequestration in the area [SECARB. 2011].
Coal quality and individual seam data was available only for a few boreholes in the area
and was extrapolated for the rest of the area; therefore, the coal quality, seam
thickness, seam depth and dip data is accurate for those boreholes only and is not
representative of the entire study area. The purpose of this data generation was to
demonstrate its use in the model and to describe the processes through which data can be
extracted and used for modeling. The focus was demonstration of the decision model, not
the accuracy of the data; therefore, the results are not wholly representative of this
area and are not intended for commercial application in UCG projects. More accurate
site-specific data, generated following the procedures described in this model, will
provide highly accurate and dependable results for specific sites.
5.6 Data Preparation
The data imported into ArcGIS was in the format of vector shapefiles in varying coordinate
systems. To maintain consistency of the data and visibility of the layers, the data was
projected to the North American Datum 1927, State Plane coordinate system for Virginia
South, in the Lambert conformal conic projection. Roads, forest and railway line data was
available for the entire state, or for larger parts of states than required for the study
area, and
needed to be "clipped" to the study area. Coal quality data and other coal-related
parameters were in the form of a table and were added to ArcGIS as attributes of the area.
As distance from roads, forests and other infrastructure is more important than the
physical presence of these features in the area, distance images for these features were
created using the "Distance" command of IDRISI after exporting the layers to IDRISI. A
distance image gives the Euclidean (straight-line) distance between each cell and the
nearest of a set of target features; it is more appropriate for analysis, as ranking on
the basis of distance from features is more meaningful in spatial analysis for sites.
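A minimal sketch of what such a distance image contains, using a brute-force Euclidean distance on a toy grid (IDRISI's DISTANCE module computes the same quantity far more efficiently):

```python
import numpy as np

def distance_image(targets, cell_size=30.0):
    """Euclidean distance (in map units) from every cell to the nearest
    True cell in `targets`; brute force, for illustration only."""
    rows, cols = targets.shape
    ty, tx = np.nonzero(targets)
    yy, xx = np.mgrid[0:rows, 0:cols]
    # distance from every cell to every target cell, then take the minimum
    d = np.sqrt((yy[..., None] - ty) ** 2 + (xx[..., None] - tx) ** 2)
    return d.min(axis=-1) * cell_size

road = np.zeros((5, 5), dtype=bool)
road[2, :] = True              # a road crossing the middle row
dist = distance_image(road)    # 0 m on the road, 30 m one row away, etc.
```

Each cell of the result can then be ranked directly, which is why distance images rather than the raw feature layers feed the suitability analysis.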
5.7 Data Layers
The following data layers (Figures 5.1- 5.14) were created in ArcGIS, after importing from
different databases, projecting and adding attributes. These layers constituted the base
data for further analysis and use in the modeling part. Coal rank, seam depth, coal seam
thickness and coal quality data was taken from the SECARB study and extrapolated to the
study area [SECARB. 2011]. Permeability and dip data was based on the average trend of
the basin.
Figure 5.1: Coal rank layer for the study area
Figure 5.17: Aquifer close to the area
5.8 Working in IDRISI
The raster data was organized in IDRISI for preparation, display and subsequent use in the
decision model. The NLCD, digital elevation model (DEM) and topographic data were already
in raster or GeoTiff format; however, all other layer and polygon data was in vector
shapefile format. To import this data, the module shown in Figure 5.18 was used. This part
of the model imported shapefiles into IDRISI, projected them to Universal Transverse
Mercator (UTM) zone 17 for Virginia and converted them to raster. For conversion to
raster, an initial raster file created with the "Initial" command of IDRISI was used, with
spatial parameters defined from the coordinates of the existing polygon file. To cover the
entire study area, 1564 columns and 584 rows with a cell size of 30 m x 30 m were created.
This initial file was used to specify the spatial parameters for subsequent data layers.
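The bookkeeping behind that initial raster can be sketched as follows; the origin coordinates are placeholders, not the study area's actual UTM coordinates.

```python
# Extent and cell-center arithmetic for a 1564 x 584 grid of 30 m cells.
COLS, ROWS, CELL = 1564, 584, 30.0

width_m = COLS * CELL    # east-west extent: 46,920 m
height_m = ROWS * CELL   # north-south extent: 17,520 m

def cell_center(col, row, x_min=0.0, y_max=height_m):
    """Map coordinates of the center of cell (col, row), row 0 at the top.
    x_min and y_max default to a placeholder origin."""
    return (x_min + (col + 0.5) * CELL, y_max - (row + 0.5) * CELL)
```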
Figure 5.34: 500 m buffer around residential areas
5.10 Modeling
After preparation of the data layers and data images, the actual modeling for site
selection was carried out. The first step in modeling was classification of the factors
for further analysis and use in the model. The procedures described herein for the
modeling are based on the modules from the IDRISI software [Clark Labs 2012].
5.10.1 Classification of Factors
For modeling, the data layers were divided into three parts:
i. Factors that trade off
ii. Factors that do not trade off
iii. Constraints
5.10.1.1 Factors that Trade Off
These factors enhance the suitability of a site, and their presence at the site is
preferred; however, they are allowed to trade off in such a way that one factor can
compensate for the shortcoming of any other factor or factors. The level of trade-off
defines the degree to which one factor can make up for the lack of other factors, and the
weight of each factor defines
the level of trade off for each factor. In this category, the following data layers were
included.
i. Moisture content
ii. Ash
iii. Sulfur
iv. Carbon content
v. Volatile matter
vi. BTU per pound
vii. Coalbed methane and gas content
viii. Forests (a distance image from forests was used instead of the forests themselves)
5.10.1.2 Factors that do not Trade Off
These factors are very important for site selection and form the basis of the site
selection criteria. They are equal in weight and importance and therefore cannot be
allowed to compensate for the lack or excess of one another; all must be present at a
reasonable level for a site to be declared most suitable for the project. For this reason
these factors are not allowed to trade off and are assigned equal weights in the decision
rule. The following data layers were included in this category:
i. Coal rank
ii. Seam thickness
iii. Seam depth
iv. Seam inclination/dip
v. Permeability
vi. Hydrology of the area
o Presence of aquifers
o Major water bodies
vii. Infrastructure availability
o Distance from primary roads
o Distance from railroads
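The two factor groups can be contrasted numerically. In multi-criteria evaluation terms, full trade-off corresponds to a weighted linear combination, while zero trade-off behaves like a minimum operator in which the weakest factor limits suitability; the scores and weights below are made-up illustrations, not values from the model.

```python
# Toy standardized scores (0-255 scale) for a single candidate cell.
tradeoff_scores = {"moisture": 200, "ash": 120, "sulfur": 180}
weights = {"moisture": 0.5, "ash": 0.25, "sulfur": 0.25}

# Full trade-off: a weighted average, so strong factors offset weak ones.
wlc = sum(score * weights[name] for name, score in tradeoff_scores.items())

# Zero trade-off: the weakest factor caps the cell's overall suitability.
no_tradeoff_scores = {"rank": 230, "thickness": 90, "depth": 210}
limiting = min(no_tradeoff_scores.values())

print(wlc, limiting)  # 175.0 90
```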
5.10.1.3 Constraints
Constraints are also called crisp factors [Carstensen 2011]. These criteria must be met for a
site to be suitable for selection. These are Boolean factors in either “YES” or “NO” having a
value of ‘0’ or ‘1’. These are generally imposed either by regulatory restrictions or by
company policies. In this category the areas covered by crops and developed/residential
areas were included. The area within 100 m of a major water body was also set as a
constraint for site selection.

5.10.2 Standardization of Factors
The data layers selected for modeling were in different units. It is difficult to compare and
weight the data when they are in different units. For this purpose standardization of factors
was done. For constraints, the data is either ‘0’ or ‘1’, where ‘0’ means not included in the
decisions set and ‘1’ means included in the selection criteria. However, for factors the
standardization was done through “Fuzzy membership”, where all the data was set to the
same scale range of 0-255. Fuzzy membership is based on different membership curves.
Linear or straight-line interpolation membership is very sharp from membership to non-
membership. J-shaped membership function gives a rapid drop in the membership either
immediately or at the end of the curve. S-shaped or sigmoidal curve gives a good range of
models and is very popular[Carstensen 2011]. The standardization process is discussed in
detail for each factor in the model building stage.

5.10.3 Factors Weights
Different factors have different significance when it comes to including them in the selection
decision; therefore, weighting them accordingly is very critical. In this model, Saaty's
Analytical Hierarchy Process (AHP) was applied for determining the weights. These
weights were based on opinions and estimations from literature and were then statistically
checked for consistency. These assigned weight values were combined in a pairwise
comparison matrix and a principal components analysis was run on the pairwise matrix to
provide a measure of optimal weights and a check for the consistency of the comparison
[Carstensen 2011]. The process is elaborated in detail in the model building stage.
5.10.4 Trade off levels and Risk Assessment
The level of tradeoff allowed between factors, the factor weights and the relative balance of
factor weights determine the amount of risk that the site selected for the project is actually
suitable. Levels of tradeoff and factor weights place the risk between the extremes of very
risky and very risk-averse scenarios. In IDRISI, the module Multiple
Criteria Evaluation (MCE) helps the user to define the level of tradeoff and risk. Three MCE
procedures are most common [Carstensen 2011].

5.10.4.1 Boolean Intersection
In the Boolean intersection, all the factors are constraints or crisp factors with values either
‘1’ or ‘0’. This constraint-based model provides extreme cases of risks in the decision i.e.
very risky or very risk averse. There is no tradeoff between factors and no factor can make
up for the lack of the other. It combines the factors in two ways.

a. AND Overlay
‘AND’ overlay is very risk averse or least risky model in which all the constraints need to be
met to place a site in the suitability zone. There is no tradeoff between factors and the site
must have everything required and defined by the selection criteria.

b. OR Overlay
‘OR’ overlay is the opposite extreme of the “AND” overlay, where meeting only one constraint
can put an area in the suitability zone; it is therefore the most risky. There is no tradeoff
between factors, as these are constraint-based models and constraints do not trade off.
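The two overlays reduce to element-wise logic on 0/1 rasters. A minimal sketch of the idea with hypothetical constraint arrays (this illustrates the concept, not the IDRISI implementation):

```python
import numpy as np

# Hypothetical 0/1 constraint rasters for four cells (1 = criterion met).
crops     = np.array([1, 1, 0, 1])
developed = np.array([1, 0, 0, 1])
water     = np.array([1, 1, 1, 0])

# 'AND' overlay: a cell is suitable only if every constraint is met
# (the risk-averse extreme).
and_overlay = crops & developed & water

# 'OR' overlay: meeting any single constraint is enough
# (the risk-taking extreme).
or_overlay = crops | developed | water

print(and_overlay, or_overlay)
```

In this sketch only the first cell survives the 'AND' overlay, while every cell passes the 'OR' overlay, which is exactly the risk contrast between the two extremes.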
5.10.4.2 Weighted Linear Combination
The weighted linear combination (WLC) allows complete tradeoff between factors and is
therefore in the central zone of risk. The factors are weighted according to their
importance in the criteria and highly weighted factors can compensate for lower weighted
factors; however, the lowest weighted factor still contributes in determining the suitability of
sites. This model also allows constraints and after weighting and tradeoffs between factors,
these are multiplied with the constraints to exclude the areas that do not meet the
restrictions.
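The WLC combination can be sketched as a weighted sum of standardized factor scores followed by multiplication with the constraint mask. All numbers below are hypothetical; this is an illustrative sketch, not the IDRISI module:

```python
import numpy as np

# Standardized factor scores (0-255 scale) for four cells and three factors.
factors = np.array([[200, 120, 255],
                    [ 80, 240, 100],
                    [255, 255, 255],
                    [ 10,  30,  20]], dtype=float)

# Factor weights (e.g., from the pairwise comparison); they sum to 1, and a
# higher-weighted factor can compensate for a lower-weighted one.
weights = np.array([0.5, 0.3, 0.2])

# Boolean constraint mask: a 0 excludes a cell regardless of its score.
constraint = np.array([1, 1, 0, 1])

# Weighted linear combination, then multiplication by the constraints.
suitability = (factors @ weights) * constraint
print(suitability)
```

Note that the third cell scores a perfect 255 on every factor yet is still zeroed by the constraint, which is the behavior described above.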
5.10.4.3 Ordered Weighted Average
Ordered weighted average (OWA) allows further control of trade off and risk by adding a
second weighting to the process called order weights. In this case, the factors are first
weighted differently to allow full tradeoff, or all factors are assigned the same weight to
prohibit tradeoff between them. Then a second set of order weights is applied, which
determines the risk and tradeoff between factors. The relative balance of order weights
places the risk factor between extremes of “AND, risk-averse” or “OR, very risky” overlays
and tradeoff levels between “No tradeoff” to “Full tradeoff”. The triangle in Figure 5.35
represents the ordered weights strategy.
Figure 5.35: OWA triangle. The decision strategy space spans risk from “Risk-averse (AND)”
to “Risk-taking (OR)” and tradeoff from “No tradeoff” to “Full tradeoff”.
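The effect of order weights can be sketched for a single cell: they are applied to the ranked factor scores rather than to named factors, so shifting weight toward the minimum or maximum moves the decision along the risk axis of the triangle. An illustrative sketch with made-up scores:

```python
import numpy as np

def owa(scores, order_weights):
    # Sort the cell's factor scores from minimum to maximum, then apply
    # the order weights by rank position.
    ranked = np.sort(np.asarray(scores, dtype=float))
    return float(ranked @ np.asarray(order_weights))

scores = [100.0, 180.0, 240.0]

# All weight on the minimum: AND-like, risk-averse corner.
risk_averse = owa(scores, [1.0, 0.0, 0.0])
# All weight on the maximum: OR-like, risk-taking corner.
risk_taking = owa(scores, [0.0, 0.0, 1.0])
# Equal order weights: full-tradeoff average at the top of the triangle.
full_tradeoff = owa(scores, [1/3, 1/3, 1/3])

print(risk_averse, risk_taking, full_tradeoff)
```

Intermediate order-weight distributions produce the interior of the triangle between these three corner cases.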
5.11 Decision Model
The decision wizard of IDRISI was used to build the decision model. First, the factors
that trade off were run through the decision wizard. For these factors, appropriate
weights were chosen through WLC and these were allowed to fully trade off by choosing
“No OWA” which is equivalent to the top of OWA triangle with full trade off and average
risk. At this stage, no constraint was used. The resulting image gave the combined effect of
all the factors that tradeoff. Factors standardization was achieved using the “Fuzzy”
command, the rules of which are discussed with each factor image. Figure 5.36 shows the
start of the decision wizard, when the raster images of factors were imported into the
wizard.
Figure 5.36: Importing factors in the decision wizard
5.11.1 Fuzzy Rules
After importing the factors that trade off to the decision wizard, the next step was to define
fuzzy rules for each factor. For the moisture content of coal, a J-shaped monotonically
decreasing membership function was used to standardize the factor because some amount
of moisture is required (>1%) in coal for the gasification process to take place and in
determining the nature of the product gas [Walters and Nuttall 1977]. However, when the
amount of moisture increases in the coal above 20% it obstructs the gasification process
and can deteriorate the quality of gas. Thus, the J shaped monotonically decreasing function
gives high suitability values for low moisture contents and then rapidly decreases the
suitability after the moisture contents increase beyond a certain limit. As the moisture contents
in this coal were relatively small (<2%), the J-shaped function for this factor was used with
suitability rapidly decreasing after 1.2% (point c) and becoming zero at 2.0% or more
moisture (point d). Figure 5.37 shows the screen shot of the image defining fuzzy rules for
moisture content for standardization.
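The moisture standardization can be sketched numerically. The exact equation of IDRISI's J-shaped curve is not reproduced here, so the concave fall-off between control points c and d below is only an illustrative approximation, scaled to the 0-255 range:

```python
import numpy as np

def j_decreasing(x, c, d, scale=255):
    """Monotonically decreasing membership: full suitability up to c,
    a rapid fall-off between c and d, zero at d and beyond."""
    x = np.asarray(x, dtype=float)
    t = np.clip((x - c) / (d - c), 0.0, 1.0)   # 0 at point c, 1 at point d
    return np.round(scale * (1.0 - t) ** 2).astype(int)

# Moisture content (%): suitability drops after 1.2% (point c) and
# becomes zero at 2.0% or more (point d), as described for Figure 5.37.
print(j_decreasing([0.5, 1.2, 1.6, 2.0, 2.5], c=1.2, d=2.0))
```

The squared term makes the curve fall fastest just after point c, mimicking the rapid decrease in suitability described above.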
5.11.2 Weighting the factors
After standardization of factors, the next step was to assign weights to the factors. For this
purpose, the analytical hierarchical process (AHP) was used where a pairwise comparison
approach was applied to derive the factor weights. The pairwise comparison was based on
a 9-point continuous rating scale, with “9” meaning extremely more important relative to
other factors and “1/9” extremely less important. The weights were then produced using the
principal eigenvector of the pairwise comparison matrix. The module also generated a
consistency index based on the computed weights by comparing one weight to the others.
A consistency ratio less than 0.1 indicated acceptable weighting and if the consistency ratio
was more than 0.1, then the rating of factors had to be re-evaluated until the best fit
weightings were achieved, marked by a consistency ratio less than 0.1. Figures 5.53 and
5.54 show the pairwise comparison matrix and calculated weights and consistency ratio for
this comparison. Carbon content and BTU/lb. have the highest weights, whereas sulfur
content and distance from forests received the lowest weights. The consistency ratio for this
comparison was 0.05. The module gave the control over the weighting procedure and
provided the flexibility to assign the ranking to factors as desired by the planner or
company.
Figure 5.53: Pairwise comparison for factor weights
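The weight derivation can be sketched numerically. The matrix below is hypothetical (it is not the matrix of Figure 5.53); the sketch shows how the principal eigenvector yields the weights and how Saaty's consistency ratio is computed from it:

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Weights: the principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()

# Consistency: CI = (lambda_max - n) / (n - 1); CR = CI / RI, where
# RI is Saaty's random index (0.58 for a 3x3 matrix).
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.58

print(weights, CR)   # CR below 0.1 indicates acceptable consistency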
Figure 5.56 shows the final image after running through the whole process. It shows that the
highest suitability ranking for a site is 247, whereas the lowest suitable site has a ranking of
82 after combination of the factors that can trade off.
Figure 5.56: Final image after processing factors that tradeoff
5.12 Modeling for Constraints and Factors that do not Tradeoff
The next step was to process this newly created factor image, the factors that do not
tradeoff and the constraints. The constraints used for this model were crop/agricultural
land, residential areas and major water bodies. The constraints were set in such a way that
the areas within 100 m of major water bodies and within 500 m of crops/agricultural lands
and developed areas were not considered for site selection. Thus, a buffer distance of 100 m
around major water bodies and 500 m around crops and residential areas was created and
a value of ‘1’ was assigned to the areas beyond that buffer zone and ‘0’ for the areas within
the buffer zone. Any factors or scores within the buffer zone would multiply with zero and
be automatically declared as unsuitable for selection. Figure 5.57 shows the constraint set
for processing.
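The buffer step amounts to thresholding distance images into a 0/1 mask. The distances and scores below are hypothetical; this is a minimal sketch of what the buffering accomplishes:

```python
import numpy as np

# Hypothetical distances (m) from each cell to the nearest feature.
dist_water = np.array([ 40, 150, 600, 190])   # major water bodies
dist_devel = np.array([800, 300, 450, 700])   # crops / developed areas

# 1 beyond the buffers (100 m for water, 500 m for crops/developed),
# 0 inside them.
mask = ((dist_water > 100) & (dist_devel > 500)).astype(int)

# Any suitability score inside a buffer zone is multiplied by 0.
scores = np.array([210, 180, 240, 160])
print(scores * mask)
```

Only the last cell lies outside both buffers, so only its score survives the multiplication.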
decreasing sigmoidal function was used, as the farther the distance from these features, the
lesser the suitability of the sites. For coal rank, dip, permeability and distance from aquifer,
the monotonically increasing sigmoidal function was used, as the higher the values of these
features, the more suitable the site for selection. For coal seam depth, the J-shaped
symmetrical function was used because very shallow and very deep seams make the site
unsuitable. At shallower depths, subsidence and gas leakage are pronounced, whereas at
greater depths drilling cost and pressure maintenance requirements increase significantly.
Thus, a seam 150 to 500 m deep is considered most suitable. Figure 5.59 shows the fuzzy
rules defined for depth. The suitability increases from 100 m to 500 m (points a & b), levels
off until 700 m and then declines (points c & d).
Figure 5.59: J-shaped symmetrical function for standardization of depths
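A numeric sketch of the symmetric standardization for depth follows. The text gives points a = 100 m, b = 500 m and c = 700 m but not point d, so the 900 m end point below is purely a placeholder; the cosine-based ramps are likewise only an illustrative stand-in for IDRISI's curve:

```python
import numpy as np

def symmetric_membership(x, a, b, c, d, scale=255):
    """Rise from a to b, plateau from b to c, fall from c to d,
    scaled to the 0-255 range."""
    x = np.asarray(x, dtype=float)
    up   = np.clip((x - a) / (b - a), 0.0, 1.0)   # rising ramp position
    down = np.clip((x - c) / (d - c), 0.0, 1.0)   # falling ramp position
    mu = np.sin(up * np.pi / 2) ** 2 * np.cos(down * np.pi / 2) ** 2
    return np.round(scale * mu).astype(int)

# Depth (m): unsuitable when very shallow or very deep, best at 500-700 m.
depths = [100, 300, 500, 600, 700, 900]
print(symmetric_membership(depths, a=100, b=500, c=700, d=900))
```

The sin-squared and cos-squared ramps give the smooth S-shaped rise and fall on either side of the plateau.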
5.13 Ordered Weighted Average (OWA) and Risk Assessment
After standardization and weighting of factors, the next step was to define OWA weights for
the factors. The OWA weights determine the amount of risk and level of tradeoff allowed
between factors. They give control over factors by applying a second weight to factors. For
this model, the following four cases for OWA were considered, though several other
scenarios could be generated based on OWA weightings.
i. Least Risk-No Tradeoff (AND overlay)
Figure 5.67: Suitability image, average-risk full-tradeoff scenario
5.14 Decision Hardening
After producing the suitability images and risk levels for the entire area, the final step was
to harden the decision based on further conditions and constraints. This final step was the
actual selection of the most suitable areas from the ranked images by imposing more
conditions. The procedure is the same for any image generated by the decision wizard. For
example, for this model, the area of interest was based on the suitability image of the
average risk-no tradeoff scenario, with suitability index lying between 150 and 206. The
top 15-20 highest-ranking areas would be selected for final consideration.
For this step, the “Reclass” command was used to exclude the areas having a suitability
index less than 150. Then the “Extract” command was used with the study area image as a
feature definition image, to get the details of areas falling within the range of 150-206
suitability indexes. The resulting image is shown in Figure 5.68.
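What “Reclass” and “Extract” accomplish here can be sketched with plain arrays; the suitability values below are hypothetical, chosen to span the 82-247 range reported for the model:

```python
import numpy as np

# Hypothetical suitability indexes from the decision wizard output.
suitability = np.array([82, 140, 155, 190, 206, 247])

# "Reclass": zero out everything below the 150 cutoff.
reclassed = np.where(suitability >= 150, suitability, 0)

# "Extract"-style selection of cells inside the 150-206 window of interest.
window = suitability[(suitability >= 150) & (suitability <= 206)]
print(reclassed, window)
```

In a real run the same thresholding would be applied per pixel against the study-area feature definition image.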
The final image of the group of areas that are top candidates for selection is shown below.
This data was based on polygons of the areas and limited to selected regions of Appalachia.
When planning at regional-scale levels, e.g. for the entire basin or entire state, the zip
codes, census tracts or larger area polygons would be more appropriate for modeling and
site analysis.
Figure 5.69: Top-ranking sites for selection
5.15 Chapter Conclusions
This model provides a tool to select suitable sites based on predefined selection criteria. It
helps in ranking the sites based on their suitability and the level of risk involved
in decision-making. The model gives a great flexibility in weighting the factors as per their
importance in the defined criteria, establishing the constraints based on restrictions (legal,
environmental, regulatory etc.) and finally selecting the levels of risk in the final decision.
This model is not site specific and the procedures described here can be applied to any site,
provided the data is available for that site. The model, though applied to UCG, is applicable
to any case where a suitable site is to be selected based on defined criteria or rules.
113 |
Virginia Tech | Chapter 6 – Sustainability Assessment of UCG
6.1 Introduction1
UCG is attracting considerable global attention as a viable process to provide a “clean” and
economic fuel from coal. Applying improved UCG technology to gasify deep, thin, and low-
grade coal seams could vastly increase the amount of exploitable reserves. However, it is
imperative that further development of this technology is based on integrating UCG
practices and potential environmental impacts with accepted sustainability frameworks
and processes. This chapter evaluates the potential of UCG to conform to frameworks such
as MMSD (Mining, Minerals and Sustainable Development), Natural Step and Green
Engineering, in order to define its “sustainable” potential. The chapter also discusses the
potential contributions of UCG to sustainability during its design, operation, closure, and
post-closure phases. The potential economic and environmental benefits and associated
hazards of UCG necessitate that this technology is developed in line with sustainable
development principles and UCG projects should conform to different accepted and
recognized frameworks used to assess whether a project can be labeled as “sustainable”.
6.2 Sustainability and Sustainable Development
Sustainability and sustainable development are different concepts and it is a
misunderstanding to use them interchangeably. Sustainability is the ability of the system to
withstand external shocks and pressures from social, environmental and economic needs
and return to normal functioning after enduring these shocks [Shield, Solar et al. 2006].
Keeping in view the exhaustible nature of mineral resources and environmental
implications of mining, some consider it an oxymoron to use the term mining sustainability
1 This chapter is based on the following paper: Hyder, Z., Karmis, M., “Assessing the Contribution of
Underground In-Situ Coal Gasification (UICG) within a Sustainable Development Framework”, Aachen
International Mining Symposia, 5th International Conference, “Sustainable Development in the Minerals
Industry, SDIMI 2011”, June 14-17, 2011, Aachen, Germany, Pages 569-579.
The text is modified and formatted to fit the dissertation format.
[Van Zyl and Gagne 2010]. On the other hand, the concept of sustainable future
development includes mining and mineral resources as integral parts, necessitating
strategies to develop these resources by integrating environmental concerns, economic
development, social integrity and effective governance [Shield, Solar et al. 2006].
Sustainability assessment varies from case to case and involves multiple factors, site-
specific characteristics and layers of uncertainty. As there are no clearly defined and
mutually agreed criteria, the assessment exercise is not a precise process [Gibson, Hasan et
al. 2005]. Like environmental assessment, sustainability assessment focuses on process
and depends mainly on designing assessment regimes and decision-making strategies,
however, its scope is broader than environmental assessment [Gibson, Hasan et al. 2005].
Thus to overcome these hurdles in assessment processes, several frameworks have been
developed to facilitate development, assessment and measurement of strategies that
enable sustainable development. While there is no ideal system, a number of accepted
frameworks can be used as starting points for measurement and planning of sustainability.
This chapter will focus on some of these frameworks and examine their applicability to the
UCG technology.
6.3 Mining, Minerals and Sustainable Development
The main objective of the Mining, Minerals and Sustainable Development (MMSD),
launched in 1999, was to assess and facilitate the transition of the mining and minerals
sector toward a more sustainable future [IISD. 2002]. Under Task 2, the MMSD developed
criteria for assessing the contribution of any project, including mining and minerals
projects, towards sustainability. A framework was developed to assess whether or not a
project has a net positive contribution towards sustainability. This framework is composed
of seven parts, each in the form of a question that must be answered for the specific project
and conditions [IISD. 2002].
The following discussion presents the seven questions, objectives, and their
correspondence to a generalized UCG process. Indicators, examples and matrices that are
more specific can be developed for particular sites of interest based on this framework.
6.3.1 MMSD Question 1 - Engagement
Are engagement processes in place and working effectively?
6.3.1.1 Objectives
Stakeholder identification and engagement, dispute resolution mechanisms, adequate
resources, reporting and verification [IISD. 2002]
6.3.1.2 UCG Conformity
Major stakeholders in UCG include investors, surface right holders, leaseholders, the
surrounding community, federal and state departments, workers, product end users (e.g.,
power generation plants and consumers) and other people affected directly or indirectly by
the project. The idea of stakeholder mapping is to recognize parties having conflicting
interests in the project and to develop a strategy for conflict resolution by including all the
stakeholders in the decision-making process at various stages of the project. This may give rise
to a project development that is welcomed by at least a majority of stakeholders. In the case of
UCG, the potential conflict may be between the coal or coalbed methane leaseholder and
the UCG company. However, since potential UCG sites are abandoned mines, low quality
uneconomic coal and deep seated, steeply dipping coal seams [Lamb 1977], the chances of
dispute are minimized. Secondly, due to minimal disruption at the surface and minimized
land acquisition and rehabilitation requirements [Ghose and Paul 2007], the stakeholder
disagreements concerning surface rights and surface reclamation are also manageable.
Thus the engagement process is an important, and in the case of UCG, a promising task,
involving educating the community about the potential economic and environmental
promises of UCG, redressing the environmental concerns, fostering respect for the social
values and promoting inclusion of stakeholders in the development of formerly
uneconomic resources through this technology. Participation is an important indicator of
the social aspect of sustainability and helps in quantification of equity by calculating
distribution of wealth/benefits within the society [Becker 1997]. Thus, the participation of
different stakeholders especially in the formulation of policies directly influencing the
community is necessary. This can help improve the status of UCG in public perception.
6.3.2 Question 2 - People
Will people’s wellbeing be maintained or improved during and after the project or
operation?
6.3.2.1 Objectives
Community organization and capacity, social and cultural integrity, worker and population
health and safety
6.3.2.2 UCG Conformity
An important part of corporate social responsibility (CSR) is improvement in the standards
of social development and respect for fundamental rights [Shield, Solar et al. 2006].
Keeping in view the economic and environmental aspects of UCG, the development plan
should incorporate human wellbeing as an integral part. This may be accomplished by
hiring local labor, creating training and development opportunities in the project area,
assisting infrastructure development and developing social capital. The economic
opportunity provided by the development of neglected mineral resources through UCG
should not be a “resource curse” for the community [Davis and Tilton 2005] but an
opportunity to develop the community through careful planning, transparency and
inclusion. Traditional economic activities in the area should also be promoted to avoid the
“Dutch disease”, which suggests that increased natural resource exploitation in the area can
result in increased labor cost, appreciated local currency and a neglect of manufacturing
and agricultural sector, thus an increased export cost for these sectors [Davis and Tilton
2005]. Economic diversification can help in reducing the boom and bust syndrome.
Investment in human and social capital can assist in increasing the level of social license to
operate. Incorporating people’s wellbeing and meeting these concerns in a UCG
development plan may be a positive experience, as this technology has potential to give rise
to other economic activities such as power generation plants, various chemical industries
and a potential site of carbon sequestration process [Clean Air Task Force 2009].
6.3.3 Question 3 - Environment
Is the integrity of the environment assured over the long term?
6.3.3.1 Objectives
Ecosystem function, resilience and self-organizing capacity, ecological entitlement, full
ecosystem costs, benefits and risks, responsibilities and sureties, environmental stress and
action to ensure ecosystem integrity
6.3.3.2 UCG Conformity
UCG can be utilized to exploit uneconomic, deep seated, low quality coal reserves [Lamb
1977] in an environmentally friendly manner. In addition, the method has a reduced
surface footprint over conventional mining, due to lack of transportation and waste
management infrastructure requirements and it is environmentally attractive [Creedy,
Garner et al. 2001; Burton, Friedmann et al. 2006; Meany and Maynard 2009]. However,
studies have shown that UCG has potential for creating environmental problems, such as
groundwater contamination through gas escape and leachate [Sury, White et al. 2004].
Other hazards include surface subsidence, hazardous atmospheric emissions, uncontrolled
cavity growth and human impacts (noise, dust, increased traffic etc.). Potential
environmental hazards of UCG can be mitigated effectively through careful site selection,
appropriate operational controls, proper shut down process and effective environment
monitoring [Sury, Kirton et al. 2004]. The syngas produced by UCG contains a mixture of
CO2, CO, H2, CH4, water and traces of pollutants such as H2S, HCN, NH3 and other
gases [Creedy, Garner et al. 2001; Burton, Friedmann et al. 2006]. The composition of raw
product gas is similar to that produced by surface gasifiers, and cleaning technology for
such gas compositions is already available [Creedy, Garner et al. 2001]. In order to avoid
flow of contaminations from the cavity to the underground water table and to minimize
loss of organic laden gases, the pressure in the UCG cavity must be maintained below
hydrostatic. This will ensure a small and continuous influx of water into the cavity to aid
the burning process and minimize environmental impacts [Shafirovich and Varma 2009].
UCG consumes the coal underground and produces a burn cavity in the subsurface. This
cavity increases in dimensions with the progress of the process and can result in potential
surface deformation [Friedmann 2009]. UCG-induced subsidence is expected to progress
depending on the geometry of the cavity and depth: the greater the depth, the smaller the
chances of subsidence, depending upon the mechanical properties of rock and stress
regime in the area [Creedy, Garner et al. 2001]. However, most experimental work
conducted for UCG did not report any significant surface subsidence, possibly because of
the small size of active operations. For commercial large-scale projects, however, more
focused research is required for assessing, managing and reducing the subsidence impacts
of UCG [Friedmann 2009] .
CO2 is a main component of UCG product gas and may be present in the range of 25-40%.
Integration of UCG with carbon capture and sequestration (CCS) may result in a critical
climate change mitigation technology to produce power from coal, and many studies
suggest it as a low cost, above ground, low carbon form of coal power production [Redman,
Fenerty et al. 2009]. This indicates that UCG is a technology that has environmental
promise and presents an excellent solution particularly for extracting energy from
“unminable” coal seams. Through careful planning and proper site selection, the hazards of
UCG can be minimized significantly. It is worth noting that only two of over 30 UCG trials in
the U.S. resulted in clear evidence of environmental contamination [Burton, Friedmann et
a6l..3].. 4 Question 4 - Economy
Is the economic viability of the project or operation assured, and will the economy of
the community and beyond be better off as a result?
6.3.4.1 Objectives
Project or operation economics, operational efficiencies, economic contributions:
annual/total, community/regional economies
6.3.4.2 UCG Conformity
UCG not only provides an excellent economic opportunity by developing otherwise
uneconomic, abandoned and discarded natural resources but also by promoting a
polygeneration mix of industries such as power generation, chemicals, Fischer-Tropsch and
methanol. UCG helps in maximizing indigenous energy reserves, reduced vulnerability to
imported oil and security of supply [Courtney September 2009]. UCG increases the amount
of coal available by exploiting unminable coals. As indicated by Courtney [Courtney
September 2009], estimated total world coal resources are 5-8,000 billion tons (Bt) with
proven coal reserves of 909 Bt as of 2009. The estimated addition by UCG is 600 Bt.
Similarly, Burton et al. [Burton, Friedmann et al. 2006] suggest a possibility of a 300-400%
increase in the recoverable U.S. coal reserves through the application of UCG. This provides
an economic opportunity in the project areas, and may help in the development of
infrastructure, enhanced health and educational facilities and improved community
relations through sharing of benefits, costs and risks. As indicated by Yang et al. [Yang,
Zhang et al. 2008], UCG can prove an excellent source for large-scale H2 production. UCG can
utilize the current infrastructure for gas transportation, if available in the vicinity, thus
reducing capital and operating costs of the project. Ze-gen, Inc. is planning to develop this
technology in small modules that can be used to provide product gas to existing industrial
consumers of natural gas and fuel oil or alternatively, to blend the natural gas with product
gas through the existing pipeline infrastructure [Redman, Fenerty et al. 2009]. This
provides another economic boost for this technology.

6.3.5 Question 5 - Traditional and Non-market Activities
Are traditional and non-market activities in the community and surrounding area
accounted for in a way that is acceptable to the local people?
6.3.5.1 Objectives
Maintenance of activity/use levels, maintenance of traditional cultural attributes
6.3.5.2 UCG Conformity
Since UCG provides an economic resource for the community, it is emphasized that
operations must be planned to avoid conflicts with traditional and non-market activities in
the project area. This is feasible in the case of UCG, since it has minimal surface disruption.
The operations may be planned to avoid hunting areas or fishing ponds, if any, in the
project area. Vocational training institutes can be developed in the area to promote
housework and traditional crafts. Respect and preservation of local religious sites and
customs is important, in not only improving traditional and non-market activities, but also
in the trust building for the project.

6.3.6 Question 6 - Institutional Arrangements & Governance
Are rules, incentives, programs and capacities in place to address project or
operational consequences?
6.3.6.1 Objectives
Mix of rules, market incentives, voluntary programs and cultural norms, capacity, bridging,
confidence that commitments made will be fulfilled
6.3.6.2 UCG Conformity
Since UCG is currently emerging as a “clean” and economic alternative of energy generation
from coal, several governments are interested in capitalizing on this opportunity. For
example, a commercial trial is under way in Australia with government support. Similarly,
governments in China and India are encouraging UCG amid internal and external pressures
of pollution control and environment management [Creedy, Garner et al. 2001]. The
incentives in the form of cap and trade legislations and carbon credits have promoted
interest in this technology. Several research and development institutes are established in
various countries to promote research on UCG and its integration to CCS [Creedy, Garner et
al. 2001]. In addition, regulatory regimes are either in place or currently being drafted in
various countries regarding UCG [Creedy, Garner et al. 2001; Sury, Kirton et al. 2004; Sury,
White et al. 2004].

6.3.7 Question 7 - Synthesis and Continuous Learning
Does a full synthesis show that the net result will be positive or negative in the long
term, and will there be periodic reassessments?
6.3.7.1 Objectives
Continuous learning and improvement, overall synthesis, strategic level alternatives.
6.3.7.2 UCG Conformity
The overall status of UCG development is very encouraging and positive. A renewed
interest is emerging worldwide in this technology, leading to more research and
development. For example, China is developing technology to apply UCG on abandoned
mineshafts and has executed at least sixteen pilot projects since 1991 [Ray, Panigrahi et al.
2010]. Similarly, India, Australia, Europe, UK, New Zealand, Japan and several other
countries are promoting UCG research. In the U.S., though federal and state governments
are not currently funding research, several private companies and organizations are
encouraging R&D in this field. This ensures that a continuous learning process is in place
for this technology, resulting in improvement of operational processes, environment
monitoring, capacity building and human capital.
6.4 The Natural Step (TNS) Framework
The Natural Step (TNS) framework is a tool that provides a systematic way of
understanding and planning towards sustainable development. The main concept of this
framework is simplicity without reduction, which means that understanding the defining
principle of a given system makes it easier to comprehend the complexity of details within
the system.[Broman 2000]. TNS is a comprehensive model of strategic planning and
decision making towards sustainable development. The framework has the following main
compon•e nts [Townsend and MacLellan 2010]:
•
The funnel
•
The sustainable principles for a sustainable society
•
Backcasting
6.4.1 TheA F fuonurn estla ge ABCD strategic planning process.
The funnel is a metaphor that represents the degrading nature of available resources and
ecosystem, with the narrowing walls of a funnel indicating the decreasing options to
operate [Broman 2000]. These walls grow closer because of non-sustainable activities,
growing demand for resources and the declining ability of the earth to provide these resources.
However, an indicator of sustainable development is a system or process that has the
capability to widen the narrowing walls of this funnel. UCG is a process that has the ability
to increase available coal reserves through exploitation of low-grade, uneconomic, deep-
seated coal seams. It also has the ability to harness energy from the abandoned or
previously used coal mines, some of which may contain as much as 50% of the original coal
[Lamb 1977]. The challenge for sustainability is to avoid hitting the wall while reducing the
pressure so that the funnel may open again [The Natural Step USA]. UCG satisfies this
challenge by increasing resources and decreasing economic and environmental pressures.

6.4.2 The Sustainable Principles for a Sustainable Society
Basic principles for sustainability are defined in TNS [Broman 2000] as:

• For society to be sustainable, the ecosphere must not be systematically subject to:
  o increasing concentrations of substances from the earth's crust
  o increasing concentrations of substances produced by the society
  o impoverishing physical manipulation or over-harvesting
• For society to be sustainable, resources must be used efficiently and fairly to meet
  basic human needs worldwide.
UCG conforms to these principles as it utilizes the substances from the earth’s crust that
have been discarded and declared uneconomic, used inefficiently or still in place due to
technical limitations. It increases the earth’s resource potential. Although this process uses
the substances from earth’s crust, it provides an alternative and efficient way of using the
abandoned resources. UCG is very efficient in energy utilization as it eliminates the energy
wasted in transportation of the mineral waste and usable material to the surface from
underground [Burton, Friedmann et al. 2006]. Similarly, it harvests energy from the
previously wasted material and re-circulates that energy into the system.

6.4.3 Backcasting
Backcasting is a methodology in which a successful and sustainable outcome of an activity
is envisioned and strategies are developed to link that outcome to the present situation.
The processes are then developed based on sustainability principles to attain the desired
outcome. The sustainable future of UCG technology is a product gas without any hazardous
discharges and a clean energy source utilizing natural resources efficiently. Thus
backcasting promotes the integration of UCG with carbon capture and storage (CCS)
technology. The ideal scenario is to capture CO2 from the product gas at the source and
sequester it near the project area to avoid transportation costs. The technology exists to
capture CO2 from the syngas and to store it in geological formations; however, research
is in progress on reactor zone carbon sequestration, aiming to store CO2 in the voids
and cavities created by UCG processes [Friedmann 2009].

6.4.4 The ABCD Planning Process
The ABCD process is an integral part of the TNS framework that helps strategic planning
for the sustainable future. ABCD consists of four basic steps; Awareness, Baseline Analysis,
Compelling Vision and Down to Action [The Natural Step USA]. Awareness of sustainable
development principles enables organizations to develop strategies to achieve the desired
outcomes through inclusion of sustainable development principles in corporate planning.
Baseline analysis helps in the assessment of current situations and points out the activities
and practices that are in violation of these principles. Compelling visions are solutions and
innovations that are obtained by applying the constraints of sustainable development
principles. Down to Action represents the actual implementation of developed strategies
and solutions. These planning steps advocate further research and development on
integration of UCG and CCS to achieve the envisioned future of UCG as a clean energy
generating technology.
6.5 Green Engineering
Green Engineering is a framework for creative engineering solutions and innovative
approaches to solve the problems involving environment, economy and society throughout
the lifetime of the project. The framework consists of 12 principles that provide a basis for
making engineering solutions more sustainable. These principles are set as guidelines to
use in sustainable development and to address sustainability challenges through effective
design [Mihelcic and Zimmerman 2010] . Table 6.1 presents the principles of green
engineering and their application to UCG in general. Specific design details, which
correspond to these principles, largely depend upon the specific site conditions and project
environment; however, Table 6.1 below presents the general conformity of UCG to these
principles.
Table 6.1: UCG and Green Engineering Principles
The 12 Principles of Green Engineering [Anastas and Zimmerman 2003] and UCG compatibility:

Principle 1: Designers need to strive to ensure that all materials and energy inputs and outputs are as inherently non-hazardous as possible.
UCG compatibility: The input for UCG is low quality and deep-seated coals and/or abandoned coal insitu. This coal generally is not recovered by conventional mining and is a wasted resource. UCG utilizes this wasted material and converts it into usable energy. The output is in the form of syngas that has the potential, through the integration of UCG and CCS, to be converted into an economical clean energy source. This principle is compatible with the technology and ensures that strategies of environment management are included in the development plan for UCG projects.

Principle 2: It is better to prevent waste than to treat or clean up waste after it is formed.
UCG compatibility: The UCG process reduces waste production as no coal is transported to the surface. It reduces the resulting dust as well. The majority of hazardous materials, including ash and many pollutants (mercury, particulates and sulfur species), are greatly reduced in volume [Burton, Friedmann et al. 2006]. As there is no or minimal water discharge to the surface, wastewater management is very easy. This makes UCG conformable to this principle.

Principle 3: Separation and purification operations should be designed to minimize energy consumption and materials use.
UCG compatibility: Research is continuing to integrate carbon capture and sequestration to UCG at the site, thus reducing CO2 and transportation costs and energy consumption. The surface footprint of UCG is smaller compared to other coal exploiting technologies. The purification of product gas can make it a "clean energy" source.

Principle 4: Products, processes, and systems should be designed to maximize mass, energy, space and time efficiency.
UCG compatibility: UCG increases the efficiency of energy production and can enhance the energy recovery from a coal seam to over 75% [Burton, Friedmann et al. 2006]. It also recovers entrapped methane from the coal seam regardless of its economic value, thus maximizing energy efficiency.

Principle 5: Products, processes and systems should be "output pulled" rather than "input pushed" with energy and materials.
UCG compatibility: The input for this process is air or oxygen at elevated pressure and low quality coal, whereas the output is a flow of product gas at high temperature and pressure. This product gas generates energy for several uses. Utilizing lower cost inputs, a valuable output is obtained.

Principle 6: Embedded entropy and complexity must be viewed as an investment when making design choices on recycle, reuse or beneficial disposition.
UCG compatibility: Though UCG is non-renewable and does not support recycling, it reuses abandoned resources. It can increase the energy efficiency from exploitation of natural resources.

Principle 7: Targeted durability, not immortality, should be a design goal.
UCG compatibility: The process is durable depending upon the extent of coal availability. It also promotes other industrial venues in the project area, thus increasing the economic life of the project.

Principle 8: Design for unnecessary capacity or capability (e.g., "one size fits all") solutions should be considered a design flaw.
UCG compatibility: The design for UCG depends upon specific site characteristics and geo-mechanical conditions of the area. It is difficult, therefore, to generalize and implement the design from one site to another. Thus, it promotes a design for each site that is based on general principles, but accommodating the peculiarity of the site.

Principle 9: Material diversity in multi-component products should be minimized to promote disassembly and value retention.
UCG compatibility: Since there is minimal disruption of the surface with this technology, land use for various purposes is encouraged during and after the closure of the project. As an example, it does not interfere with hunting habitats in the project area.

Principle 10: Design of products, processes and systems must include integration and interconnectivity with available energy and materials flows.
UCG compatibility: UCG provides more economic potential if the product gas is used in close proximity to the project area. A power generation or chemical plant directly fed by product gas can increase economic efficiency and reduce transportation costs.

Principle 11: Products, processes and systems should be designed for performance in a commercial "afterlife."
UCG compatibility: The product gas can be used as a feedstock for several chemical industries. It can be used as a fuel for power generation. Thus, the energy provided by this technology can enhance value addition for several products.

Principle 12: Material and energy inputs should be renewable rather than depleting.
UCG compatibility: UCG and other mineral related processes are not regarded as renewable. However, with the development of new technologies that offset the cost-increasing effects of depletion, mining can be sustainable. This means that with the increasing cost of mineral commodities, due to decreases in their availability or exploitation of low-grade reserves, there is also an increasing trend towards development of technologies that are low cost and can offset this upward pressure. Thus, cost-increasing effects of depletion and cost-decreasing effects of new technologies determine the long run availability of mineral commodities [Tilton 2009]. This opportunity cost paradigm is also applicable to UCG and, according to Tilton [2009], is a more appropriate way to assess the future threat of depletion to sustainability.
6.6 Other Sustainable Development Frameworks
Several other frameworks are available that can effectively help in assessing the
contribution of a project towards sustainable development. The examples include 10
principles of ICMM (International Council on Mining and Metals), Design for X, Life Cycle
Assessment (LCA) and several others. The 10 principles of ICMM are essentially based on
issues defined in MMSD and provide a framework for comparing the current standards
with relevant conventions and guidelines, for example, the Rio Declaration, the Global
Reporting Initiative, the Global Compact, OECD Guidelines on Multinational Enterprises,
World Bank Operational Guidelines, OECD Convention on Combating Bribery, ILO
Conventions and the Voluntary Principles on Security and Human Rights [ICMM. 2010].
Similarly, a number of indicators and metrics are available that evaluate and assess the
contribution of any project towards sustainability. For example, the United Nations has a
comprehensive set of indicators that measure sustainable development and provide a
framework and methodology to attain sustainability [Division for Sustainable Development
2001]. UCG as a promising new technology conforms to several of these other frameworks
and, depending upon specific site conditions, indicates the potential for positive social,
economic and environmental correlation towards sustainable development and a
sustainable future.
6.7 Chapter Conclusions
As stated by Gibson et al. [2010], assessments are exercises in evaluation and
decision making. They provide a number of options about further reviews, design changes,
impact evaluations and process improvement. The idea of sustainability assessments in the
case of UCG is to examine whether this technology has a positive correlation with sustainable
development principles and to explore further research and development opportunities. A
number of frameworks and indicators, developed over the last two decades to define
sustainability and sustainable development have gained acceptance as valuable tools for
understanding sustainability as a concept and incorporating sustainable development
principles as an integral part of corporate planning. UCG conforms readily to these
frameworks as indicated in this chapter. However, almost all frameworks indicate the need
to devote a major research effort towards integration of UCG and CCS. This integration can
help the development of UCG as a clean, sustainable and economic alternative for energy
production through exploitation of unminable and abandoned coal reserves.
Chapter 7 - Greenhouse Gas Reduction Potential of UCG
7.1 Introduction²
Underground coal gasification (UCG) is an advancing technology that is receiving
considerable global attention as an economic and environmentally friendly alternative for
exploitation of coal deposits. This technology has the potential to decrease greenhouse gas
emissions during the development of coal deposits. The environmental benefits of UCG that
promote reduction in greenhouse gas emissions include elimination of conventional
mining, coal washing and fines disposal, coal stockpiling and coal transportation activities.
Additional benefits include: a smaller surface area requirement with minimal surface
disruption; removal of CO2 from the syngas at significantly reduced cost as compared to
carbon capture and transport from a power plant; and the potential to reduce CH4
emissions, a potent greenhouse gas. UCG utilizes coalbed methane irrespective of its
economic value during the burning process and increases energy efficiency. The CH4 in the
product gas is consumed completely during power and/or electricity generation, thus
reducing overall methane emissions to the atmosphere.
This chapter compares greenhouse gas emissions from conventional mining methods to
UCG for the exploitation of a coal reserve. The findings indicate that UCG reduces
greenhouse gas emissions significantly as compared to other competitive coal exploiting
technologies. This research may help in the selection of a suitable method to develop coal
deposits when the reduction of greenhouse gases is an essential part of planning.
² This chapter is based on the following paper: Hyder, Z., Ripepi, N., Karmis, M., Underground
Coal Gasification and Potential for Greenhouse Gas Emissions Reduction, CMTC 151155-MS,
2012 Carbon Management Technology Conference, February 7–9 2012, Orlando, Florida, USA.
DOI: 10.7122/151155-MS, ISBN: 978-1-61399-179-4. The text is modified and formatted to fit
the dissertation format and reproduced with permission of SPE.
7.2 Factors Aiding GHG Reduction Potential
UCG has the potential to reduce greenhouse gas (GHG) emissions when exploiting a coal
reserve. The simplicity of the process, elimination of conventional mining, the complete
removal of coal transportation and stockpiling needs, reduced surface footprint, minimal
waste and water management requirements, consumption of coalbed methane and synergy
with carbon capture and sequestration are some of the factors that contribute to reduced
GHG emissions. The following is a detailed account of these factors and their GHG reduction
potential.
7.3 Elimination of Conventional Mining
According to the International Energy Agency, the global demand for energy will increase
by one third between 2010 and 2035, with a 20% increase in energy-related CO2 emissions.
To meet this energy requirement, coal demand will continue to increase for the next ten
years and will then stabilize, ending around 17% higher than in 2010 [IEA. 2011]. This
highlights the importance of coal in the next generation’s energy mix and emphasizes the
need for concentrated efforts to promote and develop new technologies that help
harvest energy from coal deposits with reduced environmental impacts and GHG
emissions. A unique aspect of UCG that makes it an economically and environmentally
attractive technology is the elimination of conventional mining requirements for
exploitation of coal deposits, especially in low grade, thin coal seams. A life cycle
assessment of any coal mine reveals that a significant part of the total GHG emissions is
contributed by diesel, gasoline and electricity used by the equipment required for mine
development, processing, operation and coal transportation [Ditsele and Awuah-Offei
2010]. UCG eliminates the need for development of mining infrastructure such as shafts,
inclines, tunnels, galleries and panels, thus eliminating a large portion of GHG emissions
resulting from these activities. Ditsele and Awuah-Offei indicate that the use of machinery and
equipment for mining activities contributes approximately 50% of total GHG emissions
from a mine [Ditsele and Awuah-Offei 2010]. This means that elimination of conventional
mining activities can result in considerable reduction of GHG emissions load.
7.4 No Coal Transportation on Surface
With UCG, the entire coal gasification takes place underground; thus, there is no coal
transportation to the surface. This eliminates GHG emissions related to coal transportation
from underground workings to the surface for storage and distribution. It also eliminates
the need for surface gasifiers for conversion of coal to gas, which provides a significant
economic and environmental advantage. The product gas can be transported to gas
cleaning facilities and other industrial establishments for use via the surface pipeline
network, thus reducing the GHG emissions linked to coal transportation to industrial units
and/or electric power generation plants via diesel or gasoline transport. As indicated by
Jaramillo et al., gasoline and diesel transport adds between 17 and 20 g of CO2 per liter of
gasoline and between 21 and 25 g of CO2 per liter of diesel [Jaramillo, Samaras et al. 2009].
Thus, a significant amount of GHG emissions can be reduced just by eliminating coal
transport to and from the surface.
7.5 No Storage Requirement
In the UCG process, there is no coal movement out of the strata. The coal is burned
underground and the product comes out in the form of heated gases, thus eliminating the
need for coal storage and stockpiling. Since there is no waste movement up ground as well,
this also eliminates the GHG emissions from the waste material that is usually dumped into
the spoil piles behind the active mining area and from the tailings and reject dumps of coal
preparation plants. These spoils and rejects contain significant amounts of carbonaceous
material and generate greenhouse gases, especially CH4 and CO2, along with some other
gases, through spontaneous combustion and low-temperature oxidation [Carras, Day et al.
2009]. A study by Carras et al. (2009) on greenhouse gas emissions from low-temperature
oxidation and spontaneous combustion at open-cut coal mines in Australia found that the
average emission rate of GHG for active spontaneous combustion with marked surface
signs was 8200 kg yr⁻¹ m⁻². The average emission rates of CO2 and CH4 for rejects were 95
mg s⁻¹ m⁻² and 4.7 mg s⁻¹ m⁻², respectively, from sites with high emission rates. Thus by using
the coal insitu, UCG reduces a significant amount of GHG by eliminating need for coal
stocking and processing.
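For scale, the per-second reject emission rates quoted above can be converted to annual figures. The sketch below is a simple unit conversion only; it uses no values beyond the rates cited from Carras et al. (2009).

```python
# Convert the reject-dump emission rates reported by Carras et al. (2009)
# from mg s^-1 m^-2 to kg yr^-1 m^-2.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 s

def mg_per_s_to_kg_per_yr(rate_mg_s_m2):
    """Annual mass flux (kg/yr per m2) for a steady rate given in mg/s per m2."""
    return rate_mg_s_m2 * SECONDS_PER_YEAR / 1e6   # mg -> kg

co2_annual = mg_per_s_to_kg_per_yr(95)    # ~3,000 kg CO2 per m2 per year
ch4_annual = mg_per_s_to_kg_per_yr(4.7)   # ~148 kg CH4 per m2 per year
print(round(co2_annual), round(ch4_annual))
```

At roughly 3 tonnes of CO2 per square meter per year from high-emitting reject sites, eliminating surface spoil and reject dumps is a non-trivial part of the GHG saving.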
7.6 Reduced Surface Utilization
UCG requires a smaller surface area for exploitation of the coal reserve compared to other
coal exploiting technologies. This technology also has a smaller surface footprint at power
stations [Creedy, Garner et al. 2001]. As compared to conventional mining or surface
gasification plants, the surface impact of UCG is highly localized, as the primary process is
underground. This reduced surface impact and minimal surface requirement can enhance
GHG reduction through preservation or regeneration of vegetation at the UCG site.
Another important aspect with respect to surface utilization is its availability for a mix of
different energy resources. A study by Chavez-Rodriguez and Nebra (2010), assessing the
GHG emissions from different fuel sources reveals the importance of including coal,
renewables, oil and nuclear in the energy mix. The study estimated that by 2030, in order
to fulfill the annual fuel requirement for the transportation sector of 1,924 GL of gasoline
and 444 GL of ethanol, 30.2 Mha of tropical forest or 2,373 Mha of dry land forest would be
required for gasoline GHG neutralization. If total ethanol demand was supplied by sugar
cane ethanol, an area of 57.7 Mha of production land would be required as well as 1.3 Mha
of tropical forests or 174 Mha of dry land forests as carbon uptake land [Chavez-Rodriguez
and Nebra 2010]. This indicates the importance of surface utilization from different energy
resources and highlights the savings that UCG can bring forward.
Ongoing advancements in drilling technology are further reducing the surface
requirements and disruption by UCG. For example, advances in directional drilling allow
horizontal inseam wells for better linkage between injection and production wells,
reducing the number of wells required to consume a specific deposit. A directionally drilled
600 m (1,968 ft.) long panel accessed by one well pair in a 7 m (23 ft.) thick seam can gasify
between 125,000 to 175,000 tonnes of coal (~137,000 to 192,000 tons), depending on
which panel design is used [Ahner 2008]. This means that to utilize a one-million-ton coal
deposit, only 4 to 5 pairs of wells will be required.
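The per-panel tonnage cited from Ahner can be sanity-checked from panel geometry. In the sketch below, the panel length (600 m) and seam thickness (7 m) come from the text; the panel width (25 m) and in-situ coal density (1.4 t/m³) are illustrative assumptions, not values from the cited design.

```python
# Rough sanity check of gasifiable coal tonnage per directionally drilled panel.
# Length and thickness are from the text [Ahner 2008]; width and density are
# assumed here purely for illustration.

def panel_tonnage(length_m, thickness_m, width_m, density_t_per_m3):
    """Coal tonnage contained in one rectangular gasification panel."""
    return length_m * thickness_m * width_m * density_t_per_m3

tonnage = panel_tonnage(600, 7, 25, 1.4)
print(f"Coal per well pair: {tonnage:,.0f} tonnes")  # ~147,000 t
```

With these assumed values the result falls inside the 125,000 to 175,000 tonne range quoted above, so the cited figures are geometrically plausible.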
7.7 Usage of CH4

CH4 (methane) has a much lower concentration in the atmosphere as compared to CO2, but
it is 23 times more potent as a GHG than CO2 [Wightman 2006; Archer 2011]. CH4 is usually
entrapped in the coalbed during coal formation. During coal mining activities, this gas
is released due to strata relaxation and changes in the pressure gradient. An important
consideration is the total amount of methane entrapped in the coalbed. The amount of
methane released depends upon coal rank, seam depth and mining method, with
underground coal mining releasing more methane than surface or open pit mining because
higher gas contents are typically found in deeper seams [Irving and Tailakov 2000]. The
gas retained in coalbeds ranges from negligible quantity to about 900 cubic feet per ton
(25.48 cubic m per ton) [Delucchi 2003]. As estimated by the EPA, in 1997 the methane
emissions from coal mines were about 18.8 MMTCE, which accounted for 10% of the U.S.
anthropogenic methane emissions for that year [EPA. 1999b]. If the methane is not present
in commercial quantity, it is generally vented to the atmosphere through a ventilation
network or through degasification systems before or after mining. However, if it is present
in commercial quantities, it can be recovered through inseam drilling for commercial
purposes. If commercially recoverable quantities of coalbed methane are present, though,
then another dispute may arise in the sequence of energy recovery from coal seams [Couch
2009].
UCG utilizes coalbed methane irrespective of its commercial value. The presence of CH4
may enhance the heating value of product gas and may aid the burning process. Thus,
whatever quantity of methane is present in the seam, UCG will consume it during the
burning process, which in turn, will reduce the GHG emissions load for coal utilization.
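The GHG benefit of consuming seam methane rather than venting it can be put in rough numbers. The sketch below uses the warming potential of 23 cited above and basic combustion stoichiometry (CH4 + 2O2 → CO2 + 2H2O, so 16 g of CH4 yields 44 g of CO2); it is an illustrative estimate, not a life-cycle result.

```python
# Compare the CO2-equivalent burden of venting one tonne of coalbed methane
# versus consuming it in the gasification/combustion process.
GWP_CH4 = 23                 # global warming potential of CH4 cited in the text
M_CH4, M_CO2 = 16.0, 44.0    # molar masses, g/mol

def co2e_vented(t_ch4):
    """t CO2e if the methane escapes to the atmosphere."""
    return t_ch4 * GWP_CH4

def co2e_combusted(t_ch4):
    """t CO2 actually emitted if the methane is burned to CO2."""
    return t_ch4 * (M_CO2 / M_CH4)

print(co2e_vented(1.0))      # 23.0 t CO2e per tonne vented
print(co2e_combusted(1.0))   # 2.75 t CO2 per tonne combusted
```

On these figures, consuming the methane cuts its CO2-equivalent impact by a factor of roughly eight, before any credit for the energy recovered.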
7.8 Carbon Capture and Sequestration Potential
An important aspect of UCG is its synergy with carbon capture and sequestration. As stated
by Burton et al. (2006), it is much easier to remove CO2 from the syngas than from the flue
gas. A number of technologies to remove CO2 from syngas are readily available [Burton,
Friedmann et al. 2006]. UCG provides low cost electricity generation from coal even with
CO2 capture, when compared with both IGCC and post-combustion capture from a
pulverized coal power plant [Clean Air Task Force 2009]. Integration of UCG with carbon
capture and sequestration (CCS) may result in a critical climate change mitigation
technology. Many studies suggest it is a low cost, above ground, low carbon form of coal
power production [Redman, Fenerty et al. 2009].
UCG also provides an alternative for geological storage of CO2. The well infrastructure of UCG
provides a source for geological storage of CO2 and results in reduced capital and operating
expenses for the combined process [Friedmann 2009]. As stated by Ray et al., coal
gasification with CCS, surface or underground, offers a practical medium-term option for
the continuing use of coal as a bridging strategy to eventual energy production with zero
emissions, i.e., renewable energy and the hydrogen economy [Ray, Panigrahi et al. 2010].
There is a developing interest in utilizing the UCG burn cavity for carbon sequestration, and
research is underway to further study this potential of UCG and its environmental
impacts.
7.9 Less Pollutant Movement to Surface
The syngas produced by UCG contains a mixture of CO2, CO, H2, CH4, water and traces of
pollutants such as H2S, HCN, NH3 and other gases [Creedy, Garner et al. 2001; Burton,
Friedmann et al. 2006]. The composition of raw product gas is similar to that produced by
surface gasifiers, and cleaning technology for such gas compositions is already available
[Creedy, Garner et al. 2001]. Generally, sulfur and nitrogen report to the surface with the
gas, whereas ash and most heavy metals remain in the cavity [Fergusson 2009]. The
process eliminates production of some criteria pollutants (e.g., SOx, NOx) and reduces the
volume of mercury, particulates and sulfur species production, which makes the handling
of pollutants easier [Burton, Friedmann et al. 2006]. The decreased pollutant production
and movement reduces the cost of waste treatment and handling. The reduced volume of
waste at the surface also decreases GHG emissions from the waste spoils and reduces some
other environmental effects like acid mine drainage, generally caused by action of surface
water on waste piles.
7.10 Chapter Conclusions
In the today’s era of growing energy demand and increased concern about environmental
issues, the importance of technologies that can provide economic and environmentally
friendly energy resources is inevitable. These energy demands and environmental
concerns require an energy mix from all available resources, including coal, petroleum,
natural gas, renewables, nuclear and solar. No single resource, either renewable or
nonrenewable, can fulfill both the energy demand and environmental sustainability
without some compromise. As an example, UCG with electricity generation may result in
GHG emissions 25% lower than conventional coal electricity generation, but 75% higher
than natural gas electricity generation [Moorhouse, Huot et al. 2010]. However, a
recent study from Cornell finds that natural gas from fracking could be 'dirtier' than coal, as
fracking, venting and leaks would release 3.6% to 7.9% of methane over the lifetime of the
well, representing methane emissions at least 30% greater, and perhaps more than twice
as great, than those from conventional gas [Howarth, Santoro et al. 2011; Shackford
2011]. This means that all resources need to be developed, with emphasis on the
development of technologies that can harness energy from these resources in an economic
and environmentally friendly manner without discarding/discrediting any option. UCG in
integration with CCS provides such an option to develop coal deposits for cheaper, cleaner
energy sources because capturing the CO2 stream is easier, doesn’t require the same capital
investments as other technologies, and provides a potential of GHG reduction.
Chapter 8 - Comparing Life Cycle Greenhouse Emissions from Coal and UCG Power Generation
8.1 Introduction
Coal is the most abundant fossil fuel worldwide, with about one trillion tonnes in reserves,
sufficient for about 150 years at the current production rates. Coal demand as an energy
resource is increasing and will continue to increase for the next ten years and then stabilize
at a level around 17% higher than the 2010 level [IEA. 2011]. There is a projected increase
of 20% in global coal production between 2009 and 2035 with 90% of the projected energy
demand coming from non-OECD economies. Coal is the second largest primary fuel used in
the world and the backbone of electricity generation [IEA. 2011].
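The reserve-life figure in the opening sentence is a simple reserves-to-production (R/P) ratio. The sketch below checks the arithmetic using only the two values stated in the text.

```python
# R/P check: ~1 trillion tonnes of reserves lasting ~150 years at current
# production rates implies global production of roughly 6.7 Gt/yr.
reserves_t = 1.0e12    # tonnes, from the text
lifetime_yr = 150      # years, from the text

implied_production = reserves_t / lifetime_yr   # tonnes per year
print(f"Implied production: {implied_production / 1e9:.1f} Gt/yr")
```

The implied production rate of roughly 6.7 Gt/yr is consistent with the magnitude of world coal output the chapter goes on to describe.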
In the U.S., coal is also a major energy source and more than 25% of world’s recoverable
coal reserves are in the U.S. The U.S. uses around 1.1 billion tons of coal per year. In 2010,
the U.S. produced 932 million tonnes of hard coal and 63 million tonnes of brown coal [IEA.
2011]. As of January 1, 2011, the DRB (demonstrated reserve base) for the U.S. was
estimated to contain 485 billion short tons [EIA. 2012b]. Of the estimated recoverable coal
reserves in the world, the U.S. holds the largest share (27%), followed by Russia (17%),
China (13%), and Australia (9%) [DoS. 2010].
However, the U.S. electric power sector’s historical reliance on coal-fired power plants has
begun to decline. Though coal still remains the dominant energy source for electricity
generation, its share of total generation is expected to decline from 45% in 2010 to 39% in
2035. The main reasons for this decline are slow growth in electricity demand, continued
competition from natural gas and renewable plants, and the need to comply with new
environmental regulations [EIA. 2012a]. As estimated by the U.S. EIA, total coal
consumption—including the portion of CTL (coal to liquid) consumed as liquids— will
increase from 20.8 quadrillion Btu (1,051 million short tons) in 2010 to 22.1 quadrillion
Btu (1,155 million short tons) in 2035, with 2012 as a reference. However, coal
consumption, mostly for electric power generation, will fall off through 2015 because of the
replacement of the coal-fired power generation with alternate sources. After 2015, coal-
fired generation increases slowly as the remaining plants are used more intensively [EIA.
2012a].
Electricity generation currently accounts for 93% of total U.S. coal consumption [EIA.
2012a]. Coal, the fuel most frequently used for power generation and supplying over 48%
of the total electricity generated in the United States, also has the highest emissions of carbon dioxide (CO₂) per unit of energy [DoS. 2010]. Electricity generators consumed 36% of U.S. energy from fossil fuels and emitted 42% of the CO₂ from fossil fuel combustion in 2007, and in 2010 electricity generation from coal was the largest emitter of GHGs, with coal combustion for electricity accounting for 1,827.3 Tg CO₂ equivalent [EPA. 2012]. Coal
mining, transportation, washing and disposal pose a risk to human health and coal
combustion emissions may damage the respiratory, cardiovascular and nervous systems
[Lockwood, Welker-Hood et al. 2009].
The importance of coal in the future energy mix, its potential environmental impacts,
difficult mining conditions, stringent environmental regulations, strong competition from
other energy sources and depletion of most accessible and low cost reserves have made it
imperative to explore for economic and environmentally friendly alternatives to traditional
coal mining and utilization technologies [Hyder, Ripepi et al. 2011]. One such promising
technology is UCG.
UCG is an alternative to conventional coal mining and involves in-situ burning and conversion of coal into a gaseous product. The gaseous product, or syngas, is largely composed of CH₄, H₂, CO and CO₂ with some trace gases, and its calorific value ranges between 850 and 1200 kcal/Nm³ [Ghose and Paul 2007]. The composition and calorific value of syngas depend upon the specific site conditions and the type of oxidant (air, steam or oxygen), with the typical calorific value (4.0-5.5 MJ/m³) of air-injected syngas doubling with injection of oxygen instead of air [Walker 1999].
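The two calorific-value ranges quoted above can be cross-checked with a simple unit conversion (1 kcal = 4.184 kJ); a minimal sketch in Python, included here only as an arithmetic check:

```python
# Convert a volumetric calorific value from kcal/Nm3 to MJ/Nm3.
KCAL_TO_MJ = 4.184e-3  # 1 kcal = 4.184 kJ = 0.004184 MJ

def kcal_per_nm3_to_mj(cv_kcal_per_nm3):
    return cv_kcal_per_nm3 * KCAL_TO_MJ

low, high = kcal_per_nm3_to_mj(850), kcal_per_nm3_to_mj(1200)
print(round(low, 2), round(high, 2))  # 3.56 5.02 -- broadly consistent
# with the 4.0-5.5 MJ/m3 range quoted for air-injected syngas
```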
UCG has great economic and environmental benefits when compared to conventional coal
mining, surface gasification processes and even coalbed methane drainage procedures
[Meany and Maynard 2009]. In the gasification process ash and heavy minerals remain
underground and do not report to surface [Fergusson 2009], thus resulting in decreased
waste management cost and related infrastructure. The gasification process requires
a certain amount of water to facilitate the chemical reaction [Ag Mohamed, Batto et al. 2011],
which results in minimal mine water recovery. Requirement of smaller surface area and
reduced surface hazard liabilities after abandonment add to the environmental edge of this
method over other coal exploiting technologies [Creedy, Garner et al. 2001]. The
elimination of conventional mining greatly reduces the environmental problems associated
with dirt handling and disposal, coal washing and fines disposal, coal stocking and
transportation, thereby resulting in a smaller surface footprint [Creedy, Garner et al. 2001].
During the burning process, UCG not only consumes the coal in the strata but also the
entrapped coalbed methane present in the strata. This gives an added advantage to UCG
over other coal exploitation methods, where entrapped methane has to be drained either
through the ventilation system or by venting to the atmosphere [EPA. 2010]. As reported
by EPA, the methane emissions from natural gas systems were 6.6 Tg in 2010 and for coal
mines the figure was 4.9 Tg [EPA. 1999a].
Like all other technologies, UCG poses some environmental risks if the operations are not managed adequately. The major environmental concerns of this technology are ground
water contamination and surface subsidence. In the gasification process a number of
organic and inorganic compounds including phenols, polycyclic aromatic hydrocarbons,
benzene, ammonia, sulfides, carbon dioxide and carbon monoxide are generated that can
migrate out of the reaction zone and contaminate the surrounding water [Burton,
Friedmann et al. 2006]. The cavities created by UCG resemble longwall panels and leave the overlying rocks and strata unsupported. This unsupported mass will
gradually settle or subside and the effect can reach the surface depending upon the size of
cavity, type of strata, depth of coalbed and strength of surrounding rocks. The impacts of
subsidence include damage to surface structures and facilities like roads, pipes and
buildings, loss of agricultural land through formation of surface fissures, changes in ground
slope and surface drainage and hydrological changes including changes in water quantity,
quality and flow paths [Blodgett and Kuipers 2002].
The syngas produced by UCG contains a component of vaporized or produced water that
may contain residual hydrocarbons, benzenes, phenols and polycyclic hydrocarbons
[Moorhouse, Huot et al. 2010]. These water vapors need to be removed before combusting
the gas in a power plant. If mixed with surface water streams and channels, this water has
the potential to contaminate them. This water vapor is, however, fully treatable, and industries have been treating these products for about 60 years [Moorhouse, Huot et al.
2010].
The atmospheric emissions from the UCG process include emissions during the process and
emissions during the transport and use of syngas. Combustion of the product gas and its transport to other locations produce harmful pollutants; however, the actual UCG process itself does
not contribute criteria pollutants to the atmosphere [Ag Mohamed, Batto et al. 2011]. The
main emissions from UCG include CH₄, CO₂, CO, H₂, S, organic N₂, H₂S and NH₃; however, the
pollutants can be separated from the product gas using proven technologies like cyclones,
bag-house filters and electrostatic precipitators [Creedy, Garner et al. 2001; Ray, Panigrahi
et al. 2010].
The potential environmental advantages and possible impacts of UCG theoretically
establish this technology as environmentally superior to other coal-exploiting technologies; however, a detailed quantitative analysis of environmental impacts is needed to verify these superiority claims. This can be achieved through Life Cycle
Assessment (LCA) of competitive technologies. LCA is a tool used for assessment of
potential environmental impacts and resources used throughout the life cycle of a product
including raw material acquisition, production, use and final waste management phase
including both disposal or recycling [Finnveden, Hauschild et al. 2009]. The term product
includes both goods and services. The LCA helps in quantifying the impacts of a product or
service on different environmental categories including resource utilization, human health
and natural ecological systems and assists in identifying the opportunities to improve
environmental impacts of a product during its life cycle, better strategic planning and
product marketing through quantification of different impacts [ISO. 2006].
In this chapter, the life cycle of UCG from gasification to utilization for electricity generation
is analyzed and compared with the coal extraction through conventional coal mining and
utilization of coal in power plants. The comparison of life cycle GHG emissions of coal
mining and gasification and power generation through conventional pulverized coal fired
power plants, supercritical coal fired (SCPC) power plants and integrated gasification
combined cycle plants for coal (Coal-IGCC) and UCG (UCG-IGCC) is made. The results of this
analysis and comparison of various impacts are discussed in this chapter.
8.2 Methodology
A series of international standards guide the LCA practices. These standards are published
under the umbrella of ISO-14040 series and provide basic guidelines for conducting LCA. In
addition to these standards, there are a number of practical guidelines and professional
codes developed to assist in conducting LCA such as SETAC code of practice, guidelines for
environmental LCA from the Netherlands (CML/NOH 1992), the Nordic countries (Nord
1995), Denmark (EDIP 1997) and the U.S. (US-EPA 1993) [Baumann and Tillman 2004].
This chapter follows the guidelines of ISO 14040 series.
LCA generally has four steps: goal and scope definition, inventory analysis, impact assessment, and interpretation, termed improvement assessment by some practitioners [DEAT. 2004]. The life cycle assessment includes all the technical systems,
operations, processes, inputs and outputs of natural resources, energy, waste, emissions
and transportation required for raw material extraction, production, use and after use of
the products [DEAT. 2004]. The phases of LCA are iterative and repetitive as depicted by
the model of LCA phases in ISO 14040, shown in Figure 8.1 below [ISO. 2006].
8.3 Goal and Scope Definition
The goal of this study is to compute life cycle greenhouse gas emissions from electricity
generation using coal as the primary source, through the following coal-based generation
alternatives:
• Conventional coal-fired generation through pulverized coal combustion (PCC) plants, representing the average emissions and generation efficiency of currently operating coal-fired power plants
• Generation through supercritical pulverized coal-fired (SCPC) power plants, representing advanced technology at increased efficiency
• Generation through integrated gasification combined cycle (IGCC) coal-fired power plants, i.e. Coal-IGCC, and
• Generation through integrated gasification combined cycle (IGCC) plants using syngas derived from underground coal gasification, i.e. UCG-IGCC
The six main gases categorized as greenhouse gases (GHGs) under the Kyoto Protocol are carbon dioxide (CO₂), methane (CH₄), nitrous oxide (N₂O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs) and sulfur hexafluoride (SF₆) [United Nations 1998]. Of these six GHGs, the emissions of only three (CO₂, CH₄, and N₂O) are quantified in this LCA, as emissions of the other three (SF₆, PFCs, HFCs) are comparatively negligible in the
processes of raw material extraction, electric energy generation, fuel combustion and
fugitive losses [PACE 2009].
8.4 Functions and Functional Unit
UCG can be utilized for various purposes including power and electricity generation,
hydrogen production, iron reduction, and as a chemical feedstock for a variety of chemical
products like ethylene, acetic acid, polyolefin, methanol, petrol and synthetic natural gas
[Anon 1977; Burton, Friedmann et al. 2006; Yang, Zhang et al. 2008; Courtney 2009; Zorya,
JSC Gazprom et al. 2009]. Similarly, coal has various uses including electricity generation,
steel production, cement manufacturing and as a liquid fuel [WCA. 2011]. However, to
provide a common basis for comparing greenhouse emissions from each system, only the
electricity generation is analyzed for each system.
The functional unit measures the performance of functional outputs of the systems by
providing a reference to which the inputs and outputs are related and is a quantitative
description of the performance of the system(s) in the study [Rebitzer, Ekvall et al. 2004;
ISO. 2006]. In this study, the objective is to analyze the amount of greenhouse gas
emissions produced by each system; therefore, the functional unit is the amount of carbon dioxide equivalent produced per megawatt hour of electricity generation, or kgCO₂e/MWh. This functional unit provides a common basis for comparing the systems under study.
Carbon dioxide equivalency, or CO₂e, for different gases is based on the Global Warming
Potential (GWP) of these gases. GWP is a relative measure of the amount of heat trapped by
a certain mass/volume of a gas compared to the amount of heat trapped by the same
mass/volume of carbon dioxide over a discrete time interval [Fulton, Mellquits et al. 2011].
The time interval is generally 20, 100 or 500 years. The Intergovernmental Panel on
Climate Change (IPCC) in 2007 estimated the GWP for methane to be 25 times greater than
that of CO₂ over a 100-year timeframe and 72 times greater than that of CO₂ over a 20-year timeframe, whereas for nitrous oxide (N₂O) these values are 298 for the 100-year and 289 for the
20-year timeframe [Forster, V. Ramaswamy et al. 2007]. There is a highly polarized debate
over the use of 20-year or 100-year timeframe and which source of GWP factors be applied
especially in the case of methane [Hughes 2011]. For example, Shindell et al. have
estimated the GWP values for methane to be 33 and 105 for 100-year and 20-year
x
timeframes respectively and -560 for NO over a 20-year timeframe, based on calculations
including interactions between oxides and aerosols, thus giving a substantial net cooling to
NOx emissions [Shindell, Faluvegi et al. 2009]. Howarth et al. prefer the use of estimates by
Shindell et al. in the calculation of GHG emissions of shale gas production in the U.S.
[Howarth, Santoro et al. 2011]. However, the proponents of natural gas generally decline
both the use of 20-year time frame and the use of higher GWP values [Hughes 2011].
For this analysis, the GWP values estimated by the IPCC in 2007 for the 100-year timeframe, i.e. 25 for methane and 298 for N₂O, are used [Forster, V. Ramaswamy et al. 2007].
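Converting an emission inventory to the CO₂-equivalent functional unit used in this chapter is a weighted sum over the GWP factors. A minimal sketch in Python using the IPCC AR4 100-year factors (CH₄ = 25, N₂O = 298); the inventory values below are illustrative, not results from this study:

```python
# 100-year global warming potentials (IPCC AR4 values assumed here).
GWP_100 = {"co2": 1.0, "ch4": 25.0, "n2o": 298.0}

def co2_equivalent(inventory_kg):
    """Weighted sum of gas masses (kg) -> kg CO2-equivalent."""
    return sum(GWP_100[gas] * mass for gas, mass in inventory_kg.items())

# Illustrative per-MWh inventory (NOT data from this study):
example = {"co2": 900.0, "ch4": 2.0, "n2o": 0.01}
print(round(co2_equivalent(example), 2))  # 952.98
```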
car carrying 115.3 tons and the average length of haul reaching 836 miles in 2009
[Association of American Railroads 2011].
In this study, coal transportation through railroad, trucks and barges is considered. The
rail accounted for 75% of coal transportation, while trucks and barges accounted for 15%
and 10% respectively.
The average haul-distance for delivery of coal to the power plant is 836 miles, the average
length of haul for the U.S. class-I freight rail transporting coal [Association of American
Railroads 2011]. This distance comes to 1,672 miles for a round trip. The train has 100
cars and 2 locomotives and delivers about 11,600 tons per trip. The U.S. average value was 64.2 tons per carload for class-I freight rail services in 2009 [DOT. 2011b]; however, a
typical coal train is 100 to 120 cars long with each hopper holding 100 to 115 tons of coal
[University of Wyoming 2001]. Average diesel fuel consumption by the train is 0.14 miles
per gallon [DOT. 2011a]. In 2011, U.S. freight railroads moved a ton of freight at an average
of 469 miles per gallon of fuel [Association of American Railroads 2012].
Trucks transport 15% of coal to the power plant. The average payload of truck is 25 tons
and the average fuel economy is 6.1 miles per gallon [Federal Railroad Administration
2009]. The truck travels a total distance of 200 miles round trip for coal delivery.
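From the haul assumptions above, diesel use and combustion CO₂ per delivered ton can be estimated directly. A hedged sketch in Python; the 10.2 kg CO₂ per gallon of diesel is an assumed emission factor introduced here, not a value from this study:

```python
DIESEL_CO2_KG_PER_GAL = 10.2  # assumed diesel combustion factor (kg CO2/gal)

def diesel_gal_per_ton(round_trip_miles, mpg, payload_tons):
    """Gallons of diesel consumed per ton of coal delivered."""
    return (round_trip_miles / mpg) / payload_tons

rail_gal = diesel_gal_per_ton(836 * 2, 0.14, 11_600)  # unit-train assumptions
truck_gal = diesel_gal_per_ton(200, 6.1, 25)          # single-truck assumptions
print(round(rail_gal * DIESEL_CO2_KG_PER_GAL, 1))   # 10.5 kg CO2 per ton by rail
print(round(truck_gal * DIESEL_CO2_KG_PER_GAL, 1))  # 13.4 kg CO2 per ton by truck
```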
8.10 Gas Transportation
For gas transportation, the distribution network for natural gas is assumed, as there is no
gas pipeline available solely for UCG. It is assumed that gas is transported through a
distance of 300 mile via long distance natural gas pipeline. The emissions associated with
the transportation of gas are those in the database of SimaPro software for the long
distance, natural gas pipeline. The data includes emissions and energy requirement for the
transport of average natural gas in the long distance gas transportation network using
average compressor station. The data for emissions is from 1994 and for energy
requirements is from 2001. Although this data is not completely representative of
transport for UCG, it nevertheless gives a reasonable estimate of energy requirements and
emissions.
8.11 Sources of Data Acquisition
This chapter used several sources of data, including journal articles, government documents, published reports, conference papers, websites, government and other agencies' databases, and the built-in database of the SimaPro software.
For the coal component of this study, several excellent reports and papers deal with the life cycle emissions of power generation from coal and provide an excellent source of data. The majority of these life cycle studies compare coal and natural gas power
generation systems [Spath, Mann et al. 1999; Ruether, Ramezan et al. 2004; Jaramillo,
Griffin et al. 2005; Jaramillo 2007; Jaramillo, Griffin et al. 2007; Dones, Bauer et al. 2008;
PACE 2009; DiPietro 2010; Draucker, Bhander et al. 2010; Reddy 2010; Donnelly, Carias et
al. 2011; Fulton, Mellquits et al. 2011; George, Alvarez et al. 2011; Hughes 2011; McIntyre,
Berg et al. 2011; Skone 2011].
The government databases, reports and websites that provide useful data for this analysis
include the U.S. Department of State, Department of Energy, Department of Transportation,
National Energy Technology Laboratory, Environmental Protection Agency, Energy
Information Administration, International Energy Agency and several others.
For the fugitive methane emissions, EPA provides very useful data for both coal mining and
gasification processes [EPA. 2012].
For UCG, the major data source is the Chinchilla project in Australia. Several papers, reports
and evaluations provide data for this project. The gas transportation data is used from the
built-in database of SimaPro.
8.12 Data Accuracy and Limitations
Since several sources are used for data collection, it is very difficult to ascertain the level of data accuracy. The data collected from different reports, studies, databases and websites has varying levels of accuracy. The government databases provide reasonably accurate data and, whenever possible, were the primary sources of data. The peer-reviewed papers
and government reports are given preference for data collection. The database provided
with the SimaPro software provides a good source for relatively accurate data. Careful
consideration is given to obtaining the most accurate, representative and latest data. However, where accurate and up-to-date data is not available from primary sources, the second most relevant and accurate data source is used, relaxing the geographical constraints. For
example, in case of gasification, the accurate and up-to-date data for UCG projects in the
U.S. is not available; therefore, data available for the Chinchilla project (the latest available
source of UCG data) is used. Thus, the comparison of coal production and utilization in U.S. power plants to gas production and utilization in Australia, though not strictly consistent in the sense of geography and data, provides a tolerable basis for analysis as long as no hard conclusions are drawn.
The inherent data source uncertainties and variations in accuracy levels, especially in the case of UCG, dictate that no strong conclusions be drawn from this analysis for small differences in the life cycle emissions. The results reported here are not intended for commercial utilization or
ecological claims. They provide the basic comparison for relative GHG impacts of different
technologies and highlight the impacts of different stages for improvement in the
methodology and technological alternatives.
8.13 Models
The following four cases are modeled in SimaPro for analysis.
8.13.1 Pulverized Coal Combustion (PCC) Plants
The pulverized coal combustion system is the basic method for thermal power generation.
In this method, coal is first ground to a very fine powder, which is then combusted to release heat. This heat is then utilized to generate steam that runs the large turbines
for electricity generation. The average plant consists of pulverized coal boilers, baghouse
filter, flue gas cleanup system, heat recovery steam generators and steam turbines [Spath,
Mann et al. 1999]. NOx emissions and unburned carbon are the most problematic pollutants for this system [Kurose, Makino et al. 2003]. Figure 8.4 shows the general processes involved
in the life cycle of a coal-fired power plant.
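Stack CO₂ per MWh for a plant of this type follows from the coal's heating value, the plant efficiency and the coal's carbon content. A short sketch in Python, where the 35% efficiency and 66% carbon fraction are illustrative assumptions introduced here rather than data from this study:

```python
def combustion_co2_per_mwh(cv_mj_per_kg, efficiency, carbon_fraction):
    """Approximate kg of CO2 emitted at the stack per MWh generated."""
    coal_kg = 3600.0 / (cv_mj_per_kg * efficiency)    # 1 MWh = 3600 MJ
    return coal_kg * carbon_fraction * (44.0 / 12.0)  # C -> CO2 mass ratio

# Illustrative PCC case: 26.4 MJ/kg coal, 35% efficiency, 66% carbon by mass
print(round(combustion_co2_per_mwh(26.4, 0.35, 0.66)))  # 943 kg CO2/MWh
```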
8.13.3 Integrated Gasification Combined Cycle (Coal-IGCC) Plants
In the IGCC plants, coal is first converted into a gaseous product through a surface gasifier.
This gas is then purified and combusted for electricity generation in a combined cycle
turbine. Gas cleaning removes the SOx and NOx impurities, thus reducing their
emissions load. Waste heat from the turbine is used to drive a steam turbine through a
combined cycle system. The combined cycle improves the overall efficiency of the system.
Typical efficiencies for IGCC are in the mid-40 percent range; however, efficiencies around 50% are
achievable [WCA. 2012]. For this analysis, a higher efficiency for the IGCC plant is used so
that the comparison can be made between the efficient IGCC plants and UCG-IGCC. Table
8.3 shows the data used for coal IGCC plant.
Table 8.3: Data used for Coal-IGCC plant
Data for Coal-IGCC Plant
Calorific value of coal 26.4 MJ/kg
Plant efficiency 42%
Plant Capacity 425 MW
Operating capacity factor 60%
Coal haul losses 5%
Conversion factor 1 MJ/kg = 238.8 kcal/kg
Coal requirement 761,632 tons /year
Rail transport distance 836 miles
Truck transport distance 200 miles
Barge distance 250 miles
Rail load 571,225 ton/year
Truck load 76,163 ton/year
Barge load 114,245 ton/year
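The annual coal requirement in Table 8.3 can be approximately reproduced from the other entries. A sketch in Python, where treating the tons as metric and assuming 8,760 operating hours per year are simplifications introduced here:

```python
def annual_coal_tons(capacity_mw, capacity_factor, efficiency,
                     cv_mj_per_kg, haul_loss_fraction):
    """Tonnes of coal needed per year, grossed up for haul losses."""
    net_mwh = capacity_mw * capacity_factor * 8760  # annual net generation
    fuel_mj = net_mwh * 3600.0 / efficiency         # thermal energy input
    return fuel_mj / cv_mj_per_kg / 1000.0 * (1.0 + haul_loss_fraction)

coal = annual_coal_tons(425, 0.60, 0.42, 26.4, 0.05)
print(round(coal))  # close to the 761,632 tons/year in Table 8.3
```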
of GHG emissions. The byproducts generated during this phase can be utilized for
production of other chemicals, but are not included or credited in this analysis because comparable data has not been included for the other power generation options. More than 90% of
the emissions are from electricity generation in the power plants. Although, there is a great
advancement in the technologies that curtail the GHG emission from power plants, there is
a continued need of research in this area.
Table 8.6 also shows that the emerging or latest technologies have achieved considerable reductions in GHG emissions in almost every aspect of the electricity generation life cycle. UCG is very comparable to these latest technologies and, in fact, the
GHG emissions from UCG are about 28% less than the conventional PCC plant. When
combined with the economic superiority, UCG has a clear advantage over competing
technologies. Figure 8.10 shows the percent reduction in the GHG emissions when taking
PCC as a base case. The comparison shows that there is considerable reduction in the GHG
emissions with the development of technology and improvements in generation
efficiencies.
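The base-case comparison shown in Figure 8.10 is a straightforward percent-change calculation. A sketch in Python with purely illustrative life-cycle totals (the actual values from Table 8.6 are not reproduced here):

```python
def reduction_vs_base(base_kg_per_mwh, case_kg_per_mwh):
    """Percent reduction in life-cycle GHG emissions relative to a base case."""
    return 100.0 * (base_kg_per_mwh - case_kg_per_mwh) / base_kg_per_mwh

# Illustrative totals in kg CO2e/MWh -- NOT the study's results:
totals = {"PCC": 1100.0, "SCPC": 850.0, "Coal-IGCC": 800.0, "UCG-IGCC": 790.0}
for name, total in totals.items():
    print(name, round(reduction_vs_base(totals["PCC"], total), 1), "%")
```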
Figure 8.10: Comparison of percent GHG emissions with PCC as base case
Figure 8.11 shows the total life cycle GHG emissions for different generation technologies.
The Coal-IGCC and UCG-IGCC are almost equal in total GHG emissions; however, for this analysis a higher efficiency for Coal-IGCC was used. No carbon capture was taken into account for any technology. Carbon capture, though it reduces carbon emissions from combustion of syngas, decreases the efficiency of IGCC plants. Figure 8.12 shows the contribution by different
components of the life cycle to the total GHG emissions. The emissions are presented in kgCO₂ equivalent/MWh of electricity generation. Electricity generation is the major contributor to the total GHG emissions load of the life cycle. Figures 8.13 to 8.16 show the contribution of various substances to the total GHG load for 1 MWh of electricity generation in each plant. It is clear from these figures that the major contributor to GHG emissions is CO₂ itself, followed by methane. Other GHGs are present in trace amounts. The method used for this characterization is based on the 2007 IPCC estimates for the global warming potential of GHGs over a 100-year timeframe.
8.16 Chapter Conclusions
Because of some uncertainties in data, variability in the sources of data and the fact that
data availability is currently limited for commercial applications of UCG, it is difficult to
derive hard conclusions especially when the differences for the life cycles are relatively
small. However, this analysis provides a clear picture of the impacts of various technologies
and helps in highlighting the areas for improvement of process or processes. This analysis
also highlights the fact that improvements in the technologies to reduce the life cycle
emissions from coal generation and utilization are yielding good results. GHG emissions are about 30% to 40% lower from the latest plants (both IGCC and ultra-supercritical pulverized combustion) than from conventional PCC plants. UCG is competitive
with the latest technologies and has distinct environmental advantages. This analysis
shows that UCG has a distinctive place when comparing the technologies for coal resources
development based on environmental performance. This technology results in the
reduction of greenhouse emissions load of coal’s life cycle and provides opportunities for
development of coal resources in an environmentally friendly and sustainable manner.
176 |
Virginia Tech | Chapter 9 - Research Conclusions and Future Work
9.1 Research Conclusions
Underground coal gasification has the potential to harness energy from low grade, deep
seated, steeply inclined and thin coal seams in an economic, environmentally friendly and
sustainable manner. This technology can be applied to abandoned coal mines, remnants of
exploited reserves and deposits considered uneconomic and technically difficult for
conventional mining methods. Commercial utilization of this technology can help in
increasing the recoverable coal reserves and reducing the environmental impacts of mining
coal. This technology, in addition to other advanced technologies, can promote the future of
clean coal and help in sustaining the coal mining industry.
In this study, the operational parameters of UCG technology were analyzed to determine
their significance and to evaluate the effective range of values for proper control of the
process. The study indicates that cavity pressures, gas and water flow rates, development
of linkage between wells, and continuous monitoring are the most important operating
parameters. The availability of sophisticated equipment, the latest machinery and
advancements in drilling technology have helped in overcoming the problems of linkage
development between process wells, drilling in-seam horizontal wells of required size and
configuration, and control of flow rates and gas pressures. State-of-the-art monitoring
equipment, very accurate and reliable software and dependable online systems have made
it possible to extensively monitor, even remotely, the cavity growth, gas flows and
pressures, gas quality, and environmental parameters such as water quality, inflow and
outflow of contaminants.
The selection of suitable sites for UCG projects was also researched in this study. Past
experiments and pilot studies suggest that proper site selection is one of the most
important factors in the failure or success of UCG projects. Therefore, site selection criteria
are developed in this research based on successes and failures of previous experiments and
pilots. The criteria take into account the site characteristics, coal quality parameters,
hydrology of the area, availability of infrastructure and regulatory and environmental
restrictions on sites. These criteria highlight the merits and demerits of the selected
parameters, their importance in site selection and their economic and environmental
potentials.
Based on the site selection criteria developed in this research, a GIS model was developed
to assist in selecting suitable sites for gasification in any given area of interest. The GIS
model is a very helpful tool for selecting suitable sites. This GIS model can be used as a
decision support tool as well since it helps in establishing the tradeoff levels between
factors, ranking and scaling of factors, and, most importantly, evaluating inherent risks
associated with each decision set. The complete procedure for use and development of this
model is explained in detail so that anyone interested in the application of this model will
find no difficulty in understanding the various steps involved.
The potential of UCG to conform to different frameworks defined to assess the capability
and potential of any project that merits the label, “sustainable,” has been evaluated in this
research. It has been established that UCG can integrate economic activity with ecosystem
integrity, respect for the rights of future generations to the use of resources and the
attainment of sustainable and equitable social and economic benefits. The important
aspects of UCG that need to be considered for its sustainable development are highlighted.
The environmental benefits of UCG have been evaluated in terms of its potential for
reduction in greenhouse gas (GHG) emissions. The findings indicate that UCG significantly
reduces GHG emissions compared to other competitive coal exploiting technologies. In this
research, a model to compute the life cycle greenhouse emissions of UCG has been
developed, and it reveals that UCG has distinctive advantages in terms of GHG emissions
over other technologies and competes favorably with the latest power generation
technologies. In addition to GHG emissions, the environmental impacts of these
technologies based on various impact assessment indicators are assessed to determine the
position of UCG in the technology mix. It is clear from the analysis that UCG has prominent
environmental advantages and has the potential to develop and utilize coal resources in an
environmentally friendly and economically sound manner. However, a dedicated effort
requiring both government and the private sector to promote further research and
development for this technology is needed to establish its commercial potential, especially
in the U.S.
9.2 Future Research
Several aspects of UCG need research before its commercialization; however, during the
course of this research, two areas for further exploration came into focus: the synergy of
UCG and Coalbed Methane (CBM) modules and the application of UCG to gasify multiple
seams using the same wells.
CBM is extracted through a network of wells that can be used for gasification of coal seams
especially after the wells have been abandoned. The coal seam in the area of CBM wells is
generally highly fractured because of the application of hydrofracturing for enhancing
methane drainage. This enhanced fracturing can help in creating linkage between process
wells. However, the problem of gas flow and contaminant dissemination requires further
study. In addition, the control of cavity pressures, cavity development and the inflow of
water can be challenging. The consideration of existing infrastructure of wells, however,
can reduce capital costs greatly and lead to a more competitive cost of product gas. This
will require extensive research to assess strata conditions, coal properties and stress
regimes in the area, and extensive field experimentation and pilot scale studies to
determine the economic and operational viability of this proposal are needed. The GIS
model developed in this research will be a helpful tool when selecting sites considering the
existing well structures.
Further research is also required to gasify multiple coalbed seams in the same area using
the same well infrastructure. In this case, the flow of gas and well infrastructure needs to
be controlled in such a way that injected gases reach all the target seams and product gases
are collected at the production wells from each seam. However, field experimentation and
pilot scale studies are needed. The schematic of the concept is shown in the Figure 9.1
below.
Virginia Tech | Application of Background Oriented Schlieren
(BOS) in Underground Mine Ventilation
Edmund Chime Jong
Abstract
The schlieren technique describes an optical analysis method designed to enhance
light distortions caused by air movement. The ability to visualize gas flows has
significant implications for analyzing underground mine ventilation systems. Currently,
the widely utilized traditional schlieren methods are impractical underground due to
complex equipment and design requirements. Background oriented schlieren (BOS)
provides a solution to this problem. BOS requires two primary components: a
professional quality digital camera and a schlieren background. A schlieren background
is composed of a varying contrast repetitive pattern, such as black and white stripes or
dots. This background allows the camera’s sensor to capture the minor light diffractions
that are caused by transparent inhomogeneous gases through image correlation. This
paper investigates a possible means of mitigating some of the major problems associated
with surveying underground mine ventilation systems with the BOS method.
BOS is an imaging technique first introduced in 1999 that allows the visualization
of flowing inhomogeneous transparent media. In ventilation surveys, BOS can be used to
attain qualitative data about airflows in complex areas and methane emissions from coal.
The acquisition of such data would not only enhance the understanding of mine
ventilation but also improve the accuracy of ventilation surveys. As an example, surveys
can benefit from small scale BOS investigations around fans, regulators, overcasts, and
critical junctions to identify effective data gathering positions. Regular inspections of
controls and methane monitoring points could also be improved by the systematic nature
of BOS.
Computer programs could process images of each location identically regardless
of quantity. BOS can then serve as a check to identify items that were overlooked during
the routine inspection. Despite the potential of BOS for ventilation analysis, several
limitations still exist. These issues are the sensitivity threshold and the quantification of flow
data. This paper specifically examines the qualitative potential of the BOS technique for
imaging various underground ventilation flows and outlines initial experimental efforts
used for the evaluation.
Three primary experiments were conducted to evaluate BOS as a potential
qualitative analysis technique for underground mine ventilation. The first experiment
used BOS to image the flow induced by an axial vane fan and an axial flow fan using an
artificial background and an imitation rock background. This experiment showed that the
BOS system was unable to image isothermal airflow from either fan. Heated airflow
could be visualized with both fans using the artificial striped background but not with the
imitation rock background. The BOS system lacked the sensitivity necessary to image
isothermal airflow from the two fans. The focus of the overall BOS study was changed
to explore higher pressure airflows through a regulator.
The second experiment used BOS to image flow through a regulator induced by
an axial flow fan using an artificial striped background. The BOS images were compared
to images produced by a traditional single mirror schlieren system for validation of the
BOS experimental design. This experiment was unable to image isothermal airflow
through the regulator from either system. However, heated airflow could be visualized
by both systems. The BOS and traditional schlieren systems used in this experiment
lacked the sensitivity necessary to image isothermal airflow through a regulator.
However, the BOS procedures were successfully validated by the ability of both the BOS
and traditional schlieren systems to image heated airflows. The focus of the study was
changed to explore methane gas emissions.
Numerous mining industry techniques already exist to quantify methane content.
However, methane content is different from the actual methane emission rate of exposed
coal. Emission rates have been modeled using numerical simulation techniques, but the
complexity of the methane migration mechanism still requires physical data to achieve
higher accuracy. The third experiment investigated the feasibility of using the BOS
technique for imaging methane flow by imaging methane emission from a porous
medium. Laboratory grade methane was directly injected into a Berea sandstone core
sample using a flexible tube.
The BOS system was successfully able to image methane desorption in this study.
A repeating pattern consisting of alternating black and white stripes served as the
schlieren background for the Nikon D700 camera. The ability to image methane
emission even at low injection pressures (i.e. 20 psi) demonstrates that actual methane
desorption from coal can potentially be imaged. This result can only be conjectured
because of a lack of research in the area of methane emission. Despite this issue, the
experimental results suggest that BOS can be feasibly utilized to image methane
emissions from coal in an underground mine.
The results of the three experiments demonstrated that the potential for large scale
implementation of BOS in underground mines does exist. Qualitative BOS information
has practical potential to optimize the procedures of ventilation surveys
and the design of ventilation monitoring equipment. For example, images of methane flow
in active mining areas can be used to optimize the positioning of auxiliary ventilation
equipment to dilute known areas of high methane concentration. BOS images could also
be used to re-evaluate the placement of methane monitors on mining equipment to better
facilitate the detection of dangerous methane concentrations in active mining areas. For
these reasons, further investigation into the BOS technique for use in imaging
underground airflows with differential temperatures and methane emissions in
underground coal mines is suggested as an addendum to this study.
Chapter 1: Introduction
Mine ventilation is an essential element of underground mining. Ventilation
systems provide fresh air to workers, carry harmful gases out of the mine, and assist in
dust suppression. The ventilation system must be maintained in optimal running order to
achieve these tasks. Ventilation systems are kept to such a degree by maintaining air
ways, ventilation fans, and ventilation controls. Ventilation fans include main mine fans
as well as booster fans. Ventilation controls include stoppings, which separate and guide
airflow, as well as regulators, curtains, and ducting, which split airflow in a controlled manner [1].
These components provide the necessary amount of ventilation to all areas of the mine.
Successful ventilation is achieved with the cohesive functioning of fans and controls
operating at design specifications. If a single element malfunctions, the effectiveness of
the ventilation system can be drastically affected. Design and placement of mine
ventilation systems are mostly determined and monitored from ventilation survey data.
Mine ventilation surveys involve the collection of data in key areas of the mine.
These surveys are designed to check air velocities, air quantities, pressures, and other
such characteristic data. Once complete, survey data are used to generate and validate a
ventilation model of the mine [1]. These models are used to evaluate the effectiveness of
the mine ventilation system, plan for mine expansion, and prepare for future ventilation
changes. The model’s degree of accuracy depends on the quality of the survey data.
Unfortunately, fully representative data are difficult to achieve due to the complexity of
underground airflow patterns. For example, when data are gathered in intricate ventilation
branches, such as at longwall tailgate T-splits, the measurement position
becomes significant due to the variability in velocities across the branch’s cross-section.
In addition, the dynamic nature of mines, including geologic conditions,
equipment operations, personnel movements, and atmospheric changes, creates other
sources of error in ventilation data. The improper sealing of an air lock door or even the
movement of a hoist may interfere with survey results. As a result, the design will reflect
these errors. Error reduction protocols already exist to minimize these problems, but as
the model size and the level of complexity grow, the influence of measurement errors
also increases. Such problems are also seen in other aspects of ventilation surveys.
Surveys assist in the regular maintenance of ventilation systems. Environmental
conditions, such as humidity, dust, ground movements, and water influx, stress the
integrity of ventilation controls. As metal corrodes and concrete degrades, ventilation
controls will inevitably fatigue and fail. Visual inspections and regular maintenance are
currently the most effective means against this problem. Once a minor fault is
discovered, such as a leak, the control can be repaired before a failure occurs. However,
even with regular inspections, minor leaks can be missed due to the sheer volume of
items that must be examined. Although accurate ventilation data and maintenance
inspections are important, an effective ventilation system is not created with these items
alone.
One of the most important goals of mine ventilation is to carry harmful gases out
of the mine. Methane gas is especially a concern due to its explosive potential and
inherent presence in coal mines. Undisturbed coal deposits naturally create a pressure
equilibrium that prevents methane from escaping [2]. Methane can thus be indefinitely
contained within in-situ coal as long as the equilibrium exists. The advancement of
underground mine workings exposes coal to the atmosphere. The resulting pressure
gradient causes methane to be released, or to desorb, from the coal [3]. Regular surveys
are conducted in key areas to detect the accumulation of methane. Many different types
of monitoring equipment have been designed to measure methane concentrations for this
purpose. Despite modern advancements in handheld gas detectors, equipment mounted
monitors, and atmospheric monitoring systems, little is known about the qualitative
aspects of methane desorption. Does methane uniformly flow from exposed coal faces or
do certain areas have concentrated fissures? As coal is excavated by machinery, are there
locations where large pockets of methane desorb at once? Are there excavated surfaces
free of methane desorption? The ability to monitor methane, though much improved
from past techniques, is still hindered by these questions.
This paper investigates a possible means of mitigating some of the major
problems associated with underground mine ventilation surveys: the background oriented
schlieren (BOS) method. BOS is an imaging technique first introduced in 1999 that
allows the visualization of flowing inhomogeneous transparent media [4]. In ventilation
surveys, BOS can be used to attain qualitative data about airflows in complex areas and
methane emissions from coal.

Chapter 2: Literature Review
In order to apply the background oriented schlieren (BOS) technique to the
analysis of underground mine ventilation systems, a theoretical understanding of BOS is
necessary. Ventilation systems are an essential part of underground mining as they bring
fresh air to active mining areas while simultaneously bringing harmful gases out of the
mine. However, maintaining such a system in a highly dynamic subterranean
environment requires constant data acquisition and atmospheric monitoring. This
continuous surveying of ventilation performance is vital to optimizing the overall system
and maintaining a safe working environment. Mine ventilation surveys are completed
using traditional tools such as vane anemometers, altimeters, differential pressure gauges,
and sling psychrometers. These tools are limited by the fact that they can only provide
quantitative glimpses into the target section of the ventilation system.
The BOS technique provides a possible means of expanding the data gathering
potential of ventilation surveys. BOS images allow more flexibility for exploring an
area. If a larger view is needed, the imaging system can be simply positioned to capture
the desired perspective. Although the advantages of BOS are apparent even with a
cursory understanding of BOS, this technique is not without its limitations. For example,
low pressure ventilation flows may be below the sensitivity threshold of the imaging
system. As a result, the BOS technique can only be successfully applied to underground
mine ventilation with an understanding of how light behaves and how BOS was
developed over time.
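The displacement detection that underlies BOS can be illustrated with a minimal one-dimensional sketch (illustrative only and not part of the original study; the striped background, window size, and pixel shift are hypothetical values, and real BOS software operates on two-dimensional image pairs):

```python
import numpy as np

def estimate_shift(reference, distorted, window=32):
    """Estimate the pixel shift between a reference background row and a
    refractively distorted one via simple cross-correlation."""
    ref = reference[:window] - reference[:window].mean()
    best_shift, best_score = 0, -np.inf
    # Slide a window across the distorted row and keep the offset with
    # the highest correlation score against the reference window.
    for shift in range(len(distorted) - window):
        seg = distorted[shift:shift + window]
        score = float(np.dot(ref, seg - seg.mean()))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# Synthetic striped schlieren background (period of 8 pixels).
background = np.tile([0.0] * 4 + [1.0] * 4, 16)
# A schliere that displaces the pattern by 3 pixels.
displaced = np.roll(background, 3)
print(estimate_shift(background, displaced))  # 3
```

In an actual BOS survey, this correlation would be computed in small windows across paired photographs of the schlieren background, producing a displacement field that maps the refractive disturbance.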
2.1 Light
2.1.1 Properties and Behavior
Light is apparent in every aspect of the modern world from the shining of the sun
to the iridescent bulbs in the streets. Light allows everyday actions, objects, and scenes
to be visualized by the human eye. But what exactly is the phenomenon of light? This
question can actually be correctly answered twice. Two main theories about the nature of
light currently exist, the wave theory and the particle theory.
The particle theory was the first conception of how light exists. This theory
describes light as being composed of a collection of discrete elements commonly known
as photons [5]. Since the time of the ancient Greeks, these particles were believed to
travel through space in straight lines and rebound off any object with which they
collided. This theory was later challenged by the introduction of the wave concept [6].
The wave theory describes light as an oscillating wave that travels through
space. This conception of light was first introduced by Christiaan Huygens in the 1600s
in opposition to the particle theory. In 1807, Thomas Young confirmed Huygens’ theory
by projecting light onto a tiny opening in a surface. The light projection expanded as it
exited through the opening. The exiting light waves were found to be subject to
interference from other sources of light. Young also projected light through a small slit
and onto a surface. He discovered that an interference pattern consisting of alternating
light and dark areas appeared. Such behavior would not be exhibited by particles.
Therefore, Young concluded that light was in fact a wave [7].
In the 1900s, Heinrich Hertz, J. J. Thomson, Philipp Lenard, and Albert Einstein
reignited the light debate. Hertz, Thomson, and Lenard performed the earliest
experiments on the photoelectric effect. Although their experiments were each unique
and independent, they used the same basic theory. The experiments showed that once
light was projected onto a surface, extraneous electrons were then emitted. From these
experiments, they demonstrated that electrons were emitted as a result of the impact of
the light [8]. Albert Einstein would later use quantum theory to explain that this behavior
could only be produced if light consisted of discrete quanta, or particles [9]. The validity
of both the wave and particle experiments has resulted in the modern day acceptance of
light as having both characteristics. This concept is commonly known as the dual nature
of light. For the purposes of this review, light will be referred to as a wave [10].
Light is not an ordinary wave. Waves generally require a medium through which
to propagate. Light waves, in contrast, do not require a medium, which allows them to
easily travel through a vacuum. Light travels as oscillating energy in the form of electric
and magnetic fields. These fields oscillate at right angles to the direction of movement
and are oriented at right angles to each other [7]. An example of a light wave can be seen
in Figure 2.1.
Figure 2.1. Light wave shown with oscillating electric and magnetic fields.
Despite this ability to travel without a medium and over great distances, light is
still subject to obstructions. As light comes in contact with different media, such as air
and water, it can react in three primary ways: absorption, reflection, and/or refraction. Absorption,
as the name implies, describes how light is taken into a medium and is retained. As an
example, absorption can be seen when light comes in contact with a black colored
surface. This color absorbs all frequencies of light thus causing the appearance of black
[11]. Reflection describes the behavior of light as it impacts a medium that causes a
redirection of the wave‟s energy in the opposite direction of travel. As an example,
reflection can be seen as light is projected onto a mirror surface. The wave strikes the
surface of the mirror and is then redirected back toward the source of the light.
Refraction occurs when light waves enter a medium, such as water, through which travel
can continue. However, as light enters the new medium, its velocity is affected. The
velocity of light is dependent on the refractive index of each new medium through which
the wave travels. This change in velocity also results in a change of direction, or
bending, of the light wave [7]. For this review, refraction will be the main focus due to
its importance to the BOS technique. The mechanics of light refraction are discussed in
the next section.
2.1.2 Refraction Mechanics
As stated in Section 2.1.1, refraction describes how light waves bend in various
media due to a change in velocity caused by varying refractive indices. This behavior
occurs as light travels from one type of medium to another. As a light wave impacts the
boundary between the different types of media, the phase velocity is altered as a result.
This change in velocity also causes a change in travel direction if the wave does not
strike perpendicular to the medium boundary. This phenomenon is displayed in Figure
2.2.
Figure 2.2. Diagram of a light wave being refracted when traveling
through two different media.
Waves that strike perpendicular to the boundary will result in an alteration of their speed
but not their travel direction. An example of this lack of direction change can be seen in
Figure 2.3.
Figure 2.3. Diagram of a light wave traveling from left to right and
striking the boundary of two different media perpendicular to the
medium boundary.
Refraction is not a random phenomenon and can be represented in a mathematical
manner. The numerical rule that governs refraction was discovered by Willebrord Snell
in 1621 but remained unpublished for most of his career. Snell’s law was finally
mentioned by Christiaan Huygens in his treatise on light [12]. During that same century,
René Descartes was also able to independently derive the same mathematical relationship
discovered by Snell. In 1637, Descartes published the finding in his treatise entitled
Discourse on Method [13]. The independent discoveries made by Snell and Descartes
have allowed this mathematical law to be known by two names, Snell’s law and
Descartes’ law. For the purposes of this review, this law will be referred to as Snell’s
law, and his derivation will be followed.
The basis of Snell’s law was produced with the collection of experimental data in
the form of refraction angles as light traveled through air and water. The refraction
angles were measured from the normal to the interface between the two media. The data,
when graphed, were found to follow a sine wave trend. As a result, a set arithmetic
relationship was discovered. Snell found that the ratio between the sine of the refractive
angle in the first medium and the sine of the refractive angle in the second medium
always equaled the same dimensionless constant. This behavior is represented by
Equation 2.1, where θ1 is the angle of refraction in the first medium, which is also
known as the angle of incidence, and θ2 is the angle of refraction in the second
medium. The angle of incidence and the angle of refraction can be expressed in terms of
degrees or radians [10].

sin(θ1) / sin(θ2) = Constant    (2.1)
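The constancy of the ratio in Equation 2.1 can be checked numerically (a short sketch using assumed textbook refractive indices for air and water, not values from this review):

```python
import math

# Assumed textbook refractive indices for air and water (illustrative only).
n_air, n_water = 1.000, 1.333

ratios = []
for theta1_deg in (10.0, 25.0, 40.0, 55.0):
    theta1 = math.radians(theta1_deg)
    # Snell's law gives the angle of refraction in the second medium.
    theta2 = math.asin(n_air * math.sin(theta1) / n_water)
    # Equation 2.1: the ratio of sines is the same at every incidence angle.
    ratios.append(math.sin(theta1) / math.sin(theta2))

print([round(r, 3) for r in ratios])  # [1.333, 1.333, 1.333, 1.333]
```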
The value of the constant will remain the same regardless of the angles of
incidence and the angles of refraction if the two media also remain unchanged.
Alternatively, the constant will vary as the combination of media is changed. As a result,
Snell concluded that an unknown property of each individual material was responsible for
creating the constant. Through more experimentation, Snell found that the constants of
two different pairs of media were interrelated when one medium was taken from each
pair and combined. This result is demonstrated by the following example. Consider four
media: A, B, C, and D. One dimensionless constant is produced by the pair medium-A
and medium-B, and another by the pair medium-C and medium-D. If Equation 2.2 is
applied to medium-A and medium-C as a pair, then the constant can be represented by
the following equation [10].

(2.2)
Equation 2.2 demonstrates that the constant is dependent on a number that is
unique and stable from one medium to another. Snell termed this unique characteristic as
the index of refraction, which is represented by the dimensionless variable “n”. In order
to truly define the index of refraction, a basis for comparison would need to be created.
Snell established a vacuum as his comparison base by characterizing n = 1 in this
medium. He defined the ratio of the indices of refraction so that the medium with the
smallest angle of incidence or angle of refraction would have the larger index of
refraction.

n = X / v    (2.4)
According to Snell’s definition, the refractive index of a vacuum has a value of
one. The speed of light in a vacuum is defined by the variable “c”, which equates to
299,792,458 m/s [14]. If these values are substituted into Equation 2.4, the following
equation is produced.

1 = X / c    (2.5)

The solution for “X” is now apparent in Equation 2.5. Solving for “X” gives X = c.
When this value is substituted back into Equation 2.4, the refractive index can then be
defined as the ratio of the speed of light in a vacuum to the speed of light in the medium.
This relationship is displayed by the following equation where “v” is the speed of light in
the medium in m/s.

n = c / v    (2.6)
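Equation 2.6 can be verified with a short numerical sketch (the water velocity below is an assumed, rounded textbook figure, not a value from this review):

```python
# Speed of light in a vacuum (m/s).
C = 299_792_458.0

def refractive_index(v):
    """Equation 2.6: n = c / v, where v is the speed of light in the medium (m/s)."""
    return C / v

# In a vacuum, light travels at c, so Equation 2.6 recovers n = 1 exactly.
print(refractive_index(C))                  # 1.0
# Light slows to roughly 2.25e8 m/s in water, giving n close to 1.33.
print(round(refractive_index(2.25e8), 3))   # 1.332
```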
The information presented in this section provides the
necessary background for the overview of the schlieren effect.
2.2 Schlieren Effect
2.2.1 Early Schlieren and the Schlieren Effect
The schlieren effect is a succinct name to describe refractive gradient disturbances
caused by inhomogeneous transparent media. The disturbance itself can also be referred
to as a schlieren or a schliere. These gradient disturbances cause light to be uniquely
refracted as it travels through transparent media. Refraction can occur in any single
dimension or any combination of the three dimensions. All forms of transparent media
will cause a gradient disturbance, though only a select few can be seen with the human
eye [15]. Two examples of the schlieren effect that can be observed without artificial
augmentation are heat rising from asphalt or exhaust exiting from jet engines. Although
the term is esoteric by its very nature, the schlieren effect is by no means a new concept.
This phenomenon had already been discovered, studied, and analyzed in the 17th
century by Robert Hooke [16]. Hooke is widely considered to be the father of transparent
inhomogeneous media based optical analysis [17]. His first observation of a schliere was
of a candle against a light-dark background. Hooke noted that a disturbance was being
produced by the candle’s thermal air plume. He found that the air above the flame
seemed to be “wavering” when viewed through the disturbance. This observation
prompted him to continue his experiments into the optics of inhomogeneous transparent
media. Hooke eventually published his findings in Micrographia, a book discussing his
work with microscopy, telescopy, optical shop testing, and other optics related subjects.
In Micrographia, Hooke thoroughly discusses the subject of density variation based light
refraction in the atmosphere and in liquids [18]. This phenomenon has come to be known
as the schlieren effect.
The visual discrepancies resulting from the candle’s thermal plume are caused by
refracting light rays as they travel though the density gradient created by the heated air.
The varying densities simultaneously produce a refractive index gradient. An observer
translates the refracted rays into a visual distortion, which is the essence of the schlieren
effect. In order to build upon his observation, Hooke continued on to develop the first
artificial schlieren observation system. His single lens imaging system was designed to
enhance the schlieren effect for more detailed observation. Hooke‟s new system replaced
the light-dark background with a convex lens [19]. Two candles were used in this new
system. One candle provided the light source and the other provided the observation
target [18]. A diagram of this system can be seen in Figure 2.5.
Figure 2.5. Robert Hooke’s single lens schlieren imaging system [18].
Hooke’s observations were reproduced and slightly improved upon by Christiaan
Huygens about a decade later [20]. Despite Hooke and Huygens’ novel schlieren
discoveries, this field would see little advancement due to the lack of interest in imaging
transparent inhomogeneous media in the 17th century [16]. The next few centuries,
however, would produce great advancements for imaging the schlieren effect.
The different methods for imaging a schlieren can also be referred to as schlieren
techniques. Works by Jean Paul Marat, J. B. Leon Foucault, August Toepler, and Ernst
Mach propelled schlieren optical imaging into the 20th century [16]. They adapted
Hooke’s fundamental principles of schlieren imaging to function with new optical
technologies. Techniques for creating elaborate imaging systems composed of precision
manufactured lenses and mirrors were beginning to appear. These various studies would
combine to form the modern day schlieren techniques, which are widely used in the
aerospace industry. The most important contributions to modern day schlieren imaging
are credited to Foucault and Toepler, who will be discussed in the following section [21].
2.2.2 Development of Schlieren Imaging
After Hooke and Huygens, Jean Paul Marat would revive schlieren imaging in the
19th century just prior to the French Revolution. He imaged thermal plumes from a
variety of flame sources. Marat is believed to have published the first schlieren
visualization image in his work on the physics of fire [22]. Later, Leon Foucault and
August Toepler would propel schlieren imaging technology forward with the
development of the knife-edge optical method. This advancement came almost 200 years
after the first observations by Hooke [16].
The knife-edge optical method describes the actual principle that this type of
imaging system uses to visualize the schlieren effect. This principle of the knife-edge
blocker can be applied using numerous combinations of optical components. These
various setups will be discussed later. Foucault had actually developed the knife-edge
schlieren method accidentally through his study of optical mirror testing. He originally
designed his system to detect imperfections in optical grade mirrors used in fields such as
astronomy. Variations of this method are still utilized today to perfect high quality
optical components. During his experiments, Foucault apparently ignored the airflows
that were being visualized by his knife-edge test [23]. Henry Draper would eventually
take notice of the phenomenon made visible by Foucault’s system in 1864 and publish a
drawing of the thermal plume created by the human hand [24]. Despite the apparent
ability of Foucault’s system to visualize transparent flows, he never expanded his work to
encompass this subject. Toepler would be responsible for achieving this next step.
During the Foucault experimentation phase, Toepler was concurrently expanding
on the knife-edge test. He developed a system specifically to visualize the flow of
transparent inhomogeneous media. Toepler named his new imaging method the schlieren
technique, which is the first recorded usage of the term. He is credited as being the
inventor of the schlieren imaging technique [25]. One of the most significant aspects of
Toepler’s research was the development of the first practical apparatus for observing the
schlieren effect. His device was constructed with an adjustable knife-edge cutoff, a light
source, and a telescope for detailed observation [26]. A diagram of this imaging system
can be seen in Figure 2.6 on the following page.
Figure 2.6. The schlieren imaging device designed by Toepler [26].
This system functioned on the principle of refraction. The light from the lantern
was focused to a concentrated beam. The knife-edge was then adjusted to barely block
the lantern’s light ray. The telescope was positioned to view the area in front of the
lantern. Once a schlieren disturbance, such as a heat plume, was placed in-between the
lantern and knife-edge cutoff, portions of the beam would be refracted. As a result, the
refracted light rays bypassed the knife-edge and could be observed. Although Foucault
and Toepler had discovered the fundamental principles of schlieren imaging, the
technique was still limited by the technology of the time. The actual photographic
imaging of the schlieren effect would not be accomplished until the late 1800s by Ernst
Mach [27].
Mach incorporated the newly developed photographic and electronic circuit
technology of the time to produce physical images of the schlieren effect. He
successfully recorded images of shockwaves produced by ballistic projectiles. In these
images, the bow shock, tail shock, and turbulent wake of a bullet could be clearly
observed. In addition, Mach would expand his experiments to produce the first
photographic image of a supersonic jet in his wind tunnel [28]. As optical and
photographic technology advanced into the 20th century, so did the schlieren technique.
The 20th century most notably brought the ability to capture high speed
photographs. Hubert Schardin would improve this technology by introducing the multi-
spark camera. This camera could capture up to 24 separate frames in a single
photographic sequence. Schardin combined his camera with Foucault and Toepler’s schlieren
technique to image shockwaves from explosions, flows from shock-tubes, and impacts
from ballistics [29]. Schardin’s work allowed for the widespread implementation of the
schlieren technique. This advancement has culminated into numerous schlieren imaging
apparatuses, various schlieren photographic methods, and diverse schlieren applications
in a multitude of fields [30]. Despite the variety of schlieren techniques available, two
major categories of the schlieren technique exist, traditional schlieren and background
oriented schlieren (BOS). The traditional schlieren technique will be discussed first.
2.3 Traditional Schlieren Technique
2.3.1 Types of Traditional Schlieren Imaging
As introduced in Section 2.2.1, traditional schlieren techniques use gradients in
the refractive index to visualize inhomogeneities in transparent media [21]. These
gradients are dependent on the material characteristics of and density variations in the
media being imaged. Under normal circumstances, the small refractions caused by
transparent flows are overwhelmed by the main light phases. Thus, these flows are
rendered invisible to the human eye. Using the traditional schlieren technique,
transparent media can be visualized by exploiting and enhancing these minute light
distortions. Schlieren media includes everything from air to xenon. As long as the
medium contains a refractive index gradient, it can be imaged. This visualization can be
achieved on a fundamental level, as explained in Section 2.1.2, because light slows when
it interacts with matter.
Air is the most common schlieren flow that is visualized due to its inherent
presence in most flow phenomena. Air is also the most common encompassing medium.
As a result, the quality of schlieren images greatly depends on the difference between the
refractive index of the airflow and the surrounding air. As the difference in the refractive
indexes increases, the ease with which the schlieren disturbance is imaged also increases.
The refractive index of air and many other gases can be represented by the following
equation, where “n” is the refractive index of the gas, “k” is the Gladstone-Dale
coefficient in cubic centimeters per gram (cm³/g), and “ρ” is the gas density in grams
per cubic centimeter (g/cm³) [16].
n = 1 + kρ (2.7)
The Gladstone-Dale coefficient can range from approximately 0.10 cm³/g to 1.5
cm³/g in the majority of gases. This coefficient is dependent on the characteristics of the
gas as well as weakly on the frequency of light being used to image the flow [21]. As can
be seen by the mathematical relationship presented by Equation 2.7, the refractive index
is weakly affected by the material density. Thus, small gas density variations can only be
detected using a very sensitive optical system. Other characteristics that affect the
refractive index include composition, temperature, pressure, and wavelength of
illumination. The interaction of these additional elements with the refractive index is
complex and beyond the scope of this discussion. Density based refractive index changes
will be the focus of this review [16].
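To make Equation 2.7 concrete, the short sketch below evaluates n = 1 + kρ for air; the coefficient and density used are assumed round values chosen for illustration, not figures taken from the cited references.

```python
# Estimate the refractive index of a gas from the Gladstone-Dale
# relation n = 1 + k * rho (Equation 2.7).

def refractive_index(k, rho):
    """k: Gladstone-Dale coefficient (cm^3/g), rho: gas density (g/cm^3)."""
    return 1.0 + k * rho

# Assumed illustrative values: air at roughly sea-level conditions.
k_air = 0.23        # cm^3/g for visible light (approximate)
rho_air = 1.2e-3    # g/cm^3

n_air = refractive_index(k_air, rho_air)
print(f"n_air ~ {n_air:.6f}")   # only weakly above 1, as the text notes
```

The result (about 1.000276) illustrates why only a very sensitive optical system can detect small gas density variations.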
Variable density flowing gases are known as compressible flow. These flows
occur mostly from temperature, pressure, and velocity differentials. As gas travels, a
distinct gradient in the refractive index is produced from the density fluctuations. The
refraction of light rays occurs in proportion to the refractive index gradient and can be
represented mathematically. The following equations display the ray curvature produced
by a refractive index gradient in an x-y plane, where “z” is the normal to the plane.

∂²x/∂z² = (1/n)(∂n/∂x) (2.8)
∂²y/∂z² = (1/n)(∂n/∂y) (2.9)
The components of refraction in the x-direction and in the y-direction, represented by
“ε_x” and “ε_y” respectively, can be derived by separating the derivatives on the left-hand
side of Equations 2.8 and 2.9 and then integrating each equation once.

ε_x = (1/n) ∫ (∂n/∂x) dz (2.10)
ε_y = (1/n) ∫ (∂n/∂y) dz (2.11)
The range of the optical axis, represented by “L”, can be added to Equations 2.10 and
2.11 to characterize a two-dimensional schlieren imaging plane. The resulting equations
are as follows, where “n₀” is the refractive index of the medium encompassing the
schlieren object.

ε_x = (L/n₀)(∂n/∂x) (2.12)
ε_y = (L/n₀)(∂n/∂y) (2.13)
Equations 2.12 and 2.13 show that the gradient in the index of refraction causes
the deflection and not its overall magnitude [31]. Thus, the schlieren technique can only
be applied to those areas where these gradients exist. The greater the refractive index
gradient in relation to the encompassing medium, the greater the imaging potential of the
target transparent flow. The primary method used to detect these gradients in traditional
schlieren is the knife-edge schlieren system. This system will be discussed in the
following section.
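The deflection relationship of Equations 2.12 and 2.13 can be illustrated numerically; the path length, ambient index, and gradient below are assumed round values chosen only to show the order of magnitude involved.

```python
# Deflection angle produced by a refractive index gradient along the
# optical path (Equations 2.12/2.13): eps = (L / n0) * dn/dx.

def deflection_angle(L, n0, dn_dx):
    """L: path length through the disturbance (m), n0: surrounding
    refractive index, dn_dx: index gradient (1/m). Returns radians."""
    return (L / n0) * dn_dx

# Assumed illustrative values: a 10 cm wide thermal plume in air.
eps = deflection_angle(L=0.10, n0=1.000276, dn_dx=1.0e-4)
print(f"deflection ~ {eps * 1e6:.1f} microradians")
```

Deflections on the order of microradians are why the knife-edge systems discussed next must be aligned so precisely.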
2.3.2 The Knife-Edge Schlieren System
The knife-edge schlieren technique actually represents a large number of
apparatuses and techniques that use the principle of the knife-edge obstruction to
visualize the schlieren effect. Knife-edge systems range from the very simple to the
exceptionally complex. These systems can be categorized in three general ways: lens
systems, combination systems, and mirror systems. The simplest lens system consists of
two lenses, a point light source, and a knife-edge obstruction. An example of this system
can be seen in Figure 2.7 on the following page.
Figure 2.7. Simple traditional schlieren lens system [16].
This system is set up so that the point light source and lenses are arranged inline.
The light rays travel from the light source and are focused into parallel rays by the first
lens. The parallel light rays then travel through the second lens, which re-focuses them to
a point. The knife-edge is positioned at the focal point of the rays to just block the light
from continuing further. Once a schlieren disturbance is introduced in-between the two
lenses, the parallel light rays are slightly refracted. As the refracted light rays pass through
the second lens, they are focused once again to a point [16]. However, since the original
trajectory of the refracted light rays had been modified by the schlieren disturbance, these
rays now avoid the obstruction and can be visualized [30].
More complex systems include multiple types of lenses, light sources, and
arrangements. Combination systems, as the name implies, utilize concave spherical or
parabolic mirrors and various lenses together in a single apparatus. As an example,
consider the Z-type schlieren imaging system. This type of schlieren arrangement
consists of two parabolic mirrors, a condenser lens, a light source, a filter object, and a
knife-edge obstruction. An example of this system with a camera installed at the
observation point can be seen in Figure 2.8 on the following page.
Figure 2.8. Z-type schlieren system [16].
The rays from the light source are concentrated onto the filter object by the
condenser lens. The filter object consists of a slit or other small opening bored through a
solid plane that allows only a set amount of light to continue from the condenser lens.
The first parabolic mirror reflects the light from the filter object into the test region. The
second parabolic mirror reflects the light from the first mirror and focuses it toward the
observation point. The knife-edge is placed so that the focused light from the second
mirror is intercepted. A schlieren object can then be placed in the test region and be
visualized at the observation point [16].
Mirror systems utilize optical grade concave spherical, concave parabolic, or
concave off-axis mirrors [16]. Optical grade flat mirrors can also be used in combination
with concave mirrors to assist in the redirection of light rays [30]. These mirrors must be
optical grade and manufactured with low tolerances [16]. Optical grade mirrors ensure
that light rays are almost perfectly reflected. Thus, a near exact reproduction of the
reflected object is produced. Conventional, mass-produced mirrors only reflect a certain
percentage of light thus diminishing the quality of the reflected image [32]. The simplest
mirror system is the single mirror schlieren system.
The single mirror schlieren system consists of a concave spherical or parabolic
mirror with a focal length of at least 1,200 millimeters (mm), a point light source, and a
knife-edge obstruction. If images of the schlieren object are desired, a professional grade
camera and a telephoto lens with a focal length of at least 200 mm can be used. Although
schlieren images can be captured with more inexpensive point-and-shoot digital cameras,
a high quality digital single lens reflex (DSLR) camera is preferred due to its
ability to customize aperture size, exposure amount, shutter speed, and lens
configuration. Additionally, professional DSLR cameras generally have larger image
sensors thereby allowing images to be captured within a very specific depth of field. The
final required component is a point light source. This type of light source is any
luminous object that produces light from a pinhole sized opening or a narrow slit [33].
The light source is located two focal lengths away from the mirror. The light is
directed toward the mirror and reflected toward the observation point. The knife-edge
obstruction is located two focal lengths away from the mirror and is used to intercept the
reflected light beam. The mirror produces a real image that is the exact reproduction of
the point light source at its originating location [10]. Once a schlieren object is placed in-
between the knife-edge and concave mirror, the refracted light rays will bypass the
obstruction and enter the observation area [16]. A diagram of the single mirror system
can be seen in the following figure.
Figure 2.9. Diagram of the single mirror traditional schlieren system
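The placement of the source and knife-edge at twice the focal length can be checked against the ordinary mirror equation; this is a minimal sketch using the 1,200 mm minimum focal length cited above, not a design tool.

```python
# Mirror-equation check for the single-mirror arrangement: with the
# point source at twice the focal length (the center of curvature),
# the real image of the source forms at the same distance, which is
# where the knife-edge is placed.  Relation: 1/s + 1/s' = 1/f.

def image_distance(f, s):
    """f: mirror focal length, s: source distance (same units)."""
    return 1.0 / (1.0 / f - 1.0 / s)

f = 1200.0      # mm, minimum focal length cited in the text
s = 2 * f       # source placed at two focal lengths
print(image_distance(f, s))   # knife-edge location in mm
```

The computed image distance equals 2f, confirming the coincident source/knife-edge geometry described above.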
More complex mirror systems that are composed of multiple offset optical mirrors
used to increase imaging sensitivity, create multiple observation points, or capture
stereoscopic schlieren images also exist [30]. As can be seen by the aforementioned
discussion, a multitude of traditional schlieren techniques are available. Amongst these
systems, the single mirror system was chosen to be the primary technique utilized in this
study due to its simplicity and adequate range of sensitivity. The specific details of the
single mirror system designed for this investigation will be presented in later sections.
The applications of the traditional schlieren technique will now be discussed.
2.3.3 Applications of Traditional Schlieren Imaging
Although the traditional schlieren technique has existed for over 300 years, only
recently has it seen widespread implementation due to past technological limitations.
Traditional schlieren techniques are used to study three primary areas: phenomena in
solids, phenomena in liquids, and phenomena in gases. Phenomena in solids refer mainly
to the imaging and detection of imperfections as opposed to liquids and gases where flow
characteristics are the major elements. The schlieren imaging of solids is used in the
optical grade glass and mirror industries for quality control. Manufacturers can certify
that their high-grade glass or mirror products are free of imperfections and conform to
design tolerances. The traditional schlieren analysis of liquids, unlike the analysis of
solids, is mainly concerned with flow and its interactions. Research areas include the
mixing of liquids, visualizing of boundary layers, imaging of the laminar to turbulent
flow transition, and atomizing of liquids from sprays. Traditional schlieren imaging can
also be used for more specialized imaging of phenomena such as sugar dissolving in a
moving stream or terminal ballistics analysis [16]. Although the applications for
traditional schlieren techniques are numerous in solids and liquids, the visualization of
gas flow leads this field of study.
The applications of traditional schlieren for imaging gas flow are numerous and
stretch across many disciplines. The most prevalent use of this technique is perhaps by
the aerospace industry. Engineers in this field have used traditional schlieren to gain a
better understanding of supersonic flows. Various studies have been able to quantify the
density fluctuations caused by supersonic turbulent jets [34] and to visualize the
shockwaves created by hypersonic flight [35]. In addition, optical tomography
techniques have been combined with traditional schlieren to quantify density fields of
subsonic flow. Studies have analyzed the compression waves that flow from helicopter
rotor blade tips [36]. Although the application of traditional schlieren techniques is
prevalent in the aerospace industry, this imaging tool has recently expanded to other
fields.
Outside of aerospace, traditional schlieren has been used to quantify temperature
fields in three-dimensional gas flows [37] as well as density and velocity fields in
cryogenic gas flows [38]. Traditional schlieren systems have imaged gas leaks from
chemical pipelines [39], ventilation flow in living areas, ventilation flow in kitchens [40],
and shock waves from a trumpet being played [41]. Fields such as ballistics and
explosives have even adopted traditional schlieren to assist in the analysis of bullet travel
[42] and shockwave propagation from confined explosions [43]. As can be seen by the
previously introduced applications of the traditional schlieren technique, this imaging
method can be diversely applied. However, this technique is still limited by some
constraints.
2.3.4 Limitations of the Traditional Schlieren Technique
The primary drawback of the traditional schlieren technique is its ability to be
applied conveniently on a large scale. Although some advances have been made by
Ralph Burton from the University of Arkansas [44], Horst Herbrich from Industriefilm
[45], Leonard Weinstein from NASA [46], and Gary Settles from Pennsylvania State
University [47,48] in the area of large scale traditional schlieren implementation, the
current form of the technique remains impractical. This limitation stems from two areas:
equipment and sensitivity. The traditional schlieren method requires the use of optical
grade lenses and/or concave mirrors. The nature of manufacturing this grade of optical
equipment is costly and time consuming. Additionally, lenses and mirrors of this caliber
are very sensitive to environmental influences, such as dust, temperature shifts, and
humidity. As a result, traditional schlieren equipment cannot be practically implemented
when conducting large scale field testing due to high cost and inadequate flexibility [30].
The sensitivity issue arises from the level of precision needed to visualize certain types of
flow.
A traditional schlieren system can easily be set up to visualize flows that have a
high density contrast to the surrounding medium. For example, the movement of heated
gas, such as air, through a medium of air at atmospheric conditions can be captured with
limited alignment precision of the optical devices. The required level of precision greatly
increases as the refractive index of the target flow approaches the refractive index of the
surrounding medium. Such exactness is needed because of the decreasing difference
between the angle of incidence and the angle of refraction. This relationship is
demonstrated by Equation 2.3 in Section 2.1.2. The required alignment for the knife-
edge obstruction can require precision in the micron range for certain flow scenarios.
Furthermore, equipment manufacturing tolerances become much stricter and
environmental influences, such as vibrations and extraneous transparent flows, become a
significant concern. Although such exacting specifications have been achieved in
numerous experiments, large scale field implementation is currently impractical. The
recent development of background oriented schlieren (BOS) may provide a solution to
the limitations found in the traditional schlieren technique.
2.4 Background Oriented Schlieren (BOS)
2.4.1 Introduction of the Background Oriented Schlieren (BOS) Method
The confining nature of the traditional schlieren technique has limited the
majority of its application to controlled laboratory environments. This problem has
prompted the development of background oriented schlieren (BOS). The principles of
the BOS flow visualization method, which is also referred to as synthetic schlieren [48],
were first introduced by G.E.A. Meier in 1999 [4]. The BOS technique allows for the large
scale visualization of the schlieren effect while eliminating the need for complex
equipment [47]. This method continues to use the relationship between refractive index
and density variations to image inhomogeneous transparent media. However, lenses,
mirrors, and precision backdrops are replaced by artificially or naturally generated light-
dark backgrounds combined with a digital camera.
The light-dark background can be composed of any pattern that has a high spatial
frequency and can be imaged with a high contrast. Artificial backgrounds are usually
composed of small, randomly distributed dots, black and white stripes, or other such
patterns [47]. Natural backdrops consisting of trees, leaves, and grass are also suitable as
long as the pattern conforms to the previously introduced constraints [49]. However, the
image sensitivity when using natural background based images is reduced. The BOS
background enhances the light distortions for the camera by serving as a reference plane
for the image sensor and processor. As light reflects off the background, the rays are
refracted through the inhomogeneous transparent flow [47]. The resulting distortion in
the pattern caused by the refracting light rays is captured by the camera. Computerized
image processing software is then used to enhance the distortion and visualize the
schlieren effect [50]. This process will now be discussed.
The final schlieren image is produced by first taking a reference picture. This
picture captures the imaging area when no flow is present (i.e. static conditions).
Another photo is then captured of the imaging area where the target flow is present. The
reference image and the flow image are then processed to enhance the pixel differences
between both photos. Alternatively, images can also be produced by comparing two high
speed photos (i.e. images taken at greater than 5 frames per second) that are captured
consecutively in the same manner [47]. The simplicity and flexibility of BOS has
expanded the scope for visualizing inhomogeneous transparent media. The various
applications of BOS are discussed in the following section.
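As a minimal illustration of the reference-image/flow-image pairing described above, the sketch below differences two frames and stretches the contrast; production BOS software uses correlation-based processing rather than this plain subtraction, and the striped pattern is a synthetic stand-in for a real background.

```python
import numpy as np

# Minimal BOS processing sketch: subtract a no-flow reference image
# from a flow image and stretch the result so small pattern shifts
# become visible.

def bos_difference(reference, flow):
    """reference, flow: 2-D grayscale arrays of equal shape."""
    diff = np.abs(flow.astype(float) - reference.astype(float))
    if diff.max() > 0:
        diff = diff / diff.max() * 255.0   # contrast stretch
    return diff.astype(np.uint8)

# Tiny synthetic example: a one-pixel shift in a striped background,
# standing in for the distortion a schlieren disturbance would cause.
ref = np.tile([0, 0, 255, 255], (8, 2)).astype(np.uint8)
flw = np.roll(ref, 1, axis=1)
result = bos_difference(ref, flw)
print(result.max(), result.min())   # 255 0
```

Pixels where the pattern moved light up at full intensity while unchanged pixels stay dark, which is the basic schlieren-enhancing effect the text describes.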
2.4.2 Applications of Background Oriented Schlieren
Despite the recent nature of this technique, BOS has already been applied in
multiple experiments with varying scales. Several large field studies have been
successfully conducted by Michael Hargather and Gary Settles to image heat rising from
a propane torch, heat rising from a car, and shockwaves from fired rifle with the use of
natural backgrounds [49]. More advanced BOS investigations have been conducted to
image whole-field density distributions in two-dimensional stratified flow [51], tip vortex
formation from helicopter blade tips [52], shockwaves from supersonic phenomena [50],
and flow visualization in hypersonic impulse facilities [53]. Quantitative studies have
also been conducted to measure displacement fields and density distributions in flowing
media [47]. The aforementioned examples demonstrate the wide range of BOS
applications, which are outlined in Section 2.5. However, this technique is still subject to
some constraints that will be discussed in the following section.
2.4.3 Limitations of Background Oriented Schlieren
Various experiments have been completed by S. B. Dalziel in whole field density
measurements [51] and Erik Goldhahn in three-dimensional density fields to evaluate the
sensitivity of the BOS method [54]. These studies concluded that BOS has a comparable
sensitivity threshold with traditional schlieren if certain experimental conditions are met.
These conditions include the proper matching of setup geometry, camera resolution,
camera lens type, background resolution, background contrast [54], and digital evaluation
algorithm [55]. In BOS setups that use mass manufactured digital photographic cameras,
imaging settings (e.g. ISO, aperture size, shutter speed, and exposure) must be optimized
for each specific flow. This optimization ensures that the desired effect is captured with
maximum clarity. However, once the aforementioned parameters are analyzed,
optimized, and implemented, the flexibility of the customized BOS imaging system is
greatly reduced.
Additionally, the complexity of implementing the BOS system and analyzing the
results greatly increases. This consequence is especially prevalent if multiple imaging
dimensions or quantitative analyses are desired. Multi-dimensional flow
characterizations are achieved through the simultaneous imaging of the desired
perspectives. Quantitation of BOS images requires a controlled experimental
environment in conjunction with the implementation of particle image velocimetry (PIV)
algorithms to analyze the images [47]. If the need for flexibility becomes significant,
then sensitivity and quantitation must be sacrificed. This compromise between
sensitivity, flexibility, and quantitation is the primary limitation of BOS. Despite this
problem, variations of the BOS technique have been successfully applied in numerous
studies. An outline of the primary BOS techniques currently in use is presented in the
following section.
2.5 Previous Background Oriented Schlieren Research
The system introduced in Section 2.4.1 that utilizes a digital camera and a
background is the most popular form of background oriented schlieren (BOS). This
version of BOS can be referred to as the single camera technique. The photographic
elements as well as the background compositions are highly customizable and can be
applied to a variety of situations. The single camera system has been used to analyze
such subjects as density fields [48] and helicopter shed vortices [47]. However, this
system is still limited in scope without modifying the basic design. Several different
versions of the original system have been developed to combat this problem and expand
the scale of BOS. The other forms of BOS are discussed in the following sections.
2.5.1 Stereoscopic Background Oriented Schlieren
The background oriented stereoscopic schlieren method (BOSS) extends the
single camera schlieren system by adding another imaging perspective. Two cameras are
synchronized to capture the target schlieren disturbance either simultaneously or
consecutively with a diminutive delay. The BOSS method records two image pairs from
different viewing angles in order to provide multi-dimensional imaging capabilities (i.e.
provide depth of field to BOS images). The implementation of BOSS allows for the
positions of phenomena, such as vortices and eddies, to be identified in flow fields [50].
This type of schlieren system has already been used to study combustion chamber flow
fields [50] and compressible blade tip vortices from rotary wings [52].
2.5.2 Tomographic Background Oriented Schlieren
The background oriented optical tomography (BOOT) schlieren method is a
newly investigated form of BOS. Tomography, in general, is an analysis technique that
produces three-dimensional, virtual reconstructions of the internal structure and
composition of objects. This reconstruction is created from the observation, recording,
and examination of the passage of energy waves or radiation through a target object [56]. In
the BOOT method, numerous imaging channels are implemented to allow for a complete
rendering of a schlieren disturbance using light deflection. Similar to other tomographic
techniques, BOOT utilizes Radon transform algorithms to create the final images. The
algorithms must be moderately customized with BOS specific parameters to reconstruct
the flow [50]. Only a limited number of studies, such as the estimation of flow field
density distributions by measuring light ray deflection, have been completed thus far with
BOOT [52].
2.5.3 Large Scale Background Oriented Schlieren
The BOS method has primarily been implemented in laboratory settings.
However, the basic principles of this technique can potentially be applied on a large scale
to image transparent phenomena in the field. The main difficulty in transforming BOS to
be used in this manner is identifying a suitable background. According to the
background specifications introduced in Section 2.4.1, certain natural backgrounds, such
as grass and trees, can be used in BOS. However, even suitable natural backgrounds are
further limited by the criteria of fine scale, randomness, and contrast that must be met.
Thus, some backgrounds are more advantageous than others depending on the type of
schlieren effect being imaged. Preliminary BOS studies using natural backgrounds have
been conducted by Michael Hargather and Gary Settles to image heat plumes from a
torch, thermal plumes from an automobile engine, shockwaves from a fired rifle, and
shockwave propagation from an explosion [49].
2.5.4 Background Oriented Schlieren with Particle Image Velocimetry
The BOS technique is primarily a qualitative technique that provides visual
information about inhomogeneous transparent flows. However, quantitative data can also
be gathered using particle image velocimetry (PIV) analysis techniques. PIV is an image
analysis method that evaluates the motion of small seeded particles from two
consecutively captured frames. PIV analysis can also be applied to a single frame
capture with two exposures [50]. In traditional PIV, small, light, reflective particles, or
tracer particles, are added to a flow. These particles are illuminated consecutively at least
twice by a synchronized strobe light and imaged concurrently. The consecutive images
are then interrogated for particle displacements, which ultimately produces a velocity
profile for the flow [57].
Interrogation is a numerical analysis process that tracks the movement of the
particles. The analysis is completed by first dividing the image into a numerous sections.
Each section, or window, is processed individually and then combined once the
interrogations are complete. The movements of the particles are tracked by applying
statistical cross-correlation and autocorrelation algorithms based on mathematical
analysis operations such as the Fourier transform [57]. The main difficulty of using PIV
is seeding (i.e. physically introducing) the flow field with suitable particles. This process
is not necessary in BOS.
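The interrogation step described above can be sketched with an FFT-based cross-correlation of two windows; this simplified version omits the sub-pixel peak fitting and window overlap used in real PIV codes, and the shifted random pattern is an assumed test case rather than real flow imagery.

```python
import numpy as np

# Sketch of one PIV interrogation step: locate the displacement that
# best aligns two interrogation windows via cross-correlation
# computed with the FFT (a Fourier-transform-based correlation, as
# the text describes).

def window_displacement(win_a, win_b):
    """Returns the (dy, dx) integer shift of win_b relative to win_a."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped FFT indices to signed shifts.
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

# Synthetic check: shift a random pattern by (2, 3) pixels and
# recover the displacement.
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (2, 3), axis=(0, 1))
print(window_displacement(a, b))   # (2, 3)
```

Repeating this over every window and combining the results yields the velocity profile described in the text.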
PIV analysis algorithms can be used with BOS images due to the self-seeding
nature of turbulent schlieren disturbances. If an imaged flow is sufficiently turbulent, the
eddies that appear in the image can serve as virtual PIV particles. The PIV processing
algorithm can then track the detail shifts in the turbulent eddies to generate velocity
profiles. PIV can also be used to produce density profiles for the flow. Density
distribution information is gathered using a refraction analysis algorithm tailored to the
particular BOS setup being used. Once a schlieren disturbance is introduced to the
imaging area, the background pattern is slightly distorted due to light ray refractions
caused by the flow [58]. An example of this distortion can be seen when any solid object
is placed in a cup of water. The object appears to shift positions instantaneously as it
enters the water. The refraction algorithm measures the amount of deflection that occurs
as a result of the flow. This deflection can then be correlated to the density needed to
produce that amount of distortion in the static image. An example of a simple
quantitative BOS setup can be seen in Figure 2.10.
Figure 2.10. Example quantitative BOS setup [4].
Figure 2.10 displays a system in which a perfectly cylindrical flow is traveling in
the x-direction perpendicular to the y-axis and the z-axis. In this configuration, the
refraction caused by the flow only occurs in the “y”, or vertical, direction. The z-
direction is the line-of-sight direction, which can also be referred to as either the axial
imaging path or the optical path. Thus the x-axis is located along the free-stream
direction, which also serves as the axis of symmetry. The deflection of the image,
represented by “ε_y”, is defined by the following equation, where “n₀” is the refractive
index of the encompassing medium and “n” is the refractive index of the schlieren
medium [59].

ε_y = (1/n₀) ∫ (∂n/∂y) dz (2.14)

Equation 2.14 assumes that the half-width of the density gradient region is “ΔZ_D”,
where “ΔZ_D” is much smaller than “Z_D”, the distance from the background to the
density gradient. The creation of this numerical representation of
the BOS imaging system allows a cross-correlation algorithm to be applied in
conjunction with the Gladstone-Dale equation for this flow. This equation is displayed
and discussed in detail in Section 2.3.1. Traditional PIV algorithms, which are slightly
modified according to the design of the BOS system, can then be employed. The
algorithms numerically interrogate the image to identify background deflections and thus
produce a density gradient [59].
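One way to picture the deflection-to-density correlation is to invert the deflection relation together with the Gladstone-Dale relation n = 1 + kρ; every number below is an assumed illustrative value, and this single-gradient inversion is a simplification of the full interrogation algorithms cited.

```python
# Back out a density gradient from a measured BOS deflection by
# combining the deflection relation eps_y = (L / n0) * dn/dy with the
# Gladstone-Dale relation n = 1 + k * rho (so dn/dy = k * drho/dy).

def density_gradient(eps_y, n0, L, k):
    """eps_y: measured deflection (rad), n0: ambient refractive index,
    L: disturbance width (m), k: Gladstone-Dale coeff (m^3/kg)."""
    dn_dy = eps_y * n0 / L          # invert the deflection relation
    return dn_dy / k                # density gradient, kg/m^4

# Assumed illustrative values: 10 microradian deflection across a
# 10 cm disturbance in air (k ~ 0.23 cm^3/g, i.e. 2.3e-4 m^3/kg).
grad = density_gradient(eps_y=1.0e-5, n0=1.000276, L=0.10, k=2.3e-4)
print(f"d(rho)/dy ~ {grad:.3f} kg/m^4")
```

This is the sense in which a measured pixel deflection "can then be correlated to the density needed to produce that amount of distortion."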
The aforementioned quantitative PIV processing technique has already been
successfully applied to BOS studies of supersonic flows in shock tunnels [60], transonic
turbine blades [61], and wing tip vortices in a transonic wind tunnel [62]. However, the
majority of PIV based BOS studies, including the three that were previously introduced,
have not produced actual quantitated density gradients or velocity profiles. Instead, these
PIV studies have only produced pixel displacement profiles and gradients without
correlating them to actual values. Some examples of PIV outputs in BOS investigations
can be seen in Figures 2.11 to 2.13.
Figure 2.11. Vector displacement of pixels from BOS images of
supersonic flows in shock tunnels as evaluated by PIV [60].
The lack of quantitative BOS based PIV analyses alludes to the difficulty of implementing
PIV.
As previously introduced, some limitations and difficulties exist in the
implementation of PIV in BOS. Charge-coupled device (CCD) based cameras are
required to achieve the synchronization and speed necessary for PIV images. CCD
cameras are essential due to their ability to not only capture an image but also store data
regarding the light intensity of each pixel [63]. The resolution of the background must
also be exactly matched to the resolution of the camera. This pairing is needed to
accurately determine the deflection of the pattern caused by refraction of light as the
beam travels through the schlieren disturbance.
Refraction analysis algorithms tailored to the specific BOS setup are needed. As
a result, the imaging area must be constructed according to exacting specifications. Care
must also be taken during the experimentation process to maintain the integrity of the
encompassing medium. Any introduction of extraneous gases or drastic shifts in
environmental conditions will interfere with the BOS images. If high accuracy is desired,
the density profile of the target flow must also be incorporated into the PIV processing
algorithm. However, the exact profile is usually unknown and therefore requires a
simplification of the algorithm [54]. Commercially available PIV processing software
can be used with limited modifications, but accuracy is affected. Currently, laminar
flows cannot be analyzed with PIV due to the lack of particle seeding potential from
turbulent structures. Although promising, the PIV analysis of BOS images is complex,
restrictive, and inflexible at this early stage [58]. Due to the sheer difficulty of PIV
development and implementation in BOS, this technique was not considered to be a
viable method for this study.
2.6 BOS Applications in Underground Mine Ventilation
The BOS technique has never been directly applied to gather qualitative data
about underground mine ventilation systems. The only exception is a study conducted by
H. Phillips using a color schlieren system designed to make instantaneous measurements
of the methane layering in a stratified methane-air mixture [64]. However, this study is
limited to laboratory scale and only peripherally related to mining. Although no direct
research has thus far been completed in the subject of BOS and underground mine
ventilation, BOS studies by Michael Hargather and Gary Settles demonstrated that a rock
face may provide an appropriate background for BOS imaging [49]. Due to this lack of
research, the possible applications of BOS in underground mine ventilation can only be
conjectured.
The acquisition of BOS data could be used to optimize the procedures of
ventilation surveys and design of ventilation monitoring equipment. For example,
images of methane flow in active mining areas can be used to optimize the positioning of
auxiliary ventilation equipment to more effectively dilute areas of high methane
concentration. Methane monitoring procedures (i.e. where and how to monitor methane)
could be improved with the identification of methane emission characteristics and
accumulation regions. BOS images could also be used to re-evaluate the placement of
methane monitors on mining equipment to better facilitate the detection of dangerous
methane concentrations in active mining areas. For these reasons, the following study
was designed to ascertain the feasibility of applying BOS in imaging underground mine
ventilation systems.