Virginia Tech
boiler/generator design, instrumentation and control, and high voltage distribution systems. On the other hand, the ever-increasing emphasis on electrification of rural and urban slum areas with diminishing government subsidies makes it difficult to justify the additional costs of installing traditional coal cleaning plants in India. In India, electricity is generated by state-owned companies, and the costs of power generation at the plants operated by State Electricity Boards (SEBs) are 2 to 4 times higher than those of the best Indian practice, e.g., at the National Thermal Power Corporation (NTPC), also a state-owned company. Building conventional coal cleaning plants in India is therefore synonymous with adding to the already high cost of electricity generation, so there is a dire need to develop (or introduce) non-conventional, low-cost coal cleaning technologies for the Indian coal and power industry. Dry coal processing can be an effective techno-economic tool for the Indian coal industry. Large quantities of rock are extracted in order to recover the coal, reportedly resulting in 60-70% of the raw material being rejected as waste. The haulage, processing, and combustion of this rock represent a significant waste of energy and have a negative environmental impact (Biswal et al., 2002). The process of removing unwanted rock from run-of-mine (ROM) coal is referred to as deshaling, which normally involves a high-density separation in a gravity-based process. In contrast to coal cleaning in traditional preparation plants, the separation density in dry deshaling is higher, with a typical target of 2.0 relative density (RD) or greater (Honaker et al., 2008). Normally, the most cost-effective approach is to place the deshaling unit as close to the extraction face as possible to reduce transportation and maintenance costs. A dry process also eliminates the use of water for beneficiation.
In light of the potential advantages of dry coal beneficiation, an engineering development project was undertaken to evaluate the suitability of dry coal deshaling technology
for industrial sites in India. The project was selected by the Asia-Pacific Partnership on Clean Development and Climate (Coal Mining Task Force) under the sponsorship of the U.S. Department of State. Previous work conducted in the U.S. had indicated that the technology could efficiently reject undesirable high-ash rock from run-of-mine (ROM) coals in both eastern and western coalfields of the United States. For the case of Indian coals, this previous work suggested that this low-cost approach to coal quality improvement could:

• Produce lower-ash products that can be burned more cleanly and with greater efficiency.
• Reduce the amounts of fly ash emissions and associated hazardous air pollutant precursors.
• Minimize capital, operating, and maintenance costs associated with coal beneficiation.
• Reduce shipping costs and help the Indian railway system deliver high calorific value coal to power plants.
• Avoid potential environmental issues concerning slurry impoundments.
• Accommodate coal beneficiation plants in areas where water is not readily available.

Recent studies conducted by Bhattacharya and Maitra (2007) indicate that the dry cleaning of coal in India holds tremendous promise, provided that the total beneficiation cost can be kept below US$1.00-1.50 per ton. Recent studies conducted in the U.S. suggest that the cost of implementing such technology is well below this limit (Honaker and Luttrell, 2007). Increasing the availability of higher-quality coals by removing relatively pure rock from power station feedstocks will help India increase the net efficiency of coal use and facilitate implementation of state-of-the-art clean coal technology processes to substantially reduce CO2 emissions. More importantly, these environmental gains can be realized without financial
expenditures, since the application of the deshaling technology can be justified purely on the cost benefits resulting from improved boiler efficiencies, reduced coal transportation, and reduced ash handling/disposal requirements.

1.2 PROJECT OBJECTIVE

The primary goal of this engineering project was to develop an effective dry coal deshaling system that can be commercially applied in India for the upgrading of run-of-mine (ROM) coals. To be successful, the system had to be capable of (i) separating rock using a relatively high specific gravity cut-point of around 2.0 RD, (ii) minimizing the amount of valuable carbonaceous material lost with the rejects stream, (iii) operating with lower capital and operating costs than traditional coal cleaning processes, (iv) serving as a mobile cleaning station that can move with the mining operations, and (v) functioning without water to avoid the environmental concerns of fine coal slurries and to eliminate the adverse effects of added moisture on coal transportation and heating value. A promising technology that meets these requirements is the dry deshaling process offered by Eriez Manufacturing for upgrading run-of-mine feed coals. The technology has already found commercial success in the Chinese coal industry (Lu et al., 2003) and in the United States (Honaker and Luttrell, 2007). However, considerable R&D is required prior to implementing this technology for upgrading distinctly different Indian coals, which typically have a greater proportion of middlings and, hence, are more difficult to upgrade. In particular, field testing is required (i) to establish the suitability of the technology for deshaling coals with difficult washabilities, (ii) to define the operational capabilities of the technology in terms of coal recovery and quality for typical Indian coals, and (iii) to determine the economic viability of this approach for the coal markets that currently exist in India.
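Field performance of a density separator of this kind is conventionally summarized by a partition curve, from which the cut-point (d50) and the probable error (Ep, half the spread between the d75 and d25 densities) are read. The sketch below shows how those two numbers could be extracted from float-sink test data; every density and partition value in it is invented for illustration and is not a measurement from this project.

```python
# Hypothetical sketch of estimating a deshaler's cut-point (d50) and probable
# error (Ep) from a partition curve. All numbers below are invented.

def interp(x, xs, ys):
    """Piecewise-linear interpolation; xs must be strictly increasing."""
    for x0, y0, x1, y1 in zip(xs, ys, xs[1:], ys[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside data range")

# Mean relative density of each float-sink fraction, and the fraction of
# feed in that density class reporting to the reject stream.
density   = [1.5, 1.7, 1.8, 1.9, 2.0, 2.1, 2.3]
to_reject = [0.02, 0.08, 0.20, 0.45, 0.75, 0.92, 0.99]

d25, d50, d75 = (interp(p, to_reject, density) for p in (0.25, 0.50, 0.75))
ep = (d75 - d25) / 2.0  # probable error: half the d75-d25 density spread
print(f"d50 = {d50:.2f} RD, Ep = {ep:.3f}")
```

A sharp separator has a steep partition curve and a small Ep; the air-based units discussed later in this chapter typically show larger Ep values than wet dense-medium systems.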
With the first global energy crisis in the early 1970s, the Government of India made a decision to nationalize private coal mines. In the first phase, only coking coal mines were nationalized; in the second phase, in 1973, the non-coking coal mines were nationalized. The Coal Mines Authority Ltd. (CMAL) was set up in 1973 to operate the nationalized non-coking coal mines. In September 1975, the nationalized coal industry was restructured with the establishment of Coal India Limited (CIL). CIL now has eight subsidiary companies. At present, with its monopolistic position, CIL accounts for 85% of coal production, followed by Singareni Collieries Company Limited (8.5%) and other captive producers (6.5%).

2.2.1 Coal Resources

India's major workable coal deposits occur in two distinct stratigraphic horizons: the Permian, commonly known as Gondwana coals, and the Tertiary. About 99% of the country's coal resources are found within a great succession of freshwater sediments. These resources occur in sedimentary rocks of the older Gondwana formations of peninsular India and the younger Tertiary formations of the northeastern/northern hilly region of the Himalayas (Ministry of Coal, 2009-10). The coal resources of India, subdivided by formation and category as of January 4, 2007, are summarized in Table 2.1. The total coal reserves of the country have been estimated from time to time. The proved reserves are estimated from the dimensions of outcrops, trenches, mine workings and boreholes, plus a reasonable extension, not exceeding 200 m, based on geological evidence. Where little or no exploratory work has been done and the outcrop exceeds one km in length, a line drawn roughly 200 m from the outcrop will define a block of coal that may be regarded as proved on the basis of geological evidence.
Table 2.1 Coal resources in India (million metric tons).

Formation       Proved    Indicated   Inferred   Total
Gondwana Coal   105,343   123,380     37,414     266,137
Tertiary Coal       477        90        506       1,073
Total           105,820   123,470     37,920     267,210

Table 2.2 Distribution of coal by rank in India (million metric tons).

Type of Coal       Proved    Indicated   Inferred   Total
Primary Coking       4,614        699        ---      5,313
Medium Coking       12,449     12,064      1,880     26,393
Semi-Coking            482      1,003        222      1,707
Sub-total Coking    17,545     13,766      2,102     33,413
Non-Coking          87,798    109,614     35,312    232,724
Tertiary               477         90        506      1,073
Total              105,820    123,470     37,920    267,210

In the case of indicated reserves, the points of observation are 1,000 m apart, but may be 2,000 m apart for beds of known geological continuity. Thus, a line drawn 1,000 to 2,000 m from an outcrop will demarcate the block of coal to be regarded as indicated. Finally, the inferred reserves refer to coal for which quantitative estimates are based largely on broad knowledge of the geological character of the bed, but for which there are no direct measurements. The estimates are based on an assumed continuity for which there is geological evidence, at more than 1,000 to 2,000 m from the outcrop. Coals of practically all ranks occur in India except peat and anthracite. The share of lignite, however, is insignificant as compared to sub-bituminous and bituminous coal. Indian
values (between 2,500–5,000 kcal/kg) (IEA, 2002). Coals from the U.S. and China have about twice the calorific value and carbon content of Indian coals. The low calorific value implies that more coal must be used to deliver the same amount of electricity. Indian coal, however, has a lower sulfur content in comparison to other coals, although it has relatively high amounts of toxic trace elements, especially mercury (Masto et al., 2007). Ash is generally well intermixed into the coal structure, and hence coal washing using physical methods is difficult, although it might be necessary for industrial use. The high ash content also leads to technical difficulties in utilizing the coal, as well as lower efficiency and higher costs for power plants. Specific problems caused by high ash content include high ash disposal requirements, corrosion of boiler walls, fouling of economizers, and high fly ash emissions (IEA, 2002). The high silica and alumina content of Indian coal ash is another problem, as it increases ash resistivity, which reduces the collection efficiency of electrostatic precipitators and increases emissions. The ash content of Indian coals has been increasing over the past three decades, primarily because of increased surface (opencast) mining and production from inherently inferior grades of coal (Ministry of Coal, 2006b). Current practices have limited coal resource assessments to within 300 m of the surface, which implies that opencast mining is expected to dominate production over the next 20 to 30 years; thus, coal quality is unlikely to improve much without additional washing and beneficiation. Furthermore, the current grading system for coals in India does not provide a proper pricing signal for coal producers to improve coal quality. Nevertheless, there is already some washing of power plant-grade coal in India as power plants aim to meet the environmental regulations on coal ash content. As stated earlier, the regulations require that power plants must use coals containing no more than 34% ash.
The cost of transportation is also a significant part of the final cost of coal delivered to consumers. For example, the cost of coal for power plants in 2005 was estimated to be under US $20/ton (US $5/million kilocalories), including royalty and tax; however, the cost of delivered coal is about US $48 to US $64 per ton, as freight and handling add about US $28 to US $44/ton, depending on distance and mode of transport. The important modes of coal transport for CIL are railways (56%), merry-go-round systems (23%), road (17%), and conveyor belts and the multi-modal rail-cum-sea route (CIL, 2008-09).

2.2.3 Coal Grading System

India uses a coal grading system to designate the quality of different mined and washed coals. The grading of non-coking coal is based on Useful Heat Value (UHV). In the case of coking coal, the grading is based on ash content, and for semi-coking/weakly coking coal it is based on ash plus moisture content. The UHV concept was introduced in the early 1960s, when the high ash content of domestic coal was considered a deterrent to its gainful utilization, particularly in power station boilers. The purpose of this concept was to promote and popularize the use of high-ash coal in power plant boilers. UHV differs from the Gross Calorific Value (GCV) by a factor that can be termed an "ash penalty" (refer to Table 2.3). For pricing purposes, thermal coals are graded from 'A' to 'G' based on their ash and moisture content. The government is considering switching to a grading system for thermal coals based on gross calorific value because, in the present scenario, over 80% of the coal produced comes from mechanized open-pit mines, supply sources have multiplied, and power plants are receiving high-ash coals of inconsistent quality and size distribution, causing operational problems.
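The ash penalty can be made concrete with the UHV formula commonly cited for Indian non-coking coal grading, UHV = 8900 − 138(A + M) kcal/kg, where A and M are the ash and moisture percentages. Both the formula and the example values below are assumptions for illustration; they are not taken from Table 2.3.

```python
# Illustrative sketch of the UHV "ash penalty". The formula below is the
# commonly cited Indian grading convention (an assumption here, not taken
# from this text): UHV = 8900 - 138 * (ash % + moisture %), in kcal/kg.

def useful_heat_value(ash_pct, moisture_pct):
    """Useful Heat Value (kcal/kg) under the assumed Indian convention."""
    return 8900.0 - 138.0 * (ash_pct + moisture_pct)

# Hypothetical high-ash thermal coal: 40% ash, 6% moisture.
uhv = useful_heat_value(40.0, 6.0)
print(f"UHV = {uhv:.0f} kcal/kg")  # 8900 - 138*46 = 2552 kcal/kg
```

Because every percentage point of ash or moisture subtracts a fixed 138 kcal/kg, the grade (and price) of a high-ash coal falls far faster under UHV than its gross calorific value alone would suggest, which is precisely the penalty the grading was designed to impose.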
Godavari Coalfields

The Godavari coalfields are located in Andhra Pradesh. The major producer in this region is Singareni Collieries Company, Ltd., a government undertaking. The total production was 37.71 million metric tons in 2006-07, while the total estimated reserve was 9.16 billion metric tons as of March 2009. Coal mined in this region is entirely non-coking and is available in grades from B to G.

Wardha Valley and Kamptee Coalfields

The Wardha Valley and Kamptee coalfields are located in Maharashtra, mainly in the Nagpur and Wardha regions. Western Coalfields (CIL) annual production in the region was 43.51 million metric tons in 2007-08. Coals from this basin are non-coking, with grades B to F.

Pench Kanhan Tawa Valley/Pathkhera Coalfields

The Pench Kanhan Tawa Valley/Pathkhera coalfields are located in the central part of Madhya Pradesh. This area is also part of Western Coalfields, Ltd. The region provides non-coking coal, mostly in grades E and F.

Talcher and IB Valley Coalfields

The Talcher and IB Valley coalfields are located in Orissa and some parts of Chhattisgarh. Mahanadi Coalfields, Ltd. (CIL) produced 88.0 million metric tons of coal from this basin in 2007-08. Other major producers in this region include Jindal Steel and Power, Ltd., the Sainik-Aryan Group, and Global Coal Mining, Ltd. The coals from this region are non-coking, with available grades from B to F.
Singrauli Coalfields

The Singrauli coalfields are located in the eastern part of Madhya Pradesh and some regions of Uttar Pradesh. The basin is one of the major coal producing regions in India. Northern Coalfields (a subsidiary of CIL) is the major producer in the region, with an annual production of 59.6 million metric tons in 2007-08. The coals from this region are largely non-coking, with grades varying from D to G.

Korba-Raigarh, Sohagpur and Sonhat-Bshrampur Coalfields

The Korba-Raigarh, Sohagpur and Sonhat-Bshrampur coalfields are located primarily in Chhattisgarh (87%) and some parts of Madhya Pradesh. South Eastern Coalfields, Ltd. (CIL) produced 93.8 million metric tons of coal from this region in 2007-08. Most of the coal from this region is non-coking, with grades varying from A to E. A trace amount of semi-coking coal (approximately 0.16 million metric tons) is also present in this region, with a grade of SCG-1.

Karanpura and Bokaro Coalfields

The Karanpura and Bokaro coalfields are mostly located in northern Jharkhand, which is one of the major coal producing regions in India. Central Coalfields, Ltd. (CIL) production in this region was 44.2 million metric tons in 2007-08. The region produces both non-coking coal and medium-coking coal (around 53% of total production); the rest of the production from this basin is high-grade non-coking coal. Tata Steel is the major coal producer in the Bokaro region, with about 4 million metric tons of annual washed coal production that is used for making coke.

Raniganj and Rajmahal Groups of Coalfields

The Raniganj and Rajmahal groups of coalfields are located in West Bengal. The region contains primarily high-grade non-coking coal as well as minor amounts (a few percent) of semi-coking coal. Eastern Coalfields, Ltd. (CIL) is the major producer in this region, with an annual production of 24.1 million metric tons.

Jharia Coalfields

The Jharia coalfields are located in Dhanbad (Jharia) in Jharkhand. This coalfield is the only source of prime coking coal in India. As such, production from the region consists mostly of prime-coking and medium-coking coals. This region accounts for 45% of the total coking coal produced in India. Bharat Coking Coal, Ltd. (CIL) produced 25.2 million metric tons of coal from this region in 2007-08. Tata Steel (TISCO) is the other major coking coal producer in this region.

Lignite Resources

India has significant resources of soft brown coal in the form of lignite. The total lignite resource is estimated at about 3,500 million metric tons, of which about 2,800 million metric tons are located in Tamil Nadu. The remaining lignite deposits are found in Gujarat, Jammu and Kashmir, Pondicherry and Rajasthan. Production in 2006-07 was approximately 31.3 million tons. Major companies working the lignite deposits include Neyveli Lignite Corporation and Gujarat Mineral Development Corporation.

2.2.5 Coal Production

Coal mining in India is dominated by opencast extraction and has grown at a 4% average annual rate over the past decade. Through sustained investment programs and a greater thrust on the application of modern technologies, it has been possible to raise coal production from a level of about 70 million metric tons at the time of nationalization of the coal mines in the early 1970s to 512.3 million metric tons in 2009-10. Coal India Limited and its subsidiaries are the major producers, accounting for 437.4 million metric tons of this production.
About 83% of India's coal comes from opencast mines, some of them being large, highly mechanized opencast operations (Ministry of Coal, 2009-10). Surface operations require less labor, can be implemented faster, and involve lower production costs than underground mines. Productivity in opencast mines is generally much higher than in underground mines. The cost of production per metric ton of coal from underground mines can be as high as five times that of opencast mines. Since coal is priced according to grade and not on a cost-plus basis, some of the coal produced from underground mines is sold below its cost of production (TERI, 2006).

2.3 COAL BENEFICIATION IN INDIA

Beneficiation of thermal coal is a relatively new development in India. Table 2.4 provides a summary of the installed capacities of current washeries. Much of the new cleaning capacity was installed in response to regulations promulgated in 2001 by the Ministry of Environment and Forest (MEF), as mentioned previously. The Government of India has also established build-own-operate-manage (BOOM) policies to encourage the deployment of coal cleaning (beneficiation) technologies by domestic and foreign companies. On the other hand, power plants located near mine sites are still allowed to burn run-of-mine (ROM) high-ash raw coals. Even where coals are being cleaned, the extent of beneficiation is often minimal. Typically, a ROM coal is screened dry to set aside the fines fraction, and only the coarse fraction is cleaned. The raw coal fines are then combined with the cleaned coarse coal to meet the 34% ash requirement. In some cases, the coarse coal cleaning is limited to deshaling by hand picking, which is labor intensive and highly inefficient.
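The practice described above, recombining unwashed fines with cleaned coarse coal to just meet the 34% ash limit, is a simple mass-weighted blending calculation. The sketch below illustrates it; the stream tonnages and ash values are hypothetical, not data from any Indian washery.

```python
# Minimal sketch of the blending arithmetic behind the practice described
# above: raw fines are screened out unwashed, only the coarse fraction is
# cleaned, and the two streams are recombined to meet the 34% ash limit.
# All stream weights and ash values below are hypothetical.

def blended_ash(streams):
    """Mass-weighted average ash % of recombined streams [(tons, ash %), ...]."""
    total = sum(t for t, _ in streams)
    return sum(t * a for t, a in streams) / total

# Hypothetical example: 30 t of unwashed fines at 38% ash blended with
# 70 t of washed coarse coal at 32% ash.
ash = blended_ash([(30.0, 38.0), (70.0, 32.0)])
print(f"Blended ash = {ash:.1f}%")  # (30*38 + 70*32) / 100 = 33.8%
```

The calculation makes clear why only minimal cleaning of the coarse fraction is needed: the washed stream's ash has to be driven just low enough that the weighted average of both streams slips under the regulatory limit.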
Table 2.4 Installed capacities of washeries operating in India.

            Non-Coking Coal       Coking Coal           Total
Operators   Number   Capacity     Number   Capacity     Number   Capacity
                     (MTA)                 (MTA)                 (MTA)
CIL         7        20.20        11       19.68        18       39.88
Non-CIL     27       69.60        7        11.27        34       80.87
Total       34       89.80        18       30.95        52       120.75

When used, washing plants are typically preceded by single- or two-stage crushing to reduce the raw coal to a top size of 100, 75 or 50 mm. The smaller fraction of raw coal (-13, -10 or -6.5 mm), which typically contains low ash (20-30%), is usually not washed. The specific size selected for washing or direct consumption depends upon the ash content and the effectiveness of screening. The coarser fraction is washed by jig, heavy medium bath or heavy medium cyclone to the extent that the combined ash of the washed coarse coal and the unwashed small and fine coal is within the stipulated limit. In some of the plants, inefficient barrel washers and spirals are used for small (<10 mm) and fine (<3 mm) coal, respectively, in which case the fraction finer than about 0.5 mm would normally be discarded. In 2007, the approximate capacity for beneficiation of thermal coal was 70 million metric tons per annum (Singh, 2007). For the year 2005-06, India produced 380 million metric tons of thermal coal, of which only 17 million metric tons were beneficiated coals delivered to 12 power stations. The rest were untreated ROM coals. Thus, less than 5% of the coals burned for electricity generation were beneficiated. Also, assuming 80% yield as an average, the 17 million metric tons of clean coal should represent 22 million metric tons of feed coal. Thus, the
beneficiation plants in India were operating at approximately 44% of design capacity, despite the fact that beneficiation offers a number of economic and environmental benefits. Power plants in India have been slow to utilize washed coal for several reasons, including (i) the perception that traditional coal washing adds to the already high cost of supplied coal, (ii) the lack of stringent emission standards (particularly for fly ash) at power stations, and (iii) the absence of a pricing structure that prorates thermal coal value based on variations in gross calorific value (which is standard international practice). Coal India Limited is going to set up twenty coal washeries (coking and non-coking) on a build-own-operate-and-maintain basis with a total capacity of 111 million metric tons (Business Standard, 2011). These washeries are expected to be commissioned by 2015. The establishment of the build-own-operate policy by the Indian government is expected to encourage more rapid deployment of coal beneficiation practices in India.

2.3.1 Existing Coal Beneficiation Technologies

The current total installed capacity for washing non-coking coal is around 89.8 million metric tons (see Table 2.4), whereas the requirement by the end of the 11th Five Year Development Plan is 243 million metric tons. The total installed washing capacity for coking and non-coking coals combined is 120.75 million metric tons. Government-owned CIL accounts for approximately 40 million tons of washing capacity, while about twice this amount is available in non-CIL facilities. It is projected that the amount of thermal coal to be washed by 2025 will be 361 million metric tons. This leaves a huge gap of 271.2 million metric tons of washing capacity, posing a need for a large number of high-capacity washeries. Some of the various coal preparation techniques used in India are described below.
operation and power cost. The process consists of beneficiation of raw coal with a top size of 50 mm in a long cylindrical barrel, with self-generated slurry as the separating dense medium (CMPDI, 2005-06). After initial processing, the barrel float product is crushed and beneficiated in a second stage of cyclones.

Fine Coal Washing

The heart of fine coal circuits consists of dense medium cyclones or under-pulsated jigs. The degree of beneficiation is defined by the raw coal characteristics and the product requirements. Washing of the smaller size fraction, say less than 3-6 mm, is not common for thermal coal, as the desired ash level in the overall washed product is achieved by coarse coal washing. These products are often used as feed for the sponge iron and cement industries. When washed, non-coking coals of the smaller size fraction are typically processed in Batac jigs and dense medium cyclone washeries. Coking coal washeries likewise utilize Batac jigs and dense medium processes for upgrading coking coals.

2.4 DRY SEPARATION OF COAL

2.4.1 Historical Perspective

The quantities of coal being beneficiated and the levels of beneficiation required are increasing while the quality of raw coals is decreasing. Coal is currently cleaned with a minimum of size reduction; fine particle processing, recovery, and tailings disposal are already major problems. Further, adequate water resources are not always available. Dry beneficiation is an alternative approach. Indeed, pneumatic density separation of coal was widely accepted between 1930 and 1960. The reasons for its decline in the 1960s were the inherent inefficiencies of the separators, deterioration of mined coal quality, environmental problems, and the higher percentage
• No moisture penalty to the clean coal product. Since grading is based on ash and moisture percentage, the moisture penalty is a major problem for suppliers in India.

This advantage is particularly important if the amount of improvement needed is minimal. Also, the processing cost for upgrading coal with dry technology is often low.

2.4.2 Dry Coal Cleaning in India

Interest in dry deshaling has increased significantly in recent years among all the major coal-producing countries for cleaning lignite and sub-bituminous coals. Fortunately, deshaling is believed to be particularly well suited to Indian coals, which tend to contain a disproportionate amount of middling particles in which carbonaceous matter (coal) and inorganic minerals (rock) are intermixed. These coal feedstocks usually do not contain a large portion of liberated carbonaceous particles that are relatively free of mineral constituents. As such, it is impossible to apply traditional washing practices, which typically separate coal from rock at relatively low densities (i.e., 1.4-1.7 RD), without losing large amounts of carbonaceous organic matter. On the other hand, many of the feed coals in India contain a significant portion of liberated rock that is inadvertently added to the coal during the mining process. These particles, which are undesirable from both a cost and an environmental perspective, can potentially be removed from ROM coals by low-cost deshaling operations without sacrificing large amounts of carbonaceous organic matter. Honaker et al. (2006) also showed that treating low-ash run-of-mine coal with a dry separator to remove the small amount of rock prior to blending with washed coal has significant economic benefits. Differences in the inherent cleaning characteristics of various coal feeds can be illustrated using the washability data plotted in Figure 2.2. This diagram shows the weight and
ash distribution of two Indian coals as a function of density. The first coal, illustrated in Figure 2.2 (a) and less common in India, contains only a modest amount of middling particles in the relative density range between 1.60 and 1.85 RD. Traditional coal cleaning plants that make use of dense-medium circuits generally operate with a maximum separation density of about 1.65-1.70 RD. This type of plant circuitry can effectively separate the high-density, high-ash rock (>1.85 RD) from the low-density, low-ash coal in the <1.60 RD range. Unfortunately, many of the coal feedstocks in India have weight distributions that are skewed to the higher density range. These high-middling coals, such as that illustrated in Figure 2.2 (b), typically have large weight percentages in the 1.45 to 1.70 RD range. The high middlings content makes it difficult to produce clean low-ash products without discarding a large proportion of the carbonaceous matter (and heat value) present in these middlings. In addition, the large amount of near-gravity solids in the 1.65-1.70 RD range makes the dense medium separators traditionally used in modern coal preparation plants much less effective. For many of the coal feeds in India, a better option for beneficiation is to focus on separating out and discarding only the high-density, high-ash fraction that has the greatest negative impact on the power generation cycle. These high-ash particles contain relatively little heat value, create ash disposal and fly ash emission problems, and unnecessarily consume resources in the transportation infrastructure. For the two coals shown in Figure 2.2, a deshaling plant operating at a high density (e.g., 1.85-2.00 RD) would be ideally suited to remove the high-ash (72-76% ash) materials contained in the >1.85 RD fraction. For these two coals, such deshaling would reduce the tonnage of material that would normally have to be transported,
Figure 2.2 Weight and ash distributions of (a) low-middling and (b) high-middling Indian coals (Honaker, 2007).

burned, and disposed of at the power station by about 25-30%. For the less difficult coal (Figure 2.2a), process simulations indicate that the efficient removal of material denser than 1.85 RD would decrease the feed ash to the power station from about 41% down to below 30%. Likewise, for the more difficult coal (Figure 2.2b), the same process would decrease the feed ash from 45% down to 34%. Deshaling is expected to provide similar improvements for many of the coal feedstocks that are currently mined and consumed in India. A number of projects have been undertaken in India in recent years to identify an efficient dry cleaning technology for thermal coal. In 2005, India installed its first dry processing demonstration unit, a 50 TPH capacity All-air Jig for the beneficiation of coal at OCL India Ltd., Orissa. Other technologies, such as air-dense medium fluidized bed separators, magnetic separators, electrostatic separators, and optical sorting processes, have only been tested at laboratory scale (CMPDI, 2005-06).
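The deshaling calculation described above can be sketched directly from washability data: discard everything denser than the cut density, then compute the reject tonnage and the product ash. The density fractions below are invented for illustration only; they are not the Figure 2.2 data, but they reproduce the same style of result (roughly a quarter of the feed rejected, with a large drop in ash).

```python
# Sketch of a deshaling calculation from washability data: given the weight %
# and ash % of each relative-density fraction, discard all material above the
# cut density and compute the reject weight and product ash. The fractions
# below are invented for illustration, not the Figure 2.2 data.

# (upper relative density of fraction, weight % of feed, ash % of fraction)
washability = [
    (1.50, 20.0, 15.0),
    (1.70, 25.0, 30.0),
    (1.85, 27.0, 45.0),
    (9.99, 28.0, 75.0),  # sink fraction: liberated high-ash rock
]

def deshale(fractions, cut_rd=1.85):
    """Split feed at cut_rd; return (reject wt %, product ash %, feed ash %)."""
    feed_ash = sum(w * a for _, w, a in fractions) / 100.0
    product = [(w, a) for rd, w, a in fractions if rd <= cut_rd]
    prod_wt = sum(w for w, _ in product)
    prod_ash = sum(w * a for w, a in product) / prod_wt
    return 100.0 - prod_wt, prod_ash, feed_ash

reject_wt, prod_ash, feed_ash = deshale(washability)
print(f"Reject: {reject_wt:.0f}% of feed; ash {feed_ash:.1f}% -> {prod_ash:.1f}%")
```

For this hypothetical feed, discarding the >1.85 RD fraction rejects 28% of the mass while lowering the product ash from about 43.7% to about 31.5%, the same order of improvement the process simulations above report for real Indian coals.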
2.4.3 Dry Beneficiation Methods

The separation (wet or dry) of the components in a mixture relies on some difference in the properties of the components. This may be an optical, physical/mechanical, magnetic, electrical, or surface/colloidal property. Mechanical beneficiation of coal is probably effective down to a 1 mm particle size, while electrical and magnetic techniques are most applicable to fine coal below 1 mm (Dwari and Rao, 2007). The effective top particle size for most of the separators was around 2 inches, and the effective size ratio for which good separation was achieved was between 2:1 and 4:1. Note that this effective particle size range is much narrower than for most wet, density-based separators. The reported probable error (Ep) values, i.e., half the difference between the relative densities at which 75% and 25% of the feed reports to a given product, vary with the particle size range treated. For example, within a small particle size range of 4:1, the Ep values ranged from 0.15 to 0.25, whereas a 50:1 ratio gives values around 0.30. These values indicate that air-based systems are much inferior in separation efficiency to wet coarse coal cleaning units. However, dry coal cleaning devices typically have lower capital and operating costs, no waste water treatment or impoundment requirements, lower product moisture, and fewer permitting requirements. If a high-density separation provides the desired effect on coal quality, dry cleaning separators are an attractive option. Several processing technologies used during the peak years of dry coal preparation have recently been modified and successfully commercialized. The All-air Jig, for example, is a modification of the Stump jig technology and is commercially represented by Allmineral Ltd. (Kelley and Snoby, 2002). The unit has been successfully applied in several coal cleaning applications within and outside the U.S. (Weinstein and Snoby, 2007). Chinese researchers and manufacturers have applied basic fundamentals, including computational fluid
dynamics to the redesign of dry particle separators, including those employing dense medium and tabling principles. Various technologies, classified by the material property on which the separation of coal and mineral particles is based, are described below.

Sorting

The sorting of coal can be carried out by different techniques such as optical, radioactivity, microwave, and nuclear magnetic resonance methods (Riedel and Wotruba, 2004). In sorting, the property (appearance or color) usually only "labels" the components, while mechanical deflection, e.g., air jets, separates the differently labeled particles. Dry sorting has been applied to a variety of minerals, using reflectance, photometric, infra-red, fluorescence, X-ray, radiometric, electrical conductivity, and magnetometric methods. Coal has been sorted by X-ray, gamma-ray, and electrical methods, and this field seems worthy of re-examination in light of the modern high-speed technology used for sorting mineral ores. There is a significant effort to develop and improve on-line and in-situ ash analysis of coal using X-ray and nuclear techniques, and perhaps this technology could be adapted to reject high-ash particles from coarser coal.

Shape and Friability

The Beresford plate separator is designed on the principle of friction and resilience (Mitchell, 1942; Horsfall, 1980). Sized coal particles are fed onto an inclined polished glass plate. Coal has greater resilience, or "bounce," and does not remain in contact with the glass as long as the waste minerals do. Therefore, coal acquires more speed and is thrown further when it reaches the end of the plate.

The rotary breaker is based on the relative friability of the material. It achieves size reduction by repeatedly raising the coal material and dropping it against strong perforated screen plates
around the interior (Bhattacharya et al., 2004). Lumps are broken down and coal passes through screen-sized openings, so the unit also acts as a deshaler. Researchers have shown its poor efficiency with lower rank coals and Gondwana coals.

Tables and Jigs

Particle settling rates in air are much higher than those in water. Pneumatic jigs and air tables separate particles as material passes over a porous vibrating bed. The high-density material falls to the bottom and is removed by the action of the bed, while the less-dense material moves by gravity across the direction of vibration of the bed. In the case of an air jig, material moves in the same direction and separation is achieved by positioning a splitter on the vertically stratified material (Stump, 1932). The Eriez dry deshaling air table generates a helical motion with air stratification and produces multiple products: clean coal, middlings and refuse. The technology is discussed in detail in a later section. China has developed the FGX dry cleaner separating deck (Gongmin, 2010) based on the same principle, and it was reported that the unit has been successful on a wide range of coal types and has been implemented in more than 900 plants throughout China, Mongolia, Indonesia, North Korea, Ukraine, the United States and elsewhere.

Air Dense Medium Fluidized Bed Separator

In the air dense medium fluidized bed separator, the gas-solid fluidized bed must have fluid-like characteristics. The upward air current automatically expands the bed surface to the same level; the bed behaves like a liquid. Particles with density less than the bed density float to the top of the surface of the bed, while particles denser than the bed density sink to the bottom of the container. For efficient separation, stable dispersion fluidization and micro-bubbles must be
achieved. The buoyancy of the material being beneficiated and the displaced distribution effect play a significant role in the fluidized bed (Luo et al., 2002).

When the fluidizing medium is chosen from narrowly sized particles that are appreciably finer than the feed material to be separated, the segregation seems to depend largely on density. The feed can span a wide size range, the limits being determined by the difficulty of maintaining the air velocity above the minimum needed to fluidize the largest particles and below the terminal settling velocity of the finest particles. Fine particles that are to be separated can themselves combine to form an "autogenous" fluidized medium, and they float or sink according to their size and density. Moisture also affects the fluidizing characteristics of particles, especially fine ones, but this is a consequence of sticking and aggregation and is not a deficiency of the fluidization technique. Biswal et al. (2002) and Sahu et al. (2005) designed, developed, fabricated, and successfully commissioned such a unit for high-ash non-coking coal from the Talcher coalfields, Orissa.

Magnetic Separator

Coal is a weakly diamagnetic material. A particle becomes magnetized to some extent in the presence of a magnetic field and acts as a magnetic dipole. Magnetic separation may be used for coal beneficiation when the gangue minerals contain an iron phase. The magnetic susceptibilities involved in coal separations are very small, so a strong magnetic field is required. Some of the iron-containing minerals in coal are strongly paramagnetic and are among the major ash-forming minerals; significant ash reduction can be achieved by magnetic separation in such cases. Magnetic separation of coal material can be done by two methods, namely High-Gradient Magnetic Separation (HGMS) and Open-Gradient Magnetic Separation (OGMS). With the former, the separation is accomplished
by applying a large force over a short distance, while in the latter, a smaller force is applied over a much larger distance.

Electro-static Separator

Separation occurs under the influence of a very high electric field. Prior to the separation stage, particles have to be electro-statically charged. The separation of mineral matter from the organic phases in coal is based on the differences in the ability of the two phases to develop and maintain charges in different types of separators. There are two such types of electrostatic processes: one uses the difference in electric resistivity, while the other uses the difference in the electronic surface structure (Kelly and Spottiswood, 1989; Mazumder et al., 1994). Conductive induction, tribo-electrification and ion or corona bombardment are common commercial methods of electric separation.

Coal is generally less conducting than mineral matter, except perhaps in the case of brown coal, which has a high water content and also often a high ion content. Pyrite is the most conducting mineral that is commonly found in coal. Furthermore, vitrain is known to be less conducting than fusain and durain. In corona charging, all particles take the same sign of charge, and it is the different rates of loss of this charge, which depend mainly on the relative conductivities, that permit separation; tribo-electric separation, by contrast, is based on the deflection of oppositely charged particles in opposite directions. When the results of electrical separation experiments are analyzed, it is desirable to determine maceral as well as ash contents in the fractions, because otherwise organic material reporting to the high-ash fraction might be interpreted as poor separation performance (Lockhart, 1984).
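Returning to the air dense medium fluidized bed separator discussed above, the feasible air-velocity "window" (above the minimum fluidization velocity of the coarsest medium particles, yet below the terminal settling velocity of the finest particles) can be sketched numerically. The short script below is only an illustration: it uses the standard Wen-Yu correlation for minimum fluidization and Stokes' law for terminal settling, and all particle sizes and densities are assumed values, not data from this study.

```python
# Estimate the fluidization window for an autogenous air-medium bed:
# air velocity must exceed the minimum fluidization velocity of the
# coarsest particles but stay below the terminal settling velocity of
# the finest ones. Wen-Yu correlation (minimum fluidization) and
# Stokes' law (terminal velocity, strictly valid for Re < ~1).
import math

RHO_AIR = 1.2    # kg/m^3, air density at ambient conditions
MU_AIR = 1.8e-5  # Pa*s, air viscosity
G = 9.81         # m/s^2

def u_mf(d, rho_p):
    """Minimum fluidization velocity (m/s) via the Wen-Yu correlation."""
    ar = d**3 * RHO_AIR * (rho_p - RHO_AIR) * G / MU_AIR**2  # Archimedes number
    re_mf = math.sqrt(33.7**2 + 0.0408 * ar) - 33.7
    return re_mf * MU_AIR / (RHO_AIR * d)

def u_t_stokes(d, rho_p):
    """Terminal settling velocity (m/s) via Stokes' law (fine particles)."""
    return G * d**2 * (rho_p - RHO_AIR) / (18 * MU_AIR)

# Illustrative bed: 0.5 mm shale medium (2600 kg/m^3); 100 um coal fines.
lower = u_mf(0.5e-3, 2600.0)        # must fluidize coarsest medium particles
upper = u_t_stokes(100e-6, 1400.0)  # must not elutriate the finest coal
print(f"operating window: {lower:.3f} m/s < u < {upper:.3f} m/s")
```

With these assumed inputs the window is roughly 0.2 to 0.4 m/s; at 100 um the Stokes estimate slightly exceeds its validity range, which is acceptable for a rough sketch.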
2.5 ERIEZ DRY DESHALING AIR-TABLE

The Eriez dry deshaling air table uses the separation principles of an autogenous medium and a table concentrator. The separating compartment comprises a deck, vibrator, air chamber and hanging mechanism (Figure 2.3). Feed particles introduced into the separating chamber are subjected to upward air flows that pass up through holes in the deck surface at a rate sufficient to transport and fluidize the particles. Particles in the fluidized bed contact the deck surface and are pushed from the discharge baffle plate to the back plate by the vibration-induced inertia forces. Upon striking the back plate, particles move upward and inward towards the discharge side of the table (Figure 2.4). Low-density particles of coal are lifted up the back plate to a higher elevation than the high-density particles of rock before turning inward toward the discharge point. These lighter particles create an upper layer that is collected along the length of the table as clean coal and/or middlings. The denser particles of rock are forced by both vibration and the continuous influx of new feed material to move in a helical transport pattern toward the narrowing end of the table. This action sorts the coal and rock particles along the length of the table. Figure 2.5 provides direct photographic evidence of the selective segregation of coal and rock using the air table separator.

The air table requires several ancillary units to support its operation (see Figure 2.6). During a typical run, the flow of feed to the separating chamber is regulated using a feed bin and vibrating feeder. The separating chamber is totally enclosed in a housing that is connected via duct work to a bag filter and dust cyclones. The filter/cyclones are used to clean and recycle the air and to prevent dust from being emitted to the atmosphere. A draft fan and centrifugal blower are used to circulate air at the required rates.
attractive and cost-effective alternative to traditional wet-coal cleaning processes, particularly for green-field sites where coal cleaning operations are being utilized for the first time, as in India, where only 4-5% of the coal is currently being cleaned. The data obtained from studies conducted in China indicate that the unit has the potential to provide an effective separation for particles as coarse as 80 mm down to a lower size limit of around 3 mm. The operational data also indicate that the process is relatively insensitive to surface moisture up to a value of about 7-10% by weight.

The previous test data also indicate that the Eriez dry air table separator has the ability to provide a relatively high separation density (RD50) of around 2.0 while achieving probable error (Ep) values that range from 0.15 to 0.25 (Lu et al., 2003; Li and Yang, 2006). This level of performance provides high organic efficiencies approaching 97%. The capital cost for a 250 t/hr unit was reported to be less than one fourth (25%) that of a traditional wet-preparation plant design, with operating costs below US $0.30 per ton. These characteristics make the dry deshaling technology ideal for coal cleaning applications in India. The only downside discovered during the course of this field work was that the dry process is not capable of treating particles finer than about 3-6 mm. This finding suggests that additional R&D work is needed to develop other types of dry deshaling processes that can more effectively treat coal fines.
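The separation density (RD50) and probable error (Ep) quoted above are read off the partition (Tromp) curve: RD50 is the density at which 50% of feed material reports to reject, and Ep = (d75 - d25)/2. The sketch below shows the calculation with linear interpolation; the partition numbers are hypothetical, chosen only to illustrate values in the reported range, and are not data from the cited studies.

```python
# Probable error (Ep) from a partition curve: Ep = (d75 - d25) / 2, where
# d25 and d75 are the relative densities at which 25% and 75% of the feed
# reports to reject. Partition numbers below are hypothetical.
densities = [1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.2, 2.4]
pct_to_reject = [2, 5, 10, 18, 30, 48, 68, 88, 97]  # partition numbers (%)

def density_at(target_pct):
    """Linearly interpolate the density where the curve crosses target_pct."""
    points = list(zip(densities, pct_to_reject))
    for (d1, p1), (d2, p2) in zip(points, points[1:]):
        if p1 <= target_pct <= p2:
            return d1 + (d2 - d1) * (target_pct - p1) / (p2 - p1)
    raise ValueError("target outside curve range")

d25, d75 = density_at(25), density_at(75)
d50 = density_at(50)        # separation density, RD50
ep = (d75 - d25) / 2
print(f"RD50 = {d50:.2f}, Ep = {ep:.3f}")
```

For this hypothetical curve, RD50 comes out near 1.9 and Ep near 0.16, i.e., inside the 0.15-0.25 band reported for the unit.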
CHAPTER 3 EXPERIMENTATION

3.1 INTRODUCTION

The objectives of the project work at the different coal mines were to test advanced dry cleaning technologies on Indian steam coals to produce low-ash products that can be more cleanly burned, reduce the amounts of fly ash generated, and minimize maintenance costs. Increasing the availability of high-quality coals will help India increase the net efficiency of coal use and facilitate implementation of state-of-the-art clean coal technologies to substantially reduce CO2 emissions.

During the past decade, Virginia Tech and the University of Kentucky jointly tested a pilot-scale (5 t/hr) dry deshaling unit at various test sites in the United States. The equipment was found to be capable of efficiently removing shale from a coarse coal feed with particles in the size range of 50 x 6 mm. The technology is marketed by Eriez Manufacturing. An inherent advantage of this technology is that it is a dry separator, which does not incur dewatering costs. The test results obtained on a bituminous coal sample from Utah showed a reduction in ash content from 18 to 11% with a yield of 77%. With a bituminous coal from Virginia, a feed coal assaying 49% ash was cleaned to produce 21% ash clean coal, 74% ash middlings, and 89% ash rejects. With a Gulf coast lignite coal, the separator was able to remove 54% of the mercury by rejecting iron sulfide minerals. As a result of the successful test work, the company mining the lignite coal has installed two full-scale Eriez deshaling units, with a total capacity of 240 t/hr (Honaker and Luttrell, 2007).

Virginia Tech served as the prime contractor and the University of Kentucky served as a subcontractor for this project. Eriez Manufacturing provided the pilot-scale unit for cleaning the coarse coal (60 mm x 0). Coal samples were provided by the different private and public sector
coal mining industries located in the eastern part of India, which is the richest coal region of the country.

3.2 SAMPLING AND CHARACTERIZATION

Work performed under this subtask focused on the identification of sites/seams in India that are the best candidates for the application of the dry deshaling technology. Factors considered in the final selection of test sites included (i) cleanability characteristics of the coal, (ii) lack of availability of water for competing processes, (iii) transportation distance between mine and consumption site, and (iv) willingness of producers to implement new technology. For the experimental portions of this work, guidelines were created by the project team for collecting the required characterization data (i.e., standards were established for sampling, sizing, float-sink testing and analytical determinations).

Representative samples of 13 different coals from India were selected by Sharpe International for use in the characterization study. The characterization work for these coals included screening each sample into 50x25, 25x13 and 13x5 mm size fractions, followed by float-sink testing of each size fraction at relative densities of 1.4, 1.5, 1.6, 1.7, 1.8, 1.9 and 2.0 RD. Tables 3.1-3.13 provide detailed listings of the size-by-size washability data for each sample. However, for ease of comparison, the float-sink data for the composite 50x0.5 mm material fraction of each coal are plotted in Figure 3.1. This composite plot shows that, despite washability characteristics that varied considerably from site to site, a significant proportion of high-density material was present in nearly every sample. This finding suggests that relatively pure extraneous rock was present to some degree in nearly every coal and could be removed by a dry deshaling process.
To better compare the relative cleanabilities of the different coal samples, a separation index (SI) was calculated for each sample. The SI value was computed by summing the amounts of the low-density (<1.5 RD) and high-density (>2.0 RD) fractions. Presumably, a large SI value would indicate that larger proportions of liberated coal and rock were present in the sample and could be effectively separated, while a low SI value would indicate the presence of large amounts of near-density material in the 1.6-2.0 RD range that could not be separated due to interlocking. The computed SI values, which are summarized at the bottom of Figure 4.4, ranged from a low of 34 to a high of 74. Four coals, which had SI values of 65 or greater, were considered to be excellent candidates for deshaling. Six other coals, with SI values between 46 and 58, were considered good candidates also suitable for some degree of deshaling. Only three coals, with SI values less than 40, were believed to contain inadequate liberated material for effective deshaling. As such, this set of data suggests that considerable potential does indeed exist for the application of dry deshaling technologies for upgrading Indian coals.
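The SI calculation described above amounts to a single sum over the washability distribution. As a minimal sketch (the density-fraction weights below are hypothetical, not taken from Tables 3.1-3.13):

```python
# Separation index (SI) used to rank cleanability: the sum of the mass
# fractions floating below 1.5 RD and sinking above 2.0 RD.
washability = {       # density interval -> weight % of feed (hypothetical)
    "<1.5":    28.0,  # liberated coal
    "1.5-1.6":  7.0,
    "1.6-1.7":  6.0,
    "1.7-1.8":  5.5,
    "1.8-1.9":  4.5,
    "1.9-2.0":  5.0,
    ">2.0":    44.0,  # liberated rock
}

si = washability["<1.5"] + washability[">2.0"]
near_density = 100.0 - si   # complement: material in the 1.5-2.0 RD range
print(f"SI = {si:.0f}, near-density material = {near_density:.0f}%")
```

A sample with this hypothetical distribution would score SI = 72, near the top of the 34-74 range reported for the 13 coals, and would therefore rank as an excellent deshaling candidate.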
The pilot-scale Eriez dry coal cleaning unit and associated equipment were shipped from the U.S. in February 2009 and arrived at the Eriez factory in Chennai in May 2009. Following an extensive functional check-out period, the equipment was transferred by truck to Talcher, Orissa, arriving late July 2009. Map locations of the various test sites are shown in Figure 3.2.

3.3.1 Aryan Energy Private Limited

The test site at Aryan Energy Private Limited is located at Talcher, Orissa, India. Typically, the facility processes 2 million metric tons annually. The processing facility at Aryan Energy presently utilizes a dense medium vessel to produce a low-ash product and at times uses a water-based barrel washer to deshale feed for a high-ash product. The company installed a shaking table technology from China several years ago, but the circuit has only provided an ash reduction of 5% due to inadequacies in the design of the equipment.

The main source of run-of-mine (ROM) coal for the Aryan Energy processing facility is the Mahanadi coalfields. The beneficiation process includes size reduction and wet processing through barrel washers (280 TPH). The feed size to the washer is 60 mm x 0. The production depends on the market demand and the orders received. The major markets for the washed coal are power plants and sponge iron plants in Orissa. The two major customers are National Thermal Power Corporation and Vedanta Resources.

The coal received for testing in the dry deshaler unit came from two different seams of the Mahanadi coalfields, namely Balaram OCP (Tests 1 to 13) and Hingula OCP (Tests 14 and 15). The coal contains highly carbonaceous material with little pyrite and mercury, but was high in ash with a large amount of near-gravity material. The test sample is soft (high HGI) and contains a large amount of fine material (35% to 40%). Due to the monsoon season, the surface moisture was 13%-17%.
The size distribution and washability analyses for the Balaram seam are presented in Table 3.14 and Table 3.15. Likewise, the size distribution and washability analyses for the Hingula seam are presented in Tables 3.16 and 3.17, respectively. For convenience, the washability data for the two seams are plotted in Figure 3.3 and Figure 3.4.

3.3.2 Bhushan Power and Steel Limited

The Bhushan Power and Steel complex is located at Jharsuguda, Orissa, India. The complex has two washing facilities: Washery I processes 250 t/hr of coal with a Batac Jig, and Washery II processes 600 t/hr using dense media cyclones. The low-ash product is used for four direct reduction iron (DRI) kilns, producing 500 metric tons per day (t/day) of iron. In addition to the four kilns, the integrated 1.5 million metric tons per year (t/yr) complex includes a compact strip production (CSP) plant, blast furnace, coke oven plant, sinter plant, oxygen plant, steel making facility, and lime and dolomite plant. The complex produces hot roll coil, steel billets, alloy steel rounds, tor steel, wire rods, pig iron, sponge iron, cold roll coils, cold roll sheets, precision tubes, cable tapes, black pipe and corrugated sheets. The high-ash product is used at the integrated power generation station.

The main source of run-of-mine coal for Bhushan's processing facility is the Mahanadi and IB Valley coalfields, with coal received from more than 20 different seams. The feed ash for processing varies from 46-54%. Unfortunately, the research team was unable to obtain any washability data for the coals that were evaluated during the pilot-scale test program.
3.3.3 Kargali Washery, Coal India Limited

The Kargali Washery of Central Coalfields Limited, a subsidiary of Coal India Limited (the largest coal producing company in India), is located near Bokaro, Jharkhand, India. The Kargali site typically processes 2.72 million metric tons annually. Commissioned in 1958, the facility is one of the oldest coal washeries in India.

The main source of run-of-mine (ROM) coal for the Kargali Washery is open pit mines in the Karanpura and Bokaro coalfields. The beneficiation process includes size reduction, dry screening and wet deshaling through a ROM Jig. The feed size to the washery is 80 mm x 0. The feed is dry screened at 50 mm. The plus 50 mm material reports to the jig, which produces a clean coal at 30% yield. The dry minus 50 mm product is mixed with the jig clean coal to produce a 34% ash product at 81% overall yield. The production is transported by rail to a nearby Bokaro Electric Power Corporation generating station.

The ROM coal received for the testing session was from the KMP, Tarmi, and Dhori seams. These coals are high-ash, low-sulfur, moderately carbonaceous materials, and exhibit a high percentage of near-gravity material. The coals have a medium Hardgrove Grindability Index (HGI). The surface moisture is typically in the 3-7% range, with the moisture increasing to 13-17% during the monsoon (rainy) season. The size distribution and washability analyses for a sample of the three seams are presented in Table 3.18 and Table 3.19, respectively. For convenience, the washability data for the sample are plotted in Figure 3.5.

3.3.4 Tata Steel, West Bokaro

Tata Steel is already operating heavy media washeries at West Bokaro (Washery 2 and Washery 3) for producing coking coal. The company is planning to put up a new 800 t/hr
3.4 PILOT SCALE TESTING

The dry coal cleaning unit is based on a shaking table technology augmented with fluidizing air passed through the bed of the table. A typical air table circuit arrangement is presented in Figure 2.3. The circuit circulates approximately 90% of the fluidizing air back to the table, leaving about 10% of the air to provide a negative pressure within the unit for dust control. A sketch of the shaking table is presented in Figure 2.4.

The pilot-scale Eriez dry coal cleaning unit and associated equipment were shipped from the U.S. in February 2009 and arrived at the Eriez factory in Chennai in May 2009. Following an extensive functional check-out period, the equipment was transferred by truck to Talcher, Orissa, arriving late July 2009, and to Jharsuguda, Orissa, in August 2009. The equipment was returned to Chennai and again transferred by truck to Kargali, Jharkhand, in March 2010 for the second phase of pilot-scale testing.

3.4.1 Pilot Scale Set-up

A 5 t/hr pilot-scale air table separator unit utilizing a 1 m² table was used at the various test sites. In all cases, the ROM feed to the unit was supplied directly from the mine or pre-screened to remove the minus 6 mm material prior to treatment. Underground coal sources required pre-screening due to the relatively large amount of fines and the amount of surface water present from dust suppression activities. Feed coal was fed to the bin shown in Figure 3.6 by a conveyor or front-end loader. Feed from the feed bin was controlled using a vibratory feeder and transferred via a conveyor belt to a hopper that feeds an internal screw conveyor. The screw conveyor feeds an internal hopper that subsequently feeds the back right corner of the table (Figure 3.7).

Upon entry of the feed onto the table deck, the upward flow of air creates a fluidized bed of fine reject, which causes the light coal particles to migrate to the top of the particle bed. The
coal moves toward the right front part of the table due to the downward slope of the table (e.g., a typical slope is 8 degrees downward from the back of the table to the front). The high-density particles ride on the table surface, which is vibrated at a preset frequency. The vibration drives the high-density particles to the back of the table, where the particles are forced by mass action to travel toward the left side of the table. The particles that overflow the front of the table are directed into the product, middling or tailing bins by splitters that are adjusted to achieve a clean coal product and high-ash tailings. The material in each of the three bins is transferred by conveyor away from the dry deshaling unit.

3.4.2 Shakedown Testing

Two rounds of testing were conducted using the pilot-scale test unit. The first round was performed at the Aryan Energy and Bhushan test sites. Several major problems were encountered during equipment setup at the Aryan Energy site due to the long truck transport (approximately 1,400 km) of the pilot-scale test unit from Chennai. One of the main table support cables had broken, the internal feeder supports had detached from the overhead mount, various pieces of chute work had been bent, and the electrical controls had to be secured in the control panel. After almost two days of repairs, the unit was ready to process feed coal. After completing work at the Aryan Energy site, the field research team traveled to the Bhushan test site following the transport of the deshaler unit there. Several delays were encountered during the setup of the unit, including finding an appropriate location at the raw coal stockpile area and heavy monsoon rains during the first two days of testing.
3.4.3 Detailed Testing

The pilot-scale test unit was operated in continuous mode by filling the feed hopper with material. The main fluidizing air fan and separating table were started, and the feeding equipment was started once the fan and table were operating at the specified speed. After the unit had operated for several minutes to attain steady-state conditions, three increments of the feed and table products were collected for each sample. The sampling arrangement provided a method for sampling the table products in six sections using a collection device designed to split the material exiting the edge of the table into six different fractions. The splits allowed the quality of the material exiting the table to be evaluated as a function of table length. After the three increments were collected, the feed, the table, and the fan were stopped while the sample components were weighed and tagged in preparation for laboratory analysis.

Each sample was then screened at 6 mm, and separate analyses of the plus 6 mm and minus 6 mm fractions were conducted. It should be noted that the results reported in the next chapter are only for plus 6 mm particles, as the pilot-scale unit is meant to treat coarse particles only (the fine coal testing results are reported in the U.S. Department of State report, S-OES-07-APS-0001, as a separate study). The dust was collected from the bag-house dust collector after each sample period. The operating parameters were then adjusted for the next sample, and the operating/sampling cycle was repeated. The major operating variables examined during the testing sessions were the table slope (length-wise angle), the table oscillating frequency, and the main fan motor speed (Honaker and Bratton, 2009). Data obtained from previous testing (Honaker et al., 2008) of the pilot-scale unit indicated that the table cross-wise angle had minimal influence on the separation performance and could be held constant.
CHAPTER 4 RESULTS AND RECOMMENDATIONS

4.1 INTRODUCTION

The dry deshaling technology is popular in various parts of the world for the reasons mentioned in previous chapters. In the United States, interest in the technology has increased significantly in recent years. Honaker et al. (2006) also showed that the treatment of low-ash run-of-mine coal to remove the small amount of rock using a dry separator prior to blending with washed coal has significant economic benefits. The dry deshaling air-table separator provides a dry, density-based separation that utilizes the combined separating principles of an autogenous fluidized bed and a table concentrator.

The dry cleaning process has been evaluated at different mining operations across India for the treatment of run-of-mine coal and coarse coal reject of all ranks. The objectives of the test programs at each site varied and included (1) the production of clean coal having qualities that meet the 34% ash specification of the receiving thermal power plants and (2) maximization of the amount of high-density rock rejected prior to transportation and processing. A 5 t/hr pilot-scale unit of the dry deshaling air-table separator was installed, and a detailed parametric study was performed at each site to ensure that optimum performance for each coal was obtained.

The dry deshaling pilot-scale test work in India was done in two rounds. In Round I testing, the unit was tested at two different field sites, Aryan Energy Private Limited and Bhushan Power and Steel Limited. The coal sample source at the first site was the Mahanadi coalfields region, while at the second site the sample source was a mixture of Mahanadi and IB Valley coalfields (mostly sub-bituminous coal, see Figure 2.1). The Round II testing was done at
Kargali Washery of Coal India Limited, on coal samples from the North Karanpura coalfields. Mid-coking coal samples from Tata Steel, West Bokaro, were also tested at the same site.

4.2 RESULTS AND EVALUATION

4.2.1 Pilot-Scale Testing - Round I

A total of 15 tests were conducted at the Aryan Energy facility using the pilot-scale deshaling unit. The ROM feed ash for all the tests ranged from 48-52%. The test results for the plus 6 mm material, which are summarized in Table 4.1, indicate that this particular coal feedstock responded well to the dry deshaling technology (note that the data from tests 1 and 2 could not be reported due to problems associated with the laboratory analyses of these particular samples). The minus 6 mm material, which represented 11.4% of the feed and contained 45% ash, is not reported in this study. With the exception of the combination of test parameters evaluated in test run 6, the deshaling unit rejected a substantial amount of high-ash (>60% ash) material in all cases with only modest losses of combustible matter. In test run 14, the deshaling unit rejected material as high as 66% ash with a product yield of about 80% and only about a 13% loss in combustible matter (i.e., approximately 87% energy recovery).

A total of 13 tests were conducted at the Bhushan Power & Steel complex. The raw feed ash content for these coal samples ranged from 50-54%. The test results for the plus 6 mm material, as shown in Table 4.2, show that the combustible recovery obtained for this particular coal was as low as 80% at 74% yield and as high as almost 93% at 87% yield. The material rejected in these tests was as high as 77% ash with a product yield of about 87% and only about a 6% loss in combustible recovery (test run 10). The minus 6 mm fraction, which represented 10.9% of the feed with an ash content of 52%, is not included in the study. The variation in performance is due to the different settings of the operating parameters for the unit.
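The yield, combustible recovery, and ash rejection figures quoted in these test summaries follow from the standard two-product mass-balance formulas. The sketch below uses illustrative ash values chosen to roughly reproduce the Aryan Energy test run 14 figures; the 46% clean coal ash is an inferred, hypothetical input, not a reported value.

```python
# Standard two-product (yield) formulas used to evaluate deshaling results:
# given feed, clean-coal and reject ash contents (wt %), the clean-coal
# yield, combustible recovery and ash rejection follow by mass balance.

def two_product(feed_ash, clean_ash, reject_ash):
    yield_clean = 100.0 * (reject_ash - feed_ash) / (reject_ash - clean_ash)
    comb_rec = yield_clean * (100.0 - clean_ash) / (100.0 - feed_ash)
    ash_rej = 100.0 - yield_clean * clean_ash / feed_ash
    return yield_clean, comb_rec, ash_rej

# Illustrative inputs near the Aryan Energy test run 14 results.
y, cr, ar = two_product(feed_ash=50.0, clean_ash=46.0, reject_ash=66.0)
print(f"yield = {y:.1f}%, combustible recovery = {cr:.1f}%, "
      f"ash rejection = {ar:.1f}%")
```

With these inputs the formulas return an 80% clean coal yield and about 86% combustible recovery, consistent with the roughly 13% combustible loss reported for that run.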
The testing sessions at the Aryan Energy and Bhushan Power & Steel complex sites successfully demonstrated that the dry coal cleaning technology from Eriez Manufacturing was capable of removing high-ash material from the raw feed presently being processed by these coal groups. At Aryan Energy, the technology successfully met the original project goals of recovering 85-90% of the coal heating value while rejecting significant amounts of incombustible high-ash (greater than 65% ash) rock. Similarly, at the Bhushan Power & Steel complex, the unit met the original project goals of recovering more than 90% of the coal heating value while rejecting significant amounts of incombustible high-ash (greater than 70% ash) rock. Composite performance plots of all results from the two test sites are summarized in Figures 4.1 and 4.4 for the Aryan Energy and Bhushan Power and Steel sites, respectively. These plots show the combustible recovery as a function of clean coal ash for each series of test runs. The full performance curve was plotted in each case using samples collected at different points along the length of the separating table. In addition, breakout plots for six of the best test runs at each site are summarized in Figures 4.3 and 4.6. These plots provide additional insight into the degree of separation since they show both the combustible recovery versus clean coal ash percent and the expected ash rejection versus reject ash content. Finally, Figures 4.2 and 4.5 are included to show the best overall performance levels expected for these two coals. For the Aryan Energy test run 14, it was possible to attain 90% combustible recovery while maintaining a reject ash of about 68% and a yield to reject of about 17%. Likewise, for Bhushan test run 10, the data indicate that a reject ash of over 75% would be produced with about 18% yield to reject at a 90% combustible recovery of coal to the clean product.
As such, both sets of results suggest that the dry coal deshaling technology is a viable option for the removal of high-ash impurities while still maintaining high recoveries of valuable combustible matter.
4.2.2 Pilot-Scale Testing – Round II

A total of 21 tests were conducted to evaluate coals delivered to the Kargali Washery facility, but the corresponding laboratory analyses were obtained for only 14 tests. The 75 mm x 0 feed coal contained 52 to 59% ash, of which the minus 6 mm fraction represented 48%, with a corresponding ash content of 41%. The results for plus 6 mm material, as shown in Table 4.3, indicate that the combustible recovery for this particular coal ranged from 81% to 94%, with corresponding clean coal yields from 68% to 87%. The data clearly indicate that high-ash (greater than 70% ash) material can be rejected by the dry coal cleaning technology with a minimal loss in combustible recovery. Depending on the settings of the operating variables for the unit, the rejected material could be as high as 86% ash with a clean coal yield of about 80% and only about a 7% loss in combustible matter (i.e., approximately 93% energy recovery; test run 11).

Table 4.3 Ash rejection data for Kargali Washery.

Test Run   Combustible Recovery %   Reject Ash %   Reject Yield %
7          89.61                    83.40          27.77
8          93.38                    79.00          16.17
10         85.89                    77.89          27.19
11         92.57                    85.92          20.73
12         94.43                    84.99          16.15
13         85.65                    71.09          25.13
14         89.81                    82.87          25.07
15         93.89                    81.55          16.64
16         81.26                    74.47          31.54
17         95.19                    83.10          12.64
18         91.03                    78.81          20.07
19         89.48                    76.56          21.15
20         92.03                    80.00          20.15
21         88.84                    79.26          24.74
Average    90.22                    79.92          21.80
percentage of finer material in the feed stream appeared to aid the separation process by enhancing the fluidization of the bed. Unfortunately, the majority of the finer material reported to the clean coal stream regardless of quality. It was also observed that the longitudinal slope, table frequency, and fluidizing air volume were the most critical operating parameters controlling both coal recovery and product grade. When the feed coal contained large amounts of high-density rock, a lower slope angle provided less resistance for the rock to move towards the reject discharge end of the table. The table frequency and fluidizing air volume parameters also required adjustment to provide the proper movement and an optimum separation environment for high-ash feed materials (Bratton, 2010).

4.3 FEASIBILITY ASSESSMENT

4.3.1 Technical Feasibility

The testing sessions at the Aryan Energy and Bhushan complex sites successfully demonstrated that the dry coal cleaning technology from Eriez Manufacturing was capable of removing high-ash material from the raw feed presently being processed by these coal groups. At Aryan Energy, the technology successfully met the original project goals of recovering nearly 85-90% of the coal heating value while rejecting significant amounts of incombustible high-ash (greater than 65% ash) rock. Similarly, at the Bhushan complex, the unit met the original project goals of recovering more than 90% of the coal heating value while rejecting significant amounts of incombustible high-ash (greater than 70% ash) rock.

In the second round of pilot-scale testing at the Kargali Washery (CIL), the unit again successfully demonstrated that the dry coal cleaning technology was capable of removing high-ash material from the raw feed presently being processed by the washery. The technology successfully met the original project goals of recovering nearly 85-90% of the coal heating value
rock reduces grinding demands, improves boiler efficiency, increases carbon burn-out, reduces slagging/fouling problems, decreases erosion rates, and lowers bottom ash/fly ash loads.

The economics of deshaling for steam coal applications are understood on the basis of an improved heating value and the fact that utilities pay on the basis of US $/MMBtu (1 MMBtu = 1 million Btu) rather than US $/ton. Consider a situation where a coal with a heating value of 12,500 Btu/lb is worth US $50/ton. The total heating value for the coal is 25 MMBtu per ton (= 2,000 lb/ton x 12,500 Btu/lb). As such, the monetary value of the coal is US $2 per MMBtu (= $50 / 25 MMBtu). Thus, improving the heating value through deshaling provides the potential to significantly improve revenue (Honaker and Luttrell, 2007).

4.3.3 Environmental Benefits

It is also important to note that the direct cost benefits provide a significant financial incentive for the implementation of dry coal beneficiation processes that also have positive impacts on the environment. The removal of high-ash rock will reduce the emissions of many air toxics, since potentially hazardous air pollutant precursors typically associate with the mineral components in run-of-mine coals. The disposal of rock prior to combustion produces a waste that is coarser and substantially less reactive in the environment than the high-surface-area ash from the combustion process. Furthermore, the use of a dry process eliminates potential hazards that are commonly associated with the wet processes traditionally used in modern coal preparation facilities. The use of beneficiated coals may increase thermal efficiencies by up to 4-5%, which can provide a corresponding reduction in CO2 emissions of up to 15%. From a longer-term perspective, the use of deshaling processes to increase the availability of higher-quality coals will
Extensive testing would be required to identify the coals that would achieve the best separating efficiency.

The CIL processing facility at Kargali presently utilizes a ROM jig to produce a deshaled low-ash product, which is combined with the minus 50 mm high-ash dry raw product. The Eriez dry coal cleaning technology could be used to deshale the dry raw screen undersize material (minus 50 mm) to remove the high-ash rock presently in the dry bypassed material. In addition, processing the oversize material (presently treated in the jig) without water would greatly enhance the thermal heating value of the clean coal product. The existing deshaling jig and associated equipment could be replaced with the technology demonstrated during the testing session.

The dry coal cleaning technology could also be used to pre-treat the feed to a conventional coal processing facility by removing high-ash rock. Removing high-ash rock from the feed will provide higher clean coal production for the same raw coal feed rate. The deshaling operation can be located at the mine site, which will eliminate transporting the rock and will reduce transportation costs. The deshaled reject material can be used for mine backfilling.
CHAPTER 5

MODELING OF DRY DENSITY-BASED SEPARATION METHOD

5.1 INTRODUCTION

Dry deshaling of coal is a density-based pneumatic separation process. The Eriez-manufactured deshaling air table generates a helical motion with air stratification and produces multiple products: clean coal, middlings, and refuse. The mixture of air and very fine coal particles from the feed acts as a medium when suspended by the combined action of shaking and vibration. The technology is economically beneficial to the coal industry and has tremendous scope for future work; therefore, a model of the process would greatly benefit coal producers and researchers. The separation efficiency of the process depends highly on many variables, such as the physical properties of the particles (e.g., shape and size) and the air table operating parameters, which include the horizontal and vertical angles of the table, the table vibration, and the main-fan speed (Honaker and Luttrell, 2007).

Because of the large number of particles involved, with randomly distributed sizes and shapes, it is impossible to completely describe this complex system from first-principle considerations. However, it may be possible to gain useful insight into the operational behavior of the process through the use of process engineering tools if some simplifying assumptions are made. The model developed in this study uses discrete element modeling to describe the separation behavior of the particles in the density-based dry deshaling process. The coupled modeling technique allows relaxation of simplifying assumptions related to the particle-phase tensor. Collisions are treated on a mechanical basis, which leads to a more realistic reproduction of
(Figure 5.1). In the case of multiple bodies, the approach depends on an efficient scheme for identifying contacts and tracking time.

The force-displacement law relates the relative displacement between two entities in contact to the contact force acting on the entities. The force-displacement law operates at a contact and can be described in terms of a contact point, x_i[C], lying on a contact plane that is defined by a unit normal vector in the ith time step, n_i. The contact point is within the interpenetration volume of the two entities. For ball-ball contact, the normal vector is directed along the line between the ball centers. For ball-wall contact, the normal vector is directed along the line defining the shortest distance between the ball center and the wall. The contact force is decomposed into a normal component acting in the direction of the normal vector and a shear component acting in the contact plane. The force-displacement law relates these two components of force to the corresponding components of the relative displacement via the normal and shear stiffnesses at the contact.

Figure 5.1 Notation used to describe ball-ball contact.
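The normal part of the force-displacement law described above can be sketched for ball-ball contact as follows. This is a minimal illustrative implementation, not PFC3D's actual contact logic (which also accumulates shear-force increments and handles slip); the stiffness value and ball positions in the example are hypothetical.

```python
import math

def normal_contact_force(c1, r1, c2, r2, kn):
    """Linear normal force for ball-ball contact.

    c1, c2: ball center coordinates (x, y, z); r1, r2: radii; kn: normal stiffness.
    The unit normal n_i is directed along the line between the ball centers, and
    the normal force magnitude is kn times the overlap (interpenetration depth).
    Returns (force magnitude, unit normal), or (0.0, None) when not in contact.
    """
    d = [b - a for a, b in zip(c1, c2)]
    dist = math.sqrt(sum(x * x for x in d))
    overlap = (r1 + r2) - dist
    if overlap <= 0:
        return 0.0, None  # no contact: no interpenetration
    n = [x / dist for x in d]  # unit normal from ball 1 toward ball 2
    return kn * overlap, n

# Two 25 mm balls whose centers are 0.045 m apart -> 0.005 m overlap
f, n = normal_contact_force((0.0, 0.0, 0.0), 0.025, (0.045, 0.0, 0.0), 0.025, 1e8)
print(round(f, 3))  # 500000.0  (1e8 N/m x 0.005 m overlap)
```

The shear component would be handled analogously, with the shear stiffness applied to the tangential relative displacement accumulated in the contact plane.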
important parameter that defines the migration of the particle upward or downward. If ρ_p > ρ_app, the particle will move downward, while if ρ_p < ρ_app, the particle will move upward.

5.3 DISCRETE ELEMENT MODELING

Numerical methods for computing the motion of objects are used in many research applications today. Some methods treat the problem as holistic, or a continuum. Other methods focus on many tiny pieces within the system in order to represent the system in a model. The discrete element method (DEM) is a numerical method for computing the motion of a large number of particles that represent a system. DEM was first applied by Cundall (1979) in order to address issues in rock mechanics. Cundall's other early studies focused more on the microscopic and macroscopic characteristics of many tiny discs. In one paper, the behavior of soil was modeled (Cundall, 1983) using a two-dimensional system that mainly determined material behavior after applying many different exterior boundary conditions. The program Cundall used during this time was titled BALL. This program appears to be much like the simpler 2D DEM programs used today.

Williams and Hocking in 1985 explained that DEM is more of a generalized finite element method. This is true in that DEM looks at the individual particles within the system and provides data for each individual sphere or circle. The finite element method breaks down objects or flows into many small spheres or circles, but only does this to relate to the system as a whole. The finite element method is more of a continuum model, while DEM treats each particle within the model as its own entity, or as many tiny finite elements (Williams et al., 1985).
While DEM uses spherical or round particles in its models, the method is actually capable of modeling particles with non-spherical shapes. This type of modeling is usually done
A PFC3D simulation is started by developing a model outline. When first developing a model, the schematic must be entered. This schematic includes any walls or particles that are necessary to represent the system to be modeled. Information can be entered into PFC3D either through a command line or through an input file coded for PFC. Once all particles and walls have been properly entered, the program runs a series of cycles. Each cycle performs the specific calculations stated by the programmer and allows the particles to interact accordingly. After each cycle is complete, the computer takes the new particle data and starts a new cycle. The cycling is then repeated until the simulation is complete. The requested information can then be collected for analysis (Itasca Consulting Group, 2004).

5.4 PFC3D SIMULATION DESIGN

The simulation process was developed in several steps, each step being described in the following sections.

5.4.1 Development of Geometry and Vibrating Bed

The entire geometry of the unit was designed in the PFC3D package. The general wall logic used in PFC3D illustrates how the force-displacement law is applied at contacts between balls and general walls. If a ball contacts the general wall, the contact force is updated using the law. In the present study, the contact detection procedure for the general wall logic is described for the case of a cylindrical wall.

The cylindrical vibratory bed structure (Figure 5.2) designed in PFC3D essentially mimics the perforated vibratory bed of an air table. The programming code was used to construct a virtual three-dimensional cylindrical structure with open faces at both ends. A fixed height of 3.0 m and a normal stiffness of 2.25 x 10^7 N/m were used for the wall. Variable diameters
Table 5.1 Ball properties for vibratory bed.

Ball Property                      Value
Density of each ball               4000 kg/m^3
Normal stiffness                   1 x 10^8 N/m
Shear stiffness                    1 x 10^8 N/m
Parallel bond radius               0.04 m
Parallel bond normal stiffness     1 x 10^4 N/m
Parallel bond shear stiffness      1 x 10^4 N/m
Parallel bond normal strength      1 x 10^2 N/m
Parallel bond shear strength       1 x 10^2 N/m

5.4.2 Generation of Particles

An assembly of 1200 spherical particles was generated inside the cylindrical vibratory bed region. The locations of the particles were chosen randomly inside the cylinder geometry so that they fall under the force of gravity onto the vibratory bed. Each spherical particle was assigned a unique ID number in the model. The particle properties are shown in Table 5.2. The types of particles were assigned in the code according to their density ranges, with each range assigned a specific color to visualize the separation in the model. Usually, the feed to a dry air table separator contains a large percentage of low-density coal mixed with high-density rock material, and the model simulations were run with a similar kind of feed. Mono-size particles of radius 25 mm (0.025 m) were considered in most of the model cases to simplify the simulations. The surfaces of the particles were considered smooth. Since preliminary tests showed that friction increases simulation time and also affects separation efficiency, a zero coefficient of friction was used to ease the simulation.
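The particle-generation step described above can be sketched as follows. The density classes and counts follow Table 5.2 and the 25 mm radius and 3.0 m cylinder height come from the text; the cylinder radius and the random seed are illustrative assumptions (the actual runs used variable diameters).

```python
import random

# Density classes from Table 5.2: (name, count, density in kg/m^3)
CLASSES = [("coal1", 500, 1200), ("coal2", 100, 1500),
           ("middlings", 100, 1800), ("rock", 500, 2400)]

def generate_particles(radius=0.025, cyl_radius=0.6, cyl_height=3.0, seed=1):
    """Place mono-size spheres at random positions inside the cylinder,
    tagging each with a unique ID and its density class."""
    rng = random.Random(seed)
    particles, next_id = [], 1
    for name, count, density in CLASSES:
        for _ in range(count):
            # Rejection-sample a point inside the circular cross-section
            while True:
                x = rng.uniform(-cyl_radius, cyl_radius)
                y = rng.uniform(-cyl_radius, cyl_radius)
                if x * x + y * y <= cyl_radius ** 2:
                    break
            z = rng.uniform(radius, cyl_height - radius)  # drop height above the bed
            particles.append({"id": next_id, "class": name,
                              "density": density, "pos": (x, y, z)})
            next_id += 1
    return particles

balls = generate_particles()
print(len(balls))  # 1200
```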
5.5 MODEL CALCULATIONS METHOD

In each model case, the simulations were run for up to 2.5 million cycles, where each cycle represents a fraction of a second in the real world. Results were obtained after every 500,000 cycles to study the process of segregation with time (Figure 5.4). In the process of obtaining the results, the location of each particle was recorded after every 500,000 cycles in a text (*.txt) file. Once completed, the particles were sorted and filtered according to their location in the Z direction along the bed height using Microsoft Excel 2007. The numbers of particles within each pre-defined height level were then counted according to their density ranges. From these values, partition curves were developed to study the efficiency of separation in the model. The pre-defined height levels in the model were intended to represent the six different splits obtained at the discharge end of the dry deshaling air table separator, as shown in Figure 5.3.

Table 5.2 Model simulation characteristics for spherical particles.

Type of particles     Color assigned   Number of    Density of         Normal            Shear
                      in code          particles    particle (kg/m^3)  stiffness (N/m)   stiffness (N/m)
Coal1_particles       Red              500          1200               10^6              10^6
Coal2_particles       Blue             100          1500               10^6              10^6
Middlings_particles   Green            100          1800               10^6              10^6
Rock_particles        Yellow           500          2400               10^6              10^6
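The sort-count post-processing described above amounts to a simple height-binning step; a sketch with six equal splits as in the text (the bed height and the toy particle data are hypothetical):

```python
def count_by_split(particles, bed_height=0.3, n_splits=6):
    """Bin particles into n_splits equal height levels (level 0 = bottom) and
    count each density class per level, mimicking the Excel sort-and-filter step.

    particles: list of dicts with keys "z" (height, m) and "density" (kg/m^3).
    Returns a list of {density: count} dicts, one per height level.
    """
    counts = [{} for _ in range(n_splits)]
    dz = bed_height / n_splits
    for p in particles:
        level = min(int(p["z"] / dz), n_splits - 1)  # clamp the topmost particle
        counts[level][p["density"]] = counts[level].get(p["density"], 0) + 1
    return counts

# Toy data: a rock particle near the bottom, a coal particle near the top
sample = [{"z": 0.02, "density": 2400}, {"z": 0.28, "density": 1200}]
levels = count_by_split(sample)
print(levels[0])  # {2400: 1}
print(levels[5])  # {1200: 1}
```

The per-level class counts are the raw material for the partition curves discussed in the next section.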
forces are implemented in the model. Similarly, partition curves were developed when only the buoyancy and gravity forces (Case 2), the drag and gravity forces (Case 3), and only the gravity force (Case 4) were applied. The Ep (Ecart Probable) values have also been calculated from these partition curves and compared with real-world data. Ep is defined as an index of separation, and its value gives an indication of the quantitative errors inherent in the process at a given separation density (Osborne, 1988). The lower the Ep value, the higher the separation efficiency. Mathematically, Ep is defined as half the difference between the densities corresponding to partition probabilities of 25% and 75%, i.e.:

Ep = (ρ25 − ρ75) / 2     [10]

Table 5.3 shows a comparison of the Ep values of real-world data for the dry air table and the All-Air Jig with the model values. Differences in how air enters the bed (i.e., constant air flow for the air table and pulsated air flow for the air jig) were not considered in the model. The Ep value obtained with the model simulation in Case 1 (when all forces are implemented) was close to the experimental values and would be expected to be even better if air-flow effects were included in the model.

Table 5.3 Ecart Probable (Ep) values for the partition curves based on different forces.

Case            Forces                     Ecart Probable (Ep)
Dry air table   ---                        0.17-0.25 (a)
All-Air Jig     ---                        0.29-0.33 (b)
Model-Case 1    Gravity, Buoyancy & Drag   0.30
Model-Case 2    Gravity & Buoyancy         0.35
Model-Case 3    Gravity & Drag             0.59
Model-Case 4    Gravity                    0.64

(a) (Honaker and Luttrell, 2007); (b) (Killmeyer and Deurbrouck, 1979; Kelley, 2002)
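Equation [10] can be evaluated numerically from a partition curve by interpolating the densities at the 75% and 25% partition probabilities. A sketch, using hypothetical curve points (the real curves come from the binned model output):

```python
def interp_density(densities, probs, target):
    """Linearly interpolate the density at a target partition probability.

    Assumes probs decrease monotonically with density (the probability of
    reporting to the clean product falls as particles get denser).
    """
    for i in range(len(probs) - 1):
        p1, p2 = probs[i], probs[i + 1]
        if p1 >= target >= p2:
            frac = (p1 - target) / (p1 - p2)
            return densities[i] + frac * (densities[i + 1] - densities[i])
    raise ValueError("target probability outside curve")

def ecart_probable(densities, probs):
    """Ep = (rho25 - rho75) / 2, per equation [10]."""
    rho75 = interp_density(densities, probs, 0.75)
    rho25 = interp_density(densities, probs, 0.25)
    return (rho25 - rho75) / 2

# Hypothetical partition data: probability of reporting to the clean product
rho = [1.3, 1.5, 1.7, 1.9, 2.1, 2.3]
prob = [0.98, 0.90, 0.70, 0.45, 0.20, 0.05]
print(round(ecart_probable(rho, prob), 3))  # 0.205
```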
Table 5.4 Variation in Ep with particle size based on model simulations and field tests.

Case                         Ecart Probable (Ep)
Dry air table                0.17-0.25 (a)
All-Air Jig                  0.29-0.33 (b)
Mono-size 2 inch             0.31
Mono-size 1 inch             0.31
Mono-size 1/2 inch           0.40
Mono-size 3/8 inch           0.58
Variable size 1-1/2 inches   0.35
Variable size 2-1 inches     0.38

(a) (Honaker and Luttrell, 2007); (b) (Killmeyer and Deurbrouck, 1979; Kelley, 2002)

5.6.3 Effect of Bed Depth

One of the important parameters that affects the separation efficiency is the bed height. Numerous simulations were conducted to show the effect of this variable. Since the number of particles used in the model was held constant, the bed height was increased by reducing the radius of the cylindrical geometry. To study the influence of bed height, the other simulation parameters (such as frequency, amplitude of velocity, and viscous damping) were kept constant at their optimum values in the model.

Figure 5.9 shows the partition curves obtained for simulation runs with different bed depths, ranging from 1.2 m to 0.12 m. The results show a decrease in separation efficiency when the bed height increased or decreased beyond a certain limit, which was defined for a particular condition only. Also, it was observed that the separation reached steady state after a lower number of cycles when the bed height was lower. The results shown in the plot are for a constant number of cycles in all cases.
Table 5.5 Variation in separation efficiency by changing the bed height.

Case               Ecart Probable (Ep)
Dry air table      0.17-0.25 (a)
All-Air Jig        0.29-0.33 (b)
Bed depth 1.2 m    >1
Bed depth 0.6 m    >1
Bed depth 0.35 m   0.40
Bed depth 0.32 m   0.37
Bed depth 0.28 m   0.27
Bed depth 0.2 m    0.32
Bed depth 0.16 m   0.37
Bed depth 0.12 m   0.19

(a) (Honaker and Luttrell, 2007); (b) (Killmeyer and Deurbrouck, 1979; Kelley, 2002)

Table 5.5 shows that Ep values greater than 1 were obtained for the higher bed depths (1.2 m and 0.6 m), which indicates that no separation occurred. The separation efficiency increased and reached its optimum value, then decreased again as the bed depth decreased further. At a bed depth of 0.12 m, the Ep value is much lower. However, the partition curve in Figure 5.9 shows considerable misplacement, or bypass, of low-density material into the high-density material for this condition.

5.6.4 Effect of Frequency and Amplitude of Velocity of Vibrating Bed

The model simulations were also run at variable amplitude of velocity and variable frequency of the vibratory table to study their effects on the density-based separation process. For variable amplitude of velocity, the frequency (17.25 Hz) and viscous damping (0.07) were kept constant. Similarly, for variable frequency, the amplitude of velocity (0.5 m/s) and viscous damping were kept constant.
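The variable-amplitude and variable-frequency runs above can be related through the bed's motion law. The sketch below assumes simple sinusoidal vibration (my assumption; the document does not state the exact waveform) and uses the base settings quoted in the text (A = 0.5 m/s, f = 17.25 Hz) to show how the velocity amplitude and frequency jointly set the displacement amplitude A / (2*pi*f):

```python
import math

def bed_motion(amp_velocity, freq_hz, t):
    """Sinusoidal bed vibration with velocity v(t) = A sin(2*pi*f*t).

    amp_velocity: amplitude of velocity A (m/s); freq_hz: frequency f (Hz).
    Returns (displacement, velocity) at time t, taking x(0) = 0, so the
    displacement amplitude is A / (2*pi*f).
    """
    w = 2 * math.pi * freq_hz  # angular frequency (rad/s)
    displacement = (amp_velocity / w) * (1 - math.cos(w * t))
    velocity = amp_velocity * math.sin(w * t)
    return displacement, velocity

# Base model settings: A = 0.5 m/s at f = 17.25 Hz
print(round(0.5 / (2 * math.pi * 17.25) * 1000, 2))  # displacement amplitude ≈ 4.61 mm
```

This makes explicit that, for a fixed velocity amplitude, raising the frequency shrinks the bed's stroke, which is one reason amplitude and frequency cannot be interpreted independently.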
the transportation of the material (or the retention time of the particles) on the dry deshaling air table. As the model geometry was designed for batch tests only, it can be expected that varying the frequency in the model will not show a significant effect on the separation efficiency. This statement is based on visual observations, and therefore further evidence is required for a decisive conclusion. Earlier research showed that the higher the frequency of the vibratory bed, the lower the separation efficiency of the process (Khan and Smalley, 1973). Also, the field test work conducted with the dry deshaling air table in the current study indicated that the frequency of the bed, in combination with the air flow through the perforated bed and the horizontal slope of the vibrating bed, has a synergistic effect on the separation efficiency (Honaker and Luttrell, 2007). As the other two parameters are not incorporated in the model, further investigation is needed to study the variation of frequency and its effect on the separation process.

Table 5.6 Comparison of Ep values at variable amplitude of velocity and variable frequency of the vibratory bed with the dry air table and All-Air Jig.

Variable amplitude of velocity         Variable frequency
Case            Ep                     Case            Ep
Dry air table   0.17-0.25 (a)          Dry air table   0.17-0.25 (a)
All-Air Jig     0.29-0.33 (b)          All-Air Jig     0.29-0.33 (b)
0.25 m/s        0.53                   5 Hz            0.30
0.5 m/s         0.26                   12.5 Hz         0.28
0.625 m/s       0.32                   14 Hz           0.28
0.75 m/s        0.33                   15.5 Hz         0.29
1.0 m/s         0.56                   17.25 Hz        0.26
1.5 m/s         0.58                   25 Hz           0.27

(a) (Honaker and Luttrell, 2007); (b) (Killmeyer and Deurbrouck, 1979; Kelley, 2002)
Figure 5.11 Partition curves showing variation in separation efficiency at different values of frequency of the vibratory bed (probability to product versus relative particle density for vibration frequencies of 5.0, 12.5, 14.0, 15.5, 17.25, and 25 Hz).

5.7 CONCLUSION

The mechanism of segregation of particles under the influence of vibration has been much studied, but it is not yet completely understood. It is generally agreed that segregation based on particle size and density is the most prevalent in the mining industry. In order to reduce the complexity of the segregation phenomena, a discrete element analysis was conducted to better understand the effect of some of the critical parameters in a density-based separation process under the influence of sinusoidal vibration. Particle Flow Code in three dimensions (PFC3D), a well-known discrete element modeling software package, was used to develop the model for the segregation process. The model was then used to study the separation efficiency of the process under a wide range of different operating conditions. The model results
were also compared to real-world separation data obtained from an Eriez air table and an All-Air Jig. Three important forces, i.e., the force of gravity, the force of buoyancy, and the drag force, were studied in the model. Partition curves were developed to study the separation efficiency of the segregation phenomena. The modeling work showed the importance of the buoyancy and drag forces in the process of segregation.

The model was used to investigate the effects of particle size and density on separation performance. The results obtained from the model simulations confirmed that coarser particles segregated with higher separation efficiency. Also, it was observed that denser particles segregated less than lighter particles under similar conditions. Important operating parameters, such as vibration frequency, amplitude of velocity, and bed height, were also studied using the model. The simulation results showed that the selective separation of mono-size particles was optimum only within a certain range of amplitude and bed height. Values lower or higher than this optimum range drastically reduced the separation efficiency of the process. The frequency of vibrations, on the other hand, did not have much influence on the separation efficiency. This result contradicts real-world data and, as such, further investigation is required to understand the effect of frequency in the model.

While the modeling exercise did not explain all the paradoxes and puzzles of segregation under vibration, it successfully provided a platform for further work to study the effects of some important parameters, such as particle shape, horizontal and vertical bed angles, and friction (rough particle surfaces), which were not considered in the development of the current simulation model.
CHAPTER 6

FINAL SUMMARY

The air table dry separator technology was evaluated at coal mining sites in India for the treatment of Run-of-Mine (ROM) coals. A 5 t/hr prototype pilot-scale deshaling unit was installed at three test sites, and detailed parametric studies were conducted to evaluate the separation performance for the coals at each site. The primary objective of the test program was to produce clean coals having qualities that meet market specifications while maximizing the amount of high-density rock rejected prior to transportation and processing and minimizing the loss of combustible material. Specifically, the work was performed to (i) establish the suitability of the dry deshaling technology for upgrading Indian coals with difficult washabilities, (ii) define the operational capabilities of the technology in terms of coal recovery and quality for typical Indian coals, and (iii) determine the economic viability of this approach for the coal markets that currently exist in India.

Tests conducted at the Aryan Energy facility in Talcher and the Bhushan complex in Jharsuguda successfully demonstrated that high-ash rock (>60% ash) could be readily rejected from the plus 6 mm ROM coals using the deshaling technology. At Aryan Energy, the technology successfully met the original project goals of recovering nearly 85-90% of the coal heating value while rejecting significant amounts of incombustible high-ash (>65% ash) rock in the plus 6 mm fraction. Similarly, at the Bhushan complex, the unit met the original project goals of recovering more than 90% of the coal heating value while rejecting significant amounts of incombustible high-ash (>70% ash) rock in the plus 6 mm fraction. The tests conducted at the Kargali Washery indicated similar results, with rejection of 15-25% high-ash material (>70% ash) and the loss of only 4-10% of combustible material in the plus 6 mm fraction. The tests
conducted on a semi-coking coal indicate the removal of 20-25% of high-ash rock (>40% ash) from the ROM coal with a 97% recovery of the combustible material in the plus 6 mm fraction.

As a result of all the on-site testing, it is evident that the air table deshaling technology offers major benefits for improving the quality of coal consumed for electrical power generation and steel manufacturing. Reductions in the amount of high-ash incombustible rock will cut transportation costs, improve utilization efficiency, lower greenhouse gas emissions, and reduce the release of unwanted particulates and elements of environmental concern to the atmosphere.

A discrete element model in three dimensions was developed using Particle Flow Code (PFC3D) to understand the process of segregation in dry density-based separators. Various important parameters were studied, including bed height, frequency of vibrations, amplitude of velocity of the vibratory bed, and particle size. Furthermore, the theoretical basis of the process was studied. The forces involved in the segregation phenomena, such as the gravitational, drag, and buoyancy forces, were also analyzed. In each case, partition curves were developed to study the separation efficiency of the model process. The simulation results show that the separation efficiency for selective mono-size spherical particles was optimum for a certain range of amplitude of velocity of the vibratory bed and bed height under specific conditions. Variation in the frequency of vibrations did not affect the separation efficiency. Furthermore, the results for different particle sizes confirmed that coarser particles have higher separation efficiency.
Although several important parameters of the segregation process have been studied in the model, future work will require further investigation by incorporating additional parameters and conditions in the model, such as particle shape, air flow through the perforated vibratory bed, rough surfaces (friction), and the horizontal slope of the vibratory bed.
Real-Time Spatial Monitoring of Vehicle Vibration Data as a Model for TeleGeoMonitoring Systems Jeff Robidoux (ABSTRACT) This research presents the development and proof of concept of a TeleGeoMonitoring (TGM) system for spatially monitoring and analyzing, in real-time, data derived from vehicle-mounted sensors. In response to the concern for vibration related injuries experienced by equipment operators in surface mining and construction operations, the prototype TGM system focuses on spatially monitoring vehicle vibration in real-time. The TGM vibration system consists of 3 components: (1) Data Acquisition Component, (2) Data Transfer Component, and (3) Data Analysis Component. A GPS receiver, laptop PC, data acquisition hardware, triaxial accelerometer, and client software make up the Data Acquisition Component. The Data Transfer Component consists of a wireless data network and a data server. The Data Analysis Component provides tools to the end user for spatially monitoring and analyzing vehicle vibration data in real-time via the web or GIS workstations. Functionality of the prototype TGM system was successfully demonstrated in both lab and field tests. The TGM vibration system presented in this research demonstrates the potential for TGM systems as a tool for research and management projects, which aim to spatially monitor and analyze data derived from mobile sensors in real-time.
1. Introduction

1.1. Background and Motivation

The initial goal of this research was to address the concern of vibration related injuries experienced by equipment operators in construction and surface mining operations by developing a TeleGeoMonitoring (TGM) system to spatially monitor vehicle vibration data in real-time. As development of the TGM system progressed, the goals of the research expanded. In addition to the focus on equipment vibration, the goals of this research evolved to include the development and demonstration of the TGM vibration system as a prototype model for a TGM system used to monitor and analyze any type of vehicle-mounted or mobile sensor in real-time. For example, the TGM system presented in this research could be used to spatially monitor and analyze tire pressure, dust, noise, speed, etc., or any combination thereof, by adding the proper sensors and modifying the client software. The remainder of this subchapter discusses motivations concerning vibration related injuries (whole body vibration and jarring and jolting) as well as the potential for highly customizable TGM systems for real-time spatial monitoring, management, and analysis of sensor-based data.

Whole body vibration (WBV) and jarring and jolting (JAJ) are significant sources of injury for mobile equipment operators in surface and underground mining operations (Biggs and Miller 2000; Cross and Walters 1994; Wiehagen et al. 2001). Literature relevant to this thesis discusses the effects of vibration from several different points of view. Retrospective statistical studies identify JAJ and WBV as major sources of operator injury and express the need to remedy this problem (Cross and Walters 1994;
Wiehagen et al. 2001). Studies from biological mechanics disciplines have used human body models to study the effects of WBV and JAJ (Huang and Huston 2001). Others have introduced and tested engineering mechanisms to reduce the adverse effects of vibration on equipment operators (Mayton et al. 1997; Mayton et al. 1998). Vibration studies at surface mines also find motivation from an economic standpoint. Thompson, Visser, et al. demonstrate the potential savings in operating and maintenance costs which can be achieved by using a real-time management system that relies on road vibration signature analysis (Thompson et al. 2003). Regardless of the approach to vibration studies, research indicates that vibration related injuries are a significant risk for mobile equipment operators. As long as a risk exists, there is room for improvement in the systems and mechanisms available to reduce operator injury resulting from vibration.

GIS technology is used widely in other industries, but is under-utilized in the mining industry (Dillon and Blackwell 2003). Advances in mobile computing, GIS, and wireless networks have made real-time GIS a feasible, if not logical, solution for systems that aim to monitor, manage, analyze, and display spatial information in real-time. Development of customized TGM systems could allow for highly specified real-time spatial analysis for research applications, and could lead to customized overall mine management systems. Customized TGM systems reduce reliance on proprietary systems and can provide tailored solutions using readily available software and hardware. At this time, there has been no development of a customizable real-time GIS for monitoring, analyzing, and displaying data from vehicle-mounted sensors in surface mines.
A system used to monitor and display vehicle-mounted accelerometer data in real-time was developed and tested for this research to demonstrate the use of real-time GIS as a tool for further study and reduction of vibration related injuries. The system serves as a prototype model for real-time monitoring, analyzing, and displaying of any data derived from mobile sensors. Additional sensors can be added to incorporate any data that can be associated with a geographic location (i.e. dust, noise, tire pressure, etc.).

1.2. GIS Overview

The history of Geographic Information Systems (GIS) dates back to the mid-1960s when the Canadian government developed the Canada Geographic Information System to explore Canada’s land resources and uses (Longley et al. 2001). Longley, Goodchild, et al. note that in 2000 there were over one million core GIS users and possibly five million casual users. Definitions of a GIS vary among users and disciplines. In this research, a GIS is defined as a computer or group of networked computers used to store, analyze, and display data associated with spatial coordinates. Real-time GIS will refer to a GIS that is automatically updated via external hardware or networked computers. In this research, the external hardware and networked computers are mobile. The term real-time GIS is often used in conjunction with the terms TeleGeoProcessing (TGP) and TeleGeoMonitoring (TGM). As outlined by Laurini, Servigne, et al. (2001), TGP can be taken as the integration of GIS and telecommunication systems. TGM extends TGP with the addition of a positioning system such as GPS.
Table 1.1 summarizes characteristics of TGM, TGP, and GIS systems. This nomenclature referring to the union of GIS and telecommunications is used throughout this thesis, and the system described herein can be considered a TGM system used to monitor vehicle derived vibration data.

Table 1.1 Layers of organization between TeleGeoProcessing and TeleGeoMonitoring (Laurini et al. 2001)

3. TeleGeoMonitoring: Decision Support System & RTDSS; Real Time Database; Real Time Acquisition Process; Real Time Transmission Process; Real Time Visualization/Mapping
2. TeleGeoProcessing: Telecommunications; Federated databases; System engineering
1. GIS: Information system; Spatial databases; Mapping

Regarding the term real-time, it should be noted that with systems sending data via wireless networks there will be a measurable lag between transmission and receipt. This is particularly true of systems that require significant data processing prior to transmission. In this paper, the term real-time will refer to transactions taking up to several seconds. The term near real-time will be used to describe systems taking several minutes for data transfer.

1.3. GPS Overview

The Global Positioning System is a navigation system based on satellite radio signals, which can provide a four-dimensional navigation solution (X, Y, Z, & Time).
An in-depth technical description of the GPS is beyond the scope of this paper, but a note regarding the accuracy of the navigation solution is included as it relates to the future of this research project. Each GPS satellite transmits information on two frequencies called L1 and L2, which are modulated with pseudorandom noise (PRN) codes unique to each satellite. One of the PRN codes, called the Coarse Acquisition (C/A) code, modulates only the L1 frequency, whereas the PRN code called the Precise (P) code modulates both the L1 and L2 frequencies. However, the P code is encrypted by the Department of Defense (DoD) so that civilian users cannot use it. This translates into a navigation solution with a positional error of around 10 meters for civilian users as opposed to 0.5 centimeters if the P code were used on both L1 and L2.

The goal of this research is to prove the concept of using a TGM system to monitor vehicle data derived from mobile sensors. High GPS accuracy is unnecessary at this point in the research. However, for future extensions of this research accurate GPS location data may be crucial. To significantly increase the navigation solution accuracy, Differential GPS (DGPS) or Real-Time Kinematic GPS (RTK-GPS) can be used. Both systems rely on a stationary receiver of known location, which broadcasts timing corrections to mobile receivers. Sub-meter accuracy can be achieved with DGPS and centimeter accuracy can be achieved with RTK-GPS. Refer to Chapter 5 for GPS related recommendations for the future of the TGM system.

1.4. Wireless Network Overview

The origin of wireless computer networking can be traced back to 1968, when development of the ALOHANET began at the University of Hawaii (Abramson 1985).
Abramson’s ALOHA system was developed to share computer information between the university’s main campus and satellite campuses located on other islands using radio communications. After ALOHANET, wireless technology continued to improve for both voice and data communication. Currently, there are a significant number of wireless standards. However, the TGM system in this research was tested on only 3 wireless network types. Two specifications of the 802.11 standard were used in the lab setting and a Code-Division Multiple Access (CDMA) network was used in the field tests of the TGM system.

802.11, developed by a member group of the Institute of Electrical and Electronics Engineers (IEEE), refers to a group of specifications for wireless local area networks (IEEE 2005). The two specifications used by the TGM system described in this thesis are 802.11b and 802.11g. Both 802.11b and 802.11g operate in the 2.4 GHz band and can operate at speeds of up to 11 and 54 Mbps, respectively (Gast 2002). Hardware (cards, routers, repeaters) to set up an 802.11 network is readily available and easy to configure.

CDMA is a digital communication technique with U.S. military roots dating back to the 1940s (CDG 2005). It was not until the early 1990s that CDMA cellular techniques were used for civilian communication. Currently, the majority of CDMA use is for voice communication; however, CDMA as a means of data transfer is increasing. Several major communications companies in the U.S. and abroad offer access to CDMA networks via commonly available cell phones, modems, and computer devices.
1.5. Research Objectives and Approach

The goal of this research is to demonstrate the use of GIS, GPS, and wireless networks to monitor, analyze, and display data from vehicle mounted sensors in real-time. The prototype system developed for this research focuses on monitoring vibration experienced by vehicle operators. In addition to developing a tool for study of vehicle vibration as it relates to operator safety, this research aims to demonstrate the potential of TGM systems as customizable tools for a variety of surface mine applications not necessarily related to vibration. The prototype system was divided into three components: (1) Data Acquisition Component, (2) Data Transfer Component, and (3) Data Analysis Component. After development of the system, it was tested both in a lab setting and in a mobile environment. The goal of testing the system was to verify that GPS and vibration data were successfully acquired by the mobile client, sent to the server, and properly displayed via a web interface. Although network lag from the client to the server was either unnoticeable or reasonably small, it should be noted that data transfer time was not considered a parameter for testing the system. It should also be noted that the signal to noise ratio for the vibration data acquired from the accelerometer needs improvement if the TGM system is to be used outside the testing environment. Again, this research aimed to develop a TGM system for spatially monitoring vibration data derived from a vehicle-mounted accelerometer, and to verify the system’s functionality. Suggestions regarding the signal to noise ratio and other recommendations for future work are included in Chapter 5.
2. Literature Review

The literature discussed in this chapter reflects a wide range of disciplines, and most of it does not explicitly discuss real-time vibration monitoring of surface mining equipment. However, there are many similarities between the systems discussed in the included literature and the prototype system presented in this thesis. The research discussed in Chapter 2.1 involves systems that monitor, track, or analyze spatial data from either static or dynamic sources in real-time. Chapter 2.2 discusses proprietary systems relevant to this research. Research related to vibration is included in Chapter 2.3.

2.1. Real-Time TGM Systems

2.1.1. Emergency Response and Management Systems

Integrating GIS, GPS, and GSM (a specification for cellular communication used worldwide) technologies, Derekenaris, Garofalakis, et al. (2001) discuss a comprehensive ambulance management system, which was proposed as a solution to the emergency and ambulance management problem in the prefecture of Attica in Greece. The system was designed to replace ambulance routing and districting based on paper maps and personal experience. The design allows for GPS derived ambulance location, sensor data, and voice messages to be sent via the GSM network to a communication server. Data concerning previous incidents, hospital capabilities, available equipment and expertise for each ambulance, and regularly updated traffic information would be available to the GIS in a database server to allow for criteria based decisions. Ultimately, the ambulance management system will track and display ambulance locations, assign ambulances to certain areas, find incident locations based on addresses given during emergency calls,
particular interest to the future of this research, where accurate but small packets of vibration information are crucial. The “G3” surveillance system was proposed by Lin et al. (2003) to track vehicles and low flying aircraft on a digital map in real-time. The G3 system makes use of GPS, GIS, and General Packet Radio Service (GPRS), which is a wireless data communication technique based on GSM technology. The client part of the system relies on custom hardware and software to receive GPS messages and to relay them to the server via a GPRS network. Testing of the G3 system included verification of transmitted data for different transmission periods (3 to 120 seconds) and transfer time tests (average: 750 milliseconds).

In an effort to reduce haul truck run-over accidents and dumpsite-edge or ramp related roll-overs, Nieto and Dagdelen (2003) developed and tested “VirtualMine”. VirtualMine is a server independent system based on GPS, wireless networks, and virtual boundaries defined by vehicle positions and edges. Running on an in-cab tablet PC, VirtualMine provides a three dimensional GUI based on the Virtual Reality Modeling Language (VRML). The vehicle operator can monitor his or her position in real-time in relation to other vehicles equipped with VirtualMine. Nieto and Dagdelen envision an automatic warning system based on the overlapping of vehicle “safety spheres”, which describe a user defined boundary around a vehicle. In addition to proximity warning, VirtualMine aims to allow for on-demand contour modeling by making use of logged vehicle positions. VirtualMine was tested in a survey field and an operational surface mine with regard to GPS accuracy, proximity, and precision.
Although map delivery and personal tracking systems may have different motivations and application goals when compared to the TGM system presented in this paper, they rely on similar technology and processes. Casademont, Lopez-Aguilera, et al. (2004) present a system by which sections of maps can be wirelessly downloaded on a contingency basis to Personal Digital Assistants (PDA). Using PDAs equipped with GPS receivers, users can track their location in real-time. This client-server system was tested successfully using several different wireless protocols. Of particular interest to the future of the prototype system in this paper is the way Casademont, Lopez-Aguilera, et al. handle differential GPS corrections. If a more accurate GPS-derived location is required, a user can send a request via Short Message Service (SMS) to the server and receive an SMS message with the appropriate GPS correction.

A system proposed by Karimi and Krishnamurthy (2001) focusing on the use of GIS and GPS for real-time routing in Mobile Ad hoc Networks (MANET) is included here for its suggested use of a real-time GIS and MANETs. A MANET is a dynamically self-organizing wireless network that lacks fixed infrastructure and is composed of mobile nodes (Conti 2003; Toh 2002). The network will repair itself when nodes move out of range or fail and can therefore be considered self-healing. Karimi and Krishnamurthy outline the potential use of a GIS as part of a packet routing strategy, which considers node location, direction of movement, terrain and other factors. Integration of a GIS with a MANET can be described as a hybrid MANET because it relies on some fixed infrastructure (i.e. GIS workstation and server). Using stored terrain data and dynamic
Figure 2.7 Web display of the dynamic traffic map, traffic video, and information of the proposed system (Yeh et al. 2003) (Copyright Pion Ltd. 2003)

Systems not concerned with real-time monitoring of remote sensor derived data may be getting further away from the scope of the TGM system described in this thesis, but still include tangible concepts. Built with a localized 802.11b network, several GPS-equipped PDAs, and a central GIS server, a system was used to test the feasibility of using the aforementioned technology for environmental monitoring and management (Tsou 2004). Equipped with mobile PDAs, the users surveyed a park region for changes in plant life such as new locations of invasive plants. Updated shape files were then transferred to the GIS server via the wireless network using File Transfer Protocol (FTP). Another system used to wirelessly update a GIS was used at a construction site in Tokyo, Japan to map, in real-time, an underground water pipe (Arai et al. 2003). Using RTK-GPS and a data cell phone, pipe coordinates were sent to the GIS server in real-time.
2.2. Proprietary Mine Tracking Management Systems

Caterpillar’s MineStar system includes a series of components geared towards maximization of production and profit. Components of the system provide fleet scheduling, production management, equipment health information, material and equipment tracking, mine planning, drilling management, in-cab earthmoving planning, and overall system monitoring. The Health component of the MineStar system uses a Vital Information Management System (VIMS), which acquires equipment health information and makes it available wirelessly in real-time (Caterpillar 2004). The goal of the MineStar Health system is to allow for remote analysis of equipment health and to aid in preventative maintenance (Jarosz and Finlayson 2003). Although VIMS data may be accessed via proprietary software as part of the MineStar system, it may be possible to access VIMS data independently for customized purposes as suggested by Thompson et al. (2003). Integration of original equipment manufacturers’ (OEM) VIMS with the TGM system described in this paper could be a powerful tool for customizable real-time spatial analysis and overall mine management using real-time GIS.

Similar to MineStar, Wenco’s Mine Management System (MMS) uses equipment-mounted sensors to monitor and manage production in real-time (Wenco 2004). The system provides capabilities comparable to MineStar such as fleet scheduling, drilling management, etc. (Jarosz and Finlayson 2003). Again, the purpose of the production and efficiency oriented Wenco system differs from that of the TGM system in this research, but the system architecture is similar.
A third proprietary mine management system that includes remote monitoring of vehicle mounted sensors is Modular’s IntelliMine (Modular 2004). IntelliMine has essentially the same capabilities as MineStar and Wenco’s MMS. The components of IntelliMine dealing with remote analysis of equipment health are called MineCare and Dispatch. MineCare and Dispatch allow for real-time remote monitoring and management of equipment health for preventative maintenance.

2.3. Vibration at Surface Mines

The motivation for this research includes the development of a prototype system to spatially monitor vibration experienced by equipment operators in real-time as a model for TGM systems used to spatially monitor data derived from any vehicle mounted sensor. This is in response to a number of studies concerning vibration related injuries.

2.3.1. Vibration Related Injuries

Serious injuries resulting from JAJ remain a significant issue for mobile equipment operators. Biggs and Miller note that over a 9 year period starting in 1986, truck drivers accounted for nearly 63 percent of the fatalities and 60 percent of the lost-time injuries for truck haulage in surface mines (Biggs and Miller 2000). They further mention that “very little research has been done related specifically to shock trauma injuries, that is, jolting and jarring injuries caused by surface mine haulage trucks, nor to the development of engineering controls to reduce such injuries”. Table 2.1 summarizes injuries by type for dozer operators based on a NIOSH study using MSHA injury data from January 1988 to December 1997 (Wiehagen et al. 2001). Jarring
and jolting related injuries account for 70 percent (597 of 855) of the serious injuries and 39 percent (7 of 18) of the fatalities.

Table 2.1 While operating a dozer: serious injuries and lost workdays by operator impact (fatalities in parentheses) (Wiehagen et al. 2001)

Operator Impact | Serious Injuries | % | Days Lost | Average days lost¹
Jolted/jarred | 436 | 50 | 17,630 | 40.4
Jolted/jarred - struck against | 142 | 16 | 5,388 | 37.9
Jolted/jarred - landed outside cab | 26 (7) | 3 | 906 | 47.7
Musculoskeletal injury (MSI) | 155 | 18 | 5,656 | 36.5
Struck by object | 78 | 9 | 1,733 | 22.2
Burned/contact with a hot object | 10 (1) | 1 | 324 | 36
Asphyxiated | 4 (4) | <1 | — | —
Drowned | 3 (3) | <1 | — | —
Crushed/run over by dozer | 3 (3) | <1 | — | —
Other | 16 | 2 | 249 | 15.6
Total | 873 (18) | 100 | 31,886 | —
¹For the 855 lost-time injuries

A similar study by Cross and Walters (1994) examined compensation claims in the mining industry of South Wales, Australia, over the course of 4 years. Of 28,306 claims, 8,961 related to the head, back, and neck. Of the 8,961 claims, 11% (986) were attributed to vehicular jarring. 53% of the vehicular jarring injuries described in the claims involved underground transporters and shuttle cars. Vehicular jarring injuries while operating surface loaders, dozers, and dump trucks made up 29% of the claims. Figure 2.8 shows a comparison of causes of head, neck, and back injuries for equipment operators as indicated by the compensation claims analyzed by Cross and Walters. Jarring accounts for 36 percent of the injuries.
[Pie chart: head, neck and back injuries for equipment operators. Vehicular Jarring 36% (191), Overexertion 36% (191), Slips and Falls 24% (124), Other 4% (19).]

Figure 2.8 All head, neck and back injuries (Chart was recreated for inclusion here) (Cross and Walters 1994) (Copyright Elsevier 1994)

In addition to retrospective analysis of injury claims relating to JAJ, studies have been conducted to investigate such injuries from mechanical, simulative, and responsive standpoints. Wilder et al. (1996) reference many articles concerning whole body vibration and jolting and go on to discuss the mechanics of the lower back and suggest possible solutions to vibration and jolt injuries. The proposed solutions include operator seat and cab modifications to isolate the operator from or to reduce his or her exposure to jolting and vibration. Using a 17 member human-body model, Huang and Huston (2001) studied the model’s response to WBV and JAJ. They compare their findings to experimental data and conclude that multi-member models can be effectively used for studying WBV and JAJ. It is worth noting that when compared to accident victim simulation data Huang and Huston mention that “there is relatively little data available for WBV [whole body vibration] and JAJ [jolting and jarring]”. A study by Mayton et al. (1998) identified jarring and jolting as a serious problem for low-coal shuttle car operators.
3. System Design and Development

3.1. System Overview

The TGM system developed for this research can be divided into 3 components as shown in Figure 3.1: (1) Data Acquisition Component, (2) Data Transfer Component, and (3) Data Analysis Component. The Data Acquisition Component is responsible for acquiring GPS and vibration data, establishing and maintaining a connection to the server, and sending data messages to the server. Responsibilities of the Data Transfer Component include managing incoming data messages and delivering the data as requested by the end user. The Data Analysis Component consists of the tools provided to the end user to monitor and analyze the data stream in real-time. A simplified flow sheet outlining the functions of each component is shown in Figure 3.2. It should be noted that the flow sheet represents the action of acquiring only one set of GPS coordinates with corresponding vibration data and sending it to the server for distribution. While running the TGM system, each component can be thought of as a separate continuous loop whose state is dictated by user commands.
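The single acquire-and-send pass described above can be sketched as a small function. This is a hypothetical Python reconstruction for illustration only; the thesis client was built with different tooling, and the callable names below are invented.

```python
def acquisition_cycle(read_rmc, read_buffer, send):
    """One pass of the Data Acquisition loop: read the latest GPS fix,
    sample the vibration buffer, reduce it to its maximum, and ship a
    single message to the server. The three callables stand in for the
    GPS receiver, DAQ card, and network connection, so the sketch runs
    without hardware."""
    sentence = read_rmc()        # latest NMEA RMC sentence from the receiver
    samples = read_buffer()      # z-axis acceleration samples from the DAQ
    peak = max(samples)          # only the maximum vibration value is sent
    send({"sentence": sentence, "max_vibration": peak})
    return peak

# Stand-in hardware for demonstration
sent = []
peak = acquisition_cycle(
    read_rmc=lambda: "$GPRMC,123519,A,4807.038,N,01131.000,E,"
                     "022.4,084.4,230394,003.1,W*6A",
    read_buffer=lambda: [0.12, 0.85, 0.31],
    send=sent.append,
)
print(peak)  # 0.85
```

In the real system each component runs continuously; the sketch isolates one iteration so the data flow from sensors to server message is visible.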
3.2. Data Acquisition Component

The Data Acquisition Component consists of a mobile PC, data acquisition hardware, GPS receiver, accelerometer, wireless networking hardware, and custom software. Details on the component hardware are shown in Table 3.1.

Table 3.1 Hardware summary for Data Acquisition Component

Laptop: Sony Vaio (PCG-GR270P); Processor: Intel Pentium III (750 MHz); RAM: 512 MB; Operating System: Windows XP Pro
GPS Receiver: Rayming TripNav (TN-200); Interface: USB (Virtual COM)
Data Acquisition: National Instruments DAQCard-6024E; Channels: 16 single-ended, 8 differential; Sample Rate: 200 kS/s; Interface: PCMCIA; Connector Block: BNC Block (NI BNC-2110)
Accelerometer: PCB Triaxial ICP Accelerometer (356B40); Sensitivity (±10%): 100 mV/g; Measurement Range: ±10 g pk
Network Adapters: T1 Integrated Ethernet Adapter; 802.11b/g PCMCIA Card (Linksys WPC54GS); CDMA (Sprint Network) Samsung Cell (VGA-1000)

Figure 3.3 shows connection types used in the Data Acquisition Component and Figure 3.4 shows a picture of the hardware. The laptop, powered through a cigarette lighter adapter, runs the client software, which manages all incoming and outgoing data. Towards the bottom right of Figure 3.4 is the connector block used for connecting to the accelerometer, which is the disc-like part located above the box. The connector block is
types and stores only the desired information. In this case, the Recommended Minimum Specific GPS/TRANSIT Data (RMC) data-type is used because the TGM system requires only location and time information from the GPS receiver. The following is an example of an RMC formatted NMEA sentence:

$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A (DePriest 2005)

This NMEA sentence is broken down and explained in Table 3.2.

Table 3.2 Explanation of RMC data type NMEA sentence (DePriest 2005)

RMC: Recommended Minimum sentence C
123519: Fix taken at 12:35:19 UTC
A: Status (A=active or V=void)
4807.038,N: Latitude 48 deg 07.038' N
01131.000,E: Longitude 11 deg 31.000' E
022.4: Speed over the ground in knots
084.4: Track angle in degrees True
230394: Date, 23rd of March 1994
003.1,W: Magnetic Variation
*6A: The checksum data, always begins with *

After retrieving the RMC formatted string, the client software parses the string into the following substrings: time of fix, status, latitude, latitude direction, longitude, longitude direction, and speed. The time of fix (hhmmss.sss) and system date are then formatted into a timestamp format (MM/DD/YYYY hh:mm:ss) to be included in the outgoing eXtensible Markup Language (XML) formatted message to the server. XML is a standard created by the World Wide Web Consortium (W3C) (2005) for describing data with user-defined tags. Latitude and longitude are received in the formats ddmm.mmmm and dddmm.mmmm, respectively. In other words, latitude and longitude are received as a string of degrees, minutes, and decimal minutes. In order to use these coordinates in the GIS environment they are converted to the format of decimal degrees (dd.dddddd). The
sign of the coordinates is assigned based on the latitude and longitude direction. North (N) and East (E) are positive and South (S) and West (W) are negative. For example, take the received latitude as 3713.8147 N. Since latitude is in the format ddmm.mmmm, the first 2 characters are taken as degrees. Reading the remainder of the string from an offset of 2 gives 13.8147 (mm.mmmm). The remaining string, 13.8147, is then divided by 60 to yield decimal degrees (13.8147/60 = 0.230245). Degrees are then added to decimal degrees to give 37.230245. Since the direction is N, the sign is positive. This procedure is also used to convert the longitude coordinate with the exception that the first 3 characters are taken as the degrees (dddmm.mmmm) instead of the first 2. After reformatting the incoming latitude and longitude coordinates, they are included in the current outgoing XML message in the format longitude, latitude, corresponding to x,y.

The accelerometer used to measure vibration is triaxial, where each direction of motion (x, y, and z) has its own channel over which an analog voltage is sent. However, the prototype system in this research is concerned only with vertical motion (z). Using the pull-down menus on the client GUI, the user specifies the channel corresponding to motion in the z direction in addition to minimum and maximum values, sample rate, and number of samples. The sample rate refers to the rate at which the data acquisition card samples the analog input signal. Samples to Read dictates the number of samples to read from the buffer and affects the smoothness of the waveform representation of the vibration data. The signal from the accelerometer is continuously sampled so the buffer is continually refreshed. Every 2 to 3 seconds, an array of vibration data from the buffer is used to calculate minimum, maximum, and average vibration. Although the client GUI
displays all current vibration calculations, only the maximum vibration value is included in the XML message to the server. Although the vibration section of the Data Acquisition Component stands to improve in the future of this research, it is adequate for the purpose of proving the concept of the TGM vibration system. When the GPS coordinates, vibration maximum, and timestamp for a given moment are included in the XML message, the message is sent to the server. An example of an outgoing XML message is as follows:

<MESSAGE ID=Client_V1>
<FIELD>Vehicle_1</FIELD>
<FIELD>-80.421873,37.230253</FIELD>
<FIELD>1/31/2005 2:39:27 PM</FIELD>
<FIELD>0.8564123</FIELD>
</MESSAGE>

In the first line, Client_V1 tells the server to which message definition this message conforms. Vehicle_1 identifies the vehicle running the client software. If the Data Acquisition System were installed on additional vehicles, the second line would be changed to Vehicle_2, Vehicle_3, and so on for each vehicle. No other adjustments to any part of the TGM system would be required to accommodate additional vehicles. The third, fourth, and fifth lines are vehicle location, timestamp, and vibration, respectively. The final line, </MESSAGE>, marks the end of the XML message. More on the format of the XML message and the server-side message definition are discussed in Chapter 3.3.
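The client-side pipeline described in this chapter (parsing the RMC sentence, converting coordinates to decimal degrees, and assembling the outgoing message) can be sketched as follows. This is a hypothetical Python reconstruction; the thesis client was not written in Python, the function names are invented, and the example sentence is constructed for illustration (its checksum is not recomputed).

```python
def parse_rmc(sentence: str) -> dict:
    """Split an RMC sentence into the substrings the client keeps:
    time of fix, status, latitude/longitude with directions, and speed.
    Field positions follow Table 3.2; checksum verification is omitted."""
    parts = sentence.split(",")
    if parts[0] != "$GPRMC":
        raise ValueError("not an RMC sentence")
    return {"time_of_fix": parts[1], "status": parts[2],
            "lat": parts[3], "lat_dir": parts[4],
            "lon": parts[5], "lon_dir": parts[6],
            "speed_knots": parts[7]}

def nmea_to_decimal_degrees(value: str, direction: str) -> float:
    """Convert ddmm.mmmm (latitude) or dddmm.mmmm (longitude) to signed
    decimal degrees: split degrees from minutes, divide minutes by 60,
    and negate for S and W, as described in the text."""
    deg_len = value.index(".") - 2   # 2 degree chars for latitude, 3 for longitude
    decimal = int(value[:deg_len]) + float(value[deg_len:]) / 60.0
    return round(-decimal if direction in ("S", "W") else decimal, 6)

def build_message(vehicle: str, lon: float, lat: float,
                  timestamp: str, vibration: float) -> str:
    """Assemble the outgoing message in the field order the server-side
    message definition expects: vehicle, location (x,y), timestamp,
    vibration. Mirrors the format shown in the example above."""
    fields = [vehicle, f"{lon},{lat}", timestamp, str(vibration)]
    body = "".join(f"<FIELD>{f}</FIELD>" for f in fields)
    return f"<MESSAGE ID=Client_V1>{body}</MESSAGE>"

rmc = parse_rmc("$GPRMC,143927,A,3713.8147,N,08025.3124,W,"
                "000.0,084.4,310105,003.1,W*6A")
lat = nmea_to_decimal_degrees(rmc["lat"], rmc["lat_dir"])   # 37.230245
lon = nmea_to_decimal_degrees(rmc["lon"], rmc["lon_dir"])   # -80.421873
msg = build_message("Vehicle_1", lon, lat, "1/31/2005 2:39:27 PM", 0.8564123)
```

The latitude conversion reproduces the worked example in the text (3713.8147 N yields 37.230245), and the longitude result matches the Blacksburg-area coordinate in the example message.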
3.3. Data Transfer Component

The Data Transfer Component of the TGM system includes a server and network. Details of the Data Transfer Component’s hardware are shown in Table 3.3.

Table 3.3 Hardware Summary for Data Transfer Component

Server: Dell GX110; Processor: Pentium III 600 MHz; RAM: 512 MB; Operating System: Windows 2000 Server
Software: ESRI Tracking Server Pre-Release*; Servlet Container: Apache Tomcat 5.0; Java Platform: Sun J2SE 1.4.2; HTTP Server: Apache HTTP Server 2.0.52
Lab Setting Network #1: T1; Interface: Integrated Ethernet Card; Router: Linksys WRT54GS
Lab Setting Network #2: 802.11b; Routers/Repeaters: Virginia Tech’s Integrated Wireless Net
Lab Setting Network #3: 802.11g; Router: Linksys WRT54GS
Field Setting Network #1: Sprint CDMA Network; Phone: Samsung Cell (VGA-1000)

*(Tracking Server is a product of Northrop Grumman Corp. Copyright 2005. All rights reserved.)

3.3.1. Server Section

ESRI’s Tracking Server Pre-Release software is the main part of the Data Transfer Component (Tracking Server is a product of Northrop Grumman Corp. Copyright 2005. All rights reserved). It is a tool for accepting real-time data and transferring them to web clients or GIS workstations running Tracking Analyst (ESRI 2005). As it functions in the
Data Transfer Component, Tracking Server accepts XML messages from the client software in the Data Acquisition Component. The incoming XML messages are compared to user-defined message definitions, which indicate the data type for each field of a message. In other words, the message definition tells Tracking Server what to expect in each field of an incoming XML message that conforms to the message definition. Multiple message definitions can be used to allow different formats for incoming messages. However, the TGM system in this research uses only one message definition. After associating an incoming XML message with a message definition, conditional actions may be applied to the data in the message. For example, if the value of a certain field in a single message meets particular criteria, then the message can be tagged in order to alter the symbology associated with the message, or filtered for exclusion. Once applicable actions are applied to an incoming message, the data may be distributed to the Tracking Server Web Viewer or Tracking Analyst. Figure 3.9 shows a simplified flow sheet for the processes in the Data Transfer Component.
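The matching of an incoming message against a message definition can be illustrated with a short sketch. This is not Tracking Server's actual implementation (which the thesis does not document); the definition structure and field names below are invented for illustration, and the ID attribute is quoted here so the fragment is well-formed for a standards-compliant XML parser.

```python
import xml.etree.ElementTree as ET

# Hypothetical message definition: ordered (name, type) pairs declaring
# what to expect in each <FIELD> of a conforming message.
CLIENT_V1 = [("vehicle", str), ("location", str),
             ("timestamp", str), ("vibration", float)]

def apply_definition(xml_text: str, definition=CLIENT_V1) -> dict:
    """Parse the <FIELD> elements in order and coerce each one to the
    type its definition entry declares, yielding a named record that a
    conditional action (tag, filter) could then inspect."""
    root = ET.fromstring(xml_text)
    values = [f.text for f in root.findall("FIELD")]
    return {name: cast(v) for (name, cast), v in zip(definition, values)}

record = apply_definition(
    "<MESSAGE ID='Client_V1'>"
    "<FIELD>Vehicle_1</FIELD>"
    "<FIELD>-80.421873,37.230253</FIELD>"
    "<FIELD>1/31/2005 2:39:27 PM</FIELD>"
    "<FIELD>0.8564123</FIELD>"
    "</MESSAGE>")
print(record["vibration"])  # 0.8564123
```

Coercing the vibration field to a number is what makes a threshold comparison in a later conditional action possible.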
After enabling the ClientV1 service, Tracking Server was capable of distributing incoming data to a Tracking Viewer website or Tracking Analyst. To allow for condition-based symbology (i.e., a flashing red dot) on the Tracking Viewer website, a Tag action was applied to the Client_V1 message definition. A Tag action applies a textual tag to a data message if a certain condition is met. The tag can be used to alter the symbology corresponding to the tagged message when viewed via the Tracking Viewer website. Tags are used by the TGM system to flash a large red dot in place of the normal symbol when a vibration value greater than 4 is received in a message. To create a tag action, the "New Action" button was pressed in the "Actions" tab in Tracking Server Manager. A name was given to the action, and it was defined as a Tag action. After naming the action, the Tag Action Parameters window opens as shown in Figure 3.14. The Tag Text was specified as "HighVibration" and the action was set to be triggered by an attribute query.
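The logic of the Tag action can be sketched as follows. This is not ESRI's implementation, only an illustration of the rule configured above: an attribute query on the vibration field attaches the "HighVibration" tag, which the viewer maps to the flashing red-dot symbology.

```python
import xml.etree.ElementTree as ET

VIBRATION_LIMIT = 4.0  # threshold configured for the Tag action

def apply_tag_action(xml_text, tag_text="HighVibration"):
    """Evaluate the attribute query on one message; return (fields, tags).

    Field names are hypothetical stand-ins for the message definition.
    """
    fields = {el.tag: el.text for el in ET.fromstring(xml_text)}
    tags = []
    # Attribute query: Vibration > 4 triggers the textual tag
    if float(fields.get("Vibration", 0.0)) > VIBRATION_LIMIT:
        tags.append(tag_text)
    return fields, tags
```

A message carrying a vibration value of, say, 5.2 would be tagged and rendered with the alert symbol, while a value of 2.0 would pass through untagged.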
requests originating from an external source first reach the router and are then forwarded to the server.

3.4. Data Analysis Component

The Data Analysis Component provides tools to the end user to monitor and analyze data from the Data Acquisition Component in real-time. The two methods for accessing data distributed by Tracking Server are the Tracking Viewer website and Tracking Analyst on a GIS workstation. After completing the server setup as outlined in Chapter 3.3.1, no further server-side modifications are necessary for using Tracking Analyst. After making a connection to Tracking Server via Tracking Analyst, a real-time layer representing data from the Data Acquisition Component may be added to the data frame in the same fashion as adding a static layer. This is demonstrated in Chapter 4. The remainder of this chapter will focus on the development of the Tracking Viewer website. The tools used to build the Tracking Viewer website are ESRI's ArcIMS, Tracking Server Author, and Tracking Server Designer. ArcIMS is used to distribute maps and GIS data via the web (ESRI 2004a). For this research, it was used as an image service to provide the background map of Blacksburg for the Tracking Viewer website. Tracking Server Author and Designer are wizard-style tools included with Tracking Server for building the Tracking Viewer website. The first step was to create the map for the image service using ArcIMS. As shown in Figure 3.16, shape files of Blacksburg were used to build the map for the Blacksburg image service in ArcIMS Author. The process of adding data and adjusting symbology
second screen connects to a user-specified ArcIMS server and lists the available image services. In the case of this research, the Blacksburg image service was hosted by a dedicated IMS server located at Virginia Tech's Center for Geospatial Information Technology (CGIT). After selecting the Blacksburg image service, the location of the tracking symbology file is specified in the third window. Appearance settings (title display, logo, color, fonts, etc.) for the Tracking Viewer website are selected in window 4. In the fifth window, settings are made to enable the end user to toggle layers on and off. The initial extent of the Tracking Viewer website is set to match that of the ArcIMS image service, as shown in window 6. Window 7 enables common GIS tools (zoom, pan, information, etc.) for the website. Finally, the Tracking Server Designer process is completed by selecting "Finish" in window 8. After building the Tracking Viewer website, it was necessary to deploy the site in order to make it available on the web.
5. Conclusions and Recommendations

5.1. Conclusions

A TeleGeoMonitoring (TGM) system to spatially monitor and analyze vehicle-derived vibration data in real-time was developed. This system serves as a prototype model for a customizable TGM system capable of monitoring and analyzing data derived from any type of vehicle-mounted or mobile sensor in real-time. Although it is built with proprietary hardware and software, the architecture of the TGM vibration system is open for modification or custom-tailored solutions, which aim to spatially monitor and analyze remotely derived data in real-time. The TGM vibration system was demonstrated by verifying that data were consistently acquired in a mobile environment, successfully sent to the Tracking Server, and properly displayed on the Tracking Viewer website. This research achieved its goal of proving the concept of a TGM system to spatially monitor vehicle vibration in real-time. However, many improvements and extensions are envisioned for the future of the TGM system and its related research.

5.2. Recommendations

Immediate recommendations (enumerated below) include improvements to the TGM vibration system. (1) The introduction of RTK-GPS or DGPS corrections could significantly improve the accuracy of the vehicle's location coordinates. A feasible option to improve GPS accuracy is to send corrections from Tracking Server to the client software via a custom data link. (2) The signal-to-noise ratio of the vibration data could improve with the application of appropriate amplifiers and filters to the Data Acquisition Component. Also, as mentioned by Rouillard (2002), the TGM system could benefit
from (3) data compression with minimal loss. Regarding the client GUI, (4) several parts could be removed to improve performance. Display of the dynamic vibration waveform is both processor-intensive and unnecessary from a functionality point of view. Simplification of the client software display could make possible the use of small tablet PCs or PDAs instead of the laptop, which is currently used. Ultimately, it is envisioned that the TGM vibration system developed in this research could evolve into a powerful research tool or comprehensive mine management solution. As a research tool, variations of the TGM system could benefit health and safety studies, equipment diagnostic studies and tests, tire research, haul road studies, etc. Bringing real-time data layers into an already powerful GIS environment could aid mine management and planning by allowing for decisions based on the mine status at that moment. If the TGM system were used at a surface mine, it is likely that an alternative means of transferring data would have to be considered. Surface mines are often located remotely and may contain obstructed views of communication satellites. A Mobile Ad Hoc Network (MANET) may be a suitable option for establishing a mine-wide wireless data network. As mentioned earlier, a MANET lacks fixed infrastructure and is comprised of mobile nodes (i.e., vehicles). Data may hop from one node to the next until they reach their destination. A MANET would remove the need for an outside data network (i.e., CDMA) and would reduce the amount of equipment required to establish a network. Figure 5.1 illustrates a variation of the TGM system as envisioned at a surface mine operation. The dashed lines in Figure 5.1 represent GPS data and the dotted lines represent a MANET.
Design Optimization of Safety Benches for Surface Quarries through Rockfall Testing and Evaluation

Andrew Wilson Storey

(Abstract)

The research presented in this thesis results from efforts to evaluate current design methodologies for safety benches in surface aggregate quarries. Proper bench design is important for preventing rockfall-related accidents and injuries without wasting the reserves held in the benches. An in-depth analysis has been performed using the results from 230 rockfall tests conducted at two surface quarries. The goal of this project is to give practitioners the tools they need for improved bench design. Principal Components and Cluster Analysis, techniques not previously applied to rockfall investigations, have been performed on the test data. The results indicate that both are valid analytical methods which show that the factors affecting the rollout distance of a rock are wall configuration, rock dimensions, and rock energy. The test results were then compared to the Ritchie Criteria, Modified Ritchie Criterion, Ryan and Pryor Criterion, Oregon Department of Transportation design charts, and RocFall computer simulations. Analysis shows that the lognormal distribution curves fitted to the test data provide an excellent yet quick design reference. The recommended design method is computer simulation using RocFall because of the ease of simulation and the site-specific nature of the program. For the two quarries studied, RocFall analysis showed that 20 ft benches with a 4 ft berm will hold over 95% of rockfalls, a design supported by the field testing. Conducting site-specific rockfall testing is also recommended to obtain realistic input parameters for the simulations and to provide design justification to regulatory agencies.
ACKNOWLEDGEMENTS

I would like to thank the Virginia Tech Mining and Minerals Engineering Department for helping me get to this point in my academic career. I am very grateful for all the support given to me by the department through my time at Virginia Tech. I would like to thank Dr. Erik Westman, my advisor, for all of his assistance and guidance through the course of this project as well as my academic career. I would also like to thank the other members of my graduate committee, Dr. Mario Karfakis and Dr. Skip Watts, for their expertise and help. I am thankful to have had the full support of this committee. I would also like to recognize and thank Luck Stone Corporation for their support of this project from a research standpoint. Specifically, I would like to recognize Matt Schiefer, Travis Chewning, Joe Carnahan, and Adam Parr for their time, suggestions, and assistance throughout this project. Finally, thank you to my family and friends who have enabled me to reach this point. Without their encouragement, I would not be in the position I am today.
1 Introduction

Rockfalls are a constant problem in surface mining. Natural conditions combined with mining methods serve to create a never-ending problem. Blasting, combined with the inherent rock fractures and weathering, creates kinematically free blocks of rock. When the force envelope around a block is altered by pore water pressure from groundwater, ice wedging from freezes, roots from vegetation, or other phenomena, the block may fall out of the wall, posing a danger to people and equipment (Hoek, Analysis of Rockfall Hazards, 2007). When designing quarries, engineers account for these rockfalls. Safety benches are horizontal steps left in the wall of a surface mine to stop rockfalls from traveling further down the slope and endangering personnel or equipment on the working bench (Figure 1.1).

Figure 1.1: Surface Mine Cross Section

Also called catch benches, these terraces are left at regular intervals. Unfortunately, the factors that come into play in the design of the bench width are numerous, making for a difficult problem. Geology, blasting, safety, pit design, hydrology, acceptable risk, reserve estimation, cost, and many other factors impact the chosen bench width. This complicated problem has therefore led to minimally engineered designs. While rules of thumb and historically safe
designs may be useful, relying on them can lead to unsafe conditions in certain cases or being too conservative in others. Aggregate producers do not want to injure people but cannot afford to leave salable material in an overly conservative catch bench. With the goals of improving safety and mining efficiency, this report aims to help producers by combining knowledge and research related to bench design in aggregate quarries and contribute to the creation of a more accurate guideline for safety bench design. A significant part of this work is field testing, which has been performed in two aggregate quarries owned by Luck Stone Corp, an aggregate producer headquartered in Richmond, Virginia. The testing procedure consisted of rolling rocks of various sizes and shapes over the crest of typical quarry highwalls as well as highwalls with pronounced launch features. Launch features are irregularities in the slope which project the falling rock outward from the wall on impact. When dropped, the rocks rolled/fell down the slope, mimicking natural rockfalls. After the rocks reached the toe of the slope and came to rest, characteristics of the fall and the test conditions were recorded for each rock. The collected data include geology, impact distance, rollout distance, rock size (dimensions), wall height, and wall angle. The data collected through the field work have been analyzed with previously unutilized methods in order to understand the major factors influencing a rockfall. Principal Components Analysis and Cluster Analysis, both spatial data analysis techniques, have been performed to extract the underlying influences on how far a rock will fall and roll. The author has not been able to find any examples of these methods being used for the analysis of rockfalls; thus, these techniques will be evaluated as a means for rockfall analysis.
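The Principal Components step described above can be sketched as follows. The data values below are synthetic stand-ins, not the thesis measurements; the sketch only shows the mechanics of extracting the dominant directions of variation from standardized rockfall variables.

```python
import numpy as np

def principal_components(data):
    """PCA via eigendecomposition of the correlation matrix.

    Rows are rockfall tests; columns are measured variables
    (e.g., wall height, wall angle, rock size, rollout distance).
    Returns component loadings (columns) and the fraction of total
    variance each component explains, in descending order.
    """
    corr = np.corrcoef(data, rowvar=False)       # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)      # eigh returns ascending
    order = np.argsort(eigvals)[::-1]            # reorder to descending
    explained = eigvals[order] / eigvals.sum()
    return eigvecs[:, order], explained

# Synthetic stand-in data: 6 tests x 4 variables
rng = np.random.default_rng(0)
tests = rng.normal(size=(6, 4))
loadings, explained = principal_components(tests)
```

The first few components with the largest explained-variance fractions are then interpreted against the original variables to name the underlying influences (e.g., wall configuration versus rock dimensions).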
Principal Components Analysis and Cluster Analysis are being used with the goals of expanding the tools available to researchers and corroborating observations with rigorous analytical methods. The results have also been compared to current safety bench design practices with the goal of validating or disproving the current methods as appropriate quarry design techniques. The Ritchie Criteria, the Modified Ritchie Criterion, the Ryan and Pryor Criterion, the Oregon Department of Transportation's (ODOT) design guide, and the geotechnical software program RocFall have all been compared to the project test results with the goal of determining the most accurate method for calculating safety bench width. This thesis presents a detailed discussion of rockfall issues. The research efforts of previous and current investigators are reviewed, and a detailed description of the field work,
2 Literature Review

Rockfall is an issue which has affected the mining industry since people first started excavating the ground. Once exposed, many elements begin to work on the rock which will eventually lead to a fall. Slope designers have dealt with this problem through a variety of classification and remediation strategies. These issues will be reviewed in order to provide a foundation for the discussion of the research presented in this paper.

2.1 Safety

Most importantly, rockfalls are a safety concern. Between 2005 and 2009, 18 miners were killed in the United States while working in surface Metal/Non-Metal (M/NM) mines by material falling or sliding from a highwall, which accounts for 17.0% of the surface M/NM fatalities over that time period (Mine Safety and Health Administration). Unfortunately, this problem occurs on a global scale. For example, the Spanish Association of Aggregate Producers, ANEFA, reports that more than 20% of quarry incidents are caused by rockfalls, which is the highest cause of fatalities (Alejano, Pons, Bastante, Alonso, & Stockhausen, 2007). The cause of even a single injury or fatality is worthy of study, but the increased danger of rockfalls clearly warrants investigation. As a result, numerous scholars have undertaken research projects which aim to shed light on the issue as a whole as well as the specific issues related to rockfalls.

2.2 Rockfall Causation

Two factors are prerequisites for a rockfall: a kinematically free block of rock and a change in the forces on that block. Before a rockfall can occur, the block must be free to release from the rest of the rock mass. Prior to mining, natural discontinuities and joints play the initial role in creating these blocks (Giani, 1992; Rossmanith & Uenishi, 1997; Hoek, Rippere, & Stacey, 2000).
During operation, blasting extends current discontinuities and creates new fractures in the rock mass, which generates more potentially unstable blocks (Zou & Wu, 2001; Hagan & Bulow, 2000). For an uncontrolled production blast with a free face, the in situ rock may be damaged a distance of 1-1.5 times the face height into the rock mass, if not more (Hoek & Karzulovic, 2000). In addition, the rock mass may be subjected to more than one blast due to the multiple benches used in a surface mine. On the positive side, using controlled blasting
techniques will reduce the amount of blast-induced cracking, which will improve the stability of a slope (Harries, 1982). Even with blocks that are free to move, rockfalls will not occur unless the forces acting on the block change in such a way as to cause instability (Call & Savely, 1990; Hoek, 2007). These triggers are seemingly endless in the mining environment. The most important cause is water, and multiple studies have found that the majority of failures can be attributed to the presence of water (Pantelidis, 2009). More specifically, Hoek (2007) presents numerous causes. Increased pore water pressure from rain, runoff, or groundwater can force a block from the wall. Vegetation growing in the fractures in the wall can push blocks out as well as leverage blocks when swaying in the wind. Over time, freezing and thawing slowly wedge discontinuities open and force blocks loose. Physical and chemical weathering leads to weakening of the rock, thereby creating instability. Loss of the supporting material below a block can cause a rockfall as well. This cause has led to the study of key blocks, which are blocks that allow movement of other blocks when removed from the wall. Identifying and securing these key blocks is important for wall stability. Rossmanith and Uenishi (1997) add cyclic temperature change to the list of causes, as supported by drillers' and blasters' observations of numerous rockfalls in the early hours of the morning. The cycles of elongation and contraction due to the daily temperature change result in frictional slip and rock movement. Hoek (2007) also suggests that rockfalls can be caused by blasting, construction, and other mechanical means, with a potential for occurrence of one to two orders of magnitude higher than the aforementioned triggers. This means that miners must be especially careful, because a mine meets Hoek's designation for an increased chance of rockfall.
With all of these sources of rockfall, mine designers are left to try to prevent rocks from falling or stop falling rocks from endangering personnel and equipment.

2.3 Slope Classification

The first step many practitioners take when designing a new slope or improving a current one is to classify or rate the slope based on instability. Many classification systems have been developed for or applied to slope evaluation, and a good system must be simple to understand, functional, based on easily obtained measurements, and exact enough for the application
(Wittreich, 1987). Numerous systems have been developed over the years for use in a variety of environments. Some of the first classifications were developed by Terzaghi and Deere et al. In 1946, Terzaghi developed one of the first rock classification systems, the Rock Load Theory, which is mainly used for tunneling (Bhawani & Goel, 1999). Deere et al (1967) proposed the Rock Quality Designation index (RQD) as a means of assessing rock mass quality from core samples as part of a four-step design template for structures in rock (Hoek, 2007). RQD has become a very basic measurement parameter and has been included in many classification systems since its inception. In 1972, the Rock Structure Rating (RSR) developed by Wickham et al pioneered the use of individual quantitative ratings based on three parameters: geology, joint geometry, and groundwater/joint condition (Hoek, Rock Mass Classification, 2007). These three parameters are scored individually and then added together to obtain the overall RSR. This ground-breaking technique of combining ratings for different parameters has been subsequently adopted for most of the classification methods developed after 1972. Two years later, in 1974, Barton, Lien, and Lunde of the Norwegian Geotechnical Institute (NGI) proposed the Rock Mass Quality System (Q) by combining RQD, four joint parameters, and a Stress Reduction Factor into one rating after analyzing over 200 case histories of tunnels and caves (Bhawani & Goel, 1999). The four joint parameters are as follows:

1) Joint Set Number: total number of joint sets
2) Joint Roughness Number
3) Joint Alteration Number
4) Joint Water Reduction Factor: a measure of pore water pressure

The rating for each of these parameters is taken from tables developed from the case history analysis. The Stress Reduction Factor is also taken from a table based on rock condition and can be considered a measure of the total stress (Bhawani & Goel, 1999).
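The way these six ratings combine into a single Q value can be sketched as below, using the standard published form of the Q equation. The sample parameter values are illustrative only and are not taken from this thesis; in practice each value comes from the NGI tables.

```python
def q_rating(rqd, jn, jr, ja, jw, srf):
    """Barton, Lien, and Lunde's Q rating.

    Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF), where the three quotients are
    commonly read as relative block size, inter-block shear strength,
    and active stress. All six inputs are read from the published tables.
    """
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Illustrative values only: RQD = 75%, two joint sets (Jn = 4),
# rough planar joints (Jr = 1.5), slightly altered walls (Ja = 2),
# dry conditions (Jw = 1), medium stress (SRF = 1)
q = q_rating(75, 4, 1.5, 2, 1, 1)   # 14.0625
```

Because the parameters multiply rather than add, a single unfavorable factor (e.g., a high SRF) can drop the rating sharply, which is one reason the later Rock Mass Number proposal removes the stress term.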
Bieniawski extended Terzaghi and Wickham's work in 1976 with the publication of the original Rock Mass Rating system (RMR), which uses five parameters and is well suited for tunneling (Bhawani & Goel, 1999; Hoek, Rock Mass Classification, 2007). The five parameters are:
1) Uniaxial Compressive Strength
2) RQD
3) Discontinuity Spacing
4) Discontinuity Condition
5) Groundwater

Just like the Q System, RMR uses tables for the selection of the values for each parameter. This system has also been revised over the years by Bieniawski and others as more case studies have been performed (Hoek, Rock Mass Classification, 2007). Although these classification schemes can be applied to rock slopes, most were designed for underground use. Practitioners should keep in mind that extending the use of a classification system can lead to problems (Palmstrom & Broch, 2006; Pantelidis, 2009). Luckily, more specific attention has been given to surface rock mass classification in the past thirty years, and slope designers have a number of tools from which to choose. In 1979, Bieniawski added a discontinuity orientation parameter to RMR to make the system more suitable for rock slopes (Pantelidis, 2009). Similar to RMR, Romana developed the Slope Mass Rating (SMR) in 1985 for evaluation of rock slopes (Bhawani & Goel, 1999). The SMR includes three parameters accounting for joints and joint/slope interaction as well as a fourth parameter, an adjustment for excavation method (Bhawani & Goel, 1999). Another example is the Rock Mass Number, which is an attempt to improve the Q System. The Rock Mass Number simply removes the Stress Reduction Factor but is the same as Q in every other way. Removing the stress parameter eliminates the error associated with selecting that factor, thereby improving the overall rating. Palmstrom pulled the best from 15 rating systems in 1995 to develop the Rock Mass index (RMi) (Bhawani & Goel, 1999). RMi covers a variety of conditions and uses the following parameters (Palmstrom, 2010):

1) Uniaxial Compressive Strength
2) Joint Roughness
3) Joint Alteration
4) Joint Size
5) Joint Spacing
The Geologic Strength Index (GSI) is a more user-friendly classification system which shies away from the quantitative nature of previous classification schemes. Developed by Hoek in 1994 and reworked into its current form by Hoek and Marinos, GSI uses an estimation of structure and surface conditions to obtain a rating (Marinos, Marinos, & Hoek, 2005). This system is more qualitative and is meant to provide an estimation of the properties of the rock without providing any recommendations for support or reinforcement, but an experienced person is required for appropriate ratings (Bhawani & Goel, 1999). Many rock slope classification methods come from highway slope design. Highway slope design and mine slope design are essentially equivalent, and classification methodology can be used interchangeably for the two environments. The Oregon Department of Transportation (ODOT) developed the Rockfall Hazard Rating System (RFHS) with the objectives of rating a slope's rockfall potential and helping determine the allocation of funding (Santi, Russel, Higgins, & Spriet, 2008). The RFHS has six parts, which are as follows:

1) A single database for slope inventory
2) Preliminary rating into three categories: A, B, or C; A is the highest rockfall potential
3) Detailed rating of the most hazardous slopes
4) Preliminary design for remediation of the worst slopes
5) Project development
6) Annual review

This process helps DOTs use resources efficiently and quickly improves overall highway safety. The detailed rating (Step 3) is a quantitative and qualitative system using 12 parameters, where a higher score indicates a more hazardous slope (Kliche, 1999). The 12 parameters used are slope height, ditch effectiveness, average vehicle risk, percent of decision sight distance, roadway width, block size/quantity per event, climate/presence of water, rockfall history, and four parameters based on joint conditions and erosion (Kliche, 1999).
An adaptation of Colorado's version of the RFHS is the Modified RFHS. Santi et al from the Colorado School of Mines rated 355 slopes in the state using five main categories: slope conditions, climatic conditions, geologic conditions, discontinuity conditions, and risk (see Table 2.I) (Santi, Russel, Higgins, & Spriet, 2008).
Table 2.I: Modified RFHS Parameters

Slope:
- Height
- Rockfall Frequency
- Average Slope Angle
- Launching Features
- Ditch Catchment

Climate:
- Annual Precipitation
- Annual Freeze/Thaw Cycles
- Seepage/Water
- Slope Aspect

Geology:
- Sedimentary Rock: Degree of Undercutting
- Sedimentary Rock: Jar Slake
- Sedimentary Rock: Degree of Interbedding
- Crystalline Rock: Degree of Overhang
- Crystalline Rock: Weathering Grade
- Block-in-Matrix Rock: Multiplier
- Block-in-Matrix Rock: Block Size
- Block-in-Matrix Rock: Block Shape

Discontinuities:
- Block Size/Volume
- Number of Sets
- Persistence and Orientation (only for Sed/Cr Rock)
- Aperture
- Weathering Condition
- Friction

Risk (Traffic):
- Sight Distance
- Average Vehicle Risk
- Number of Accidents

These five categories contain the 12 parameters from the original RFHS and more. Using univariate least squares regression, the statistically significant parameters were found and ranked based on importance. Equations were then created based on the three studied rock material types: crystalline, sedimentary, and block-in-matrix (Santi, Russel, Higgins, & Spriet, 2008). Block-in-matrix is material where the erosion of the matrix controls rockfall, for example glacial till or debris flow deposits. These equations can be used to give an estimated RFHS score without wasting time and resources collecting extraneous data. The presence of launch features and slope aspect were found to be the only two parameters significant for all slopes. For
crystalline rocks, the other most significant parameters are slope height, degree of overhang, and persistence/orientation of joints (Santi, Russel, Higgins, & Spriet, 2008). Specifically developed for mining, Rockfall Risk Assessment for Quarries (ROFRAQ) is a risk assessment method developed in 2008 for use in temperate climates (Alejano, Stockhausen, Alonso, Bastante, & Ramirez Oyanguren, 2008). ROFRAQ not only rates the condition of the wall but also accounts for risk. Using a probabilistic approach that assumes an accident occurs as the result of a chain of events, ROFRAQ rates the following six categories to obtain a final ranking:

1) Slope condition
2) Failure mechanism
3) Chance of triggering event
4) Likelihood of a rockfall reaching the mine bottom
5) Likelihood of a rockfall impacting personnel or equipment
6) Rockfall history

Taking the categories in order shows the chronology of a rockfall. These six ratings can then be combined to yield the chance of a rockfall causing an accident (Alejano, Stockhausen, Alonso, Bastante, & Ramirez Oyanguren, 2008). The most important rating is the likelihood of a rockfall impacting personnel or equipment. Similar to the ratings discussed previously, ROFRAQ uses a table of predetermined values to calculate the final score.

2.4 Slope Design

Once the slope(s) in a mine have been rated using an appropriate classification scheme, a practitioner can begin to create a slope design which balances safety, risk, and economics. Many slope stabilization methods are currently in use in mines throughout the world. These methods range from proper configuration of the slope to rock bolts and shotcrete. Hoek (2007) asserts that for protecting highways from rockfalls the most effective method is a catchment ditch at the toe of the slope. The equivalent for the mining industry would be a catch bench with a berm.
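The chain-of-events reasoning behind ROFRAQ, described in Section 2.3, can be illustrated as below. This is only a conceptual stand-in: ROFRAQ scores each stage from its predetermined tables rather than as literal probabilities, and the stage values here are hypothetical.

```python
def chain_probability(stage_probabilities):
    """Probability that every link in an accident chain occurs.

    Illustrates the chain-of-events idea: an accident requires an
    unstable slope AND a feasible failure mechanism AND a trigger AND
    the rock reaching the bottom AND an impact on personnel or
    equipment. Any near-zero link suppresses the overall risk.
    """
    p = 1.0
    for stage_p in stage_probabilities:
        p *= stage_p
    return p

# Hypothetical stage values for one wall (illustrative only)
p_accident = chain_probability([0.8, 0.6, 0.5, 0.4, 0.1])  # ≈ 0.0096
```

The multiplicative structure explains why the impact likelihood is the most important rating: it is the last link, and reducing it (e.g., with a wider catch bench) scales the whole product down.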
As such, the design of catchment ditches in the highway industry and catch berms in the mining industry have been extensively studied by researchers.
One of the pioneers in the study of slope catchment design is Arthur M. Ritchie of the Washington State Highway Commission. In 1963, Ritchie published design guidelines for catchment width and ditch depth based on his rockfall research (Ritchie, 1963). The guidelines were based on data from simulated rockfalls Ritchie caused by rolling rocks down various natural and manmade talus and quarry slopes (Pierson, Gullixson, & Chassie, 2001). Ritchie measured the distance the rocks landed and rolled from the toe of the slope and observed each rock's motion by recording the falls on 16-mm film (Ritchie, 1963). Ritchie also tested containment systems, including a ditch, mimicked by an inclined wooden platform mounted on a truck, and a rock fence (Ritchie, 1963). The design guidelines published by Ritchie became the standard for highway slope design until a change in the law prevented steep ditches next to roadways. With the change in the law, Ritchie's design guide no longer applied for highway design. With the goal of creating a new design guide and filling in the gaps of Ritchie's work, ODOT undertook a comprehensive study of rockfalls, the results of which were published in 2001 (Pierson, Gullixson, & Chassie, 2001). The study was performed on a representative pre-split highwall in order to accurately replicate a highway slope. The comprehensive design charts are the result of over 11,250 rocks being rolled from slope heights of 40-80 ft, slope angles of 45°-90°, and catchment angles from flat to 4H:1V (Pierson, Gullixson, & Chassie, 2001). The charts are structured to help practitioners design for percent retention of rockfalls, which is an improvement over previous methods. ODOT's design charts allow users to optimize the slope design. In addition to the ODOT study, numerous researchers have performed simulated rockfalls to better understand the design issues and parameters.
Azzoni and de Freitas (1995) analyzed the results of rockfall testing in order to better understand the selection of the input parameters for rockfall computer simulation programs. Giani, Giacomini, Migliazza, and Segalini (2004) set up rockfall testing on two slopes with the goal of better understanding the motion of a rock during a fall as well as the parameters associated with a rockfall. Alejano, Pons, Bastante, Alonso, and Stockhausen (2007) simulated rockfalls using the program RocFall and used the results to create design charts, giving designers a tool for safe slope design. Giacomini, Buzzi, Renard, and Giani (2008) dropped rocks in a quarry to better understand rock fragmentation during a fall, specifically focusing on foliated rocks.
Many of the studies mentioned above used the results of the experimental testing to evaluate predictive methods. Furthermore, much research has gone into applying the results of highway slope research to mining (Ryan & Pryor, 2000). These predictive tools generally take one of three forms. The first and most basic is the form of an equation, usually connected to slope height, which calculates the recommended bench width. Many scholars have published equations, including Call (1992) and Ryan and Pryor (2000). Call's design equation is an application of Ritchie's recommendations applied to a mining environment, and Ryan and Pryor's version is an attempt to make Call's equation less conservative. Clearly, this method of bench design is very general and may not fit all situations but can be used as a starting point. The second methodology uses design charts which account for more slope configurations and often include data on the percentage of rocks retained. The ODOT design charts take this form, for example (Pierson, Gullixson, & Chassie, 2001). Other researchers have chosen this form as well, including Alejano et al (2007). Most of the design charts include data on expected percent retention of rockfalls, which allows practitioners to balance the competing demands of safety and cost. The third form employs computer models to predict rockfall results. Numerous programs exist, and most allow the user to input a slope and material properties. Once the slope and materials have been entered into the program, a form of statistical analysis will be run in order to predict rockfalls. One of the first, the Colorado Rockfall Simulation Program (CRSP), was developed in 1988 by Timothy J. Pfeiffer for his Master of Engineering thesis, and v4.0 is currently available (Jones, Higgins, & Andrew, 2000).
Published by Rocscience Inc., RocFall is another program which will simulate rockfall energy, velocity, bounce height, and endpoint based on user inputs (Rocscience Inc., 2010). A final example is STONE, a program which assesses rockfall risk at a more regional level using a digital terrain model and GIS capabilities (Guzzetti, Crosta, Detti, & Agliardi, 2002). Unfortunately, these programs are not 100% accurate; they are only as good as the input parameters and the analytical assumptions made in the calculations. For example, RocFall operates in two dimensions, but rocks obviously fall in three dimensions, which results in inherent error. In addition to research devoted to improving the quality of the simulation within the programs, many papers have been published regarding the selection of the parameters for the program inputs (Giani, Giacomini, Migliazza, & Segalini, 2004; Azzoni & de Freitas, 1995).
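The core idea behind such programs can be illustrated with a deliberately simplified two-dimensional lumped-mass model. The geometry (free fall from a vertical face onto a flat floor), the initial horizontal push, and the normal and tangential restitution coefficients below are illustrative assumptions only, not values taken from CRSP, RocFall, or STONE.

```python
# Minimal 2-D lumped-mass rockfall bounce sketch, in the spirit of programs
# like CRSP or RocFall. All parameter values are illustrative assumptions.
import math

G = 9.81  # gravitational acceleration, m/s^2

def rollout_distance(drop_height_m, rn=0.35, rt=0.85, n_bounces=10):
    """Horizontal distance from the toe after free fall and repeated bounces."""
    # Free fall from the crest: vertical velocity at first floor impact.
    vy = math.sqrt(2.0 * G * drop_height_m)
    vx = 0.5  # small horizontal velocity imparted at the crest (assumed)
    x = 0.0
    for _ in range(n_bounces):
        # Apply normal/tangential restitution at each floor impact.
        vy *= rn
        vx *= rt
        # Time of the parabolic flight back to the floor, and distance covered.
        t = 2.0 * vy / G
        x += vx * t
    return x

print(f"rollout ~ {rollout_distance(15.0):.1f} m")
```

Real codes add slope segment geometry, surface roughness, and statistical sampling of the inputs, but the sensitivity of the result to the restitution coefficients is already visible in this sketch, which is why so much of the literature concerns parameter selection.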
For example, a help document published by Rocscience cites nine separate papers, in addition to user feedback, as guidance for RocFall input parameter determination (Rocscience Inc., 2010). The variability of the material properties of a rock mass warrants testing the properties of the rock being simulated. If material testing has not been performed, the input parameters can be chosen by comparing simulation and experimental results: the results of rolling rocks down an actual slope can be matched to the simulation by adjusting the input parameters. Once the simulations match the experimental rockfalls, the programs can be used to design appropriate benches. These three forms of rockfall prediction are invaluable to a slope designer. Unfortunately, the myriad of criteria and programs can cloud the issue of bench design further. The author believes that the existence of so many methods indicates that the task of rockfall prevention must be very case specific; no one criterion will cover everything. Fortunately, past research has laid out a clear path to follow when designing a slope, and practitioners can use the most appropriate tools available to create a safe, economic slope.
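The back-calculation idea described above can be sketched as a simple parameter sweep: run the model repeatedly and keep the parameter value whose prediction best matches the field measurement. The one-parameter toy model and the observed rollout value below are hypothetical stand-ins for a real simulator and real field data.

```python
# Sketch of parameter back-calculation: sweep a model parameter until the
# simulated rollout matches a measured average. toy_rollout() is a
# hypothetical stand-in for a full simulator such as CRSP or RocFall.

def toy_rollout(restitution: float) -> float:
    # Assumed response: rollout distance grows with the restitution coefficient.
    return 40.0 * restitution ** 2

def calibrate(observed_rollout: float, lo: float = 0.0, hi: float = 1.0,
              steps: int = 1000) -> float:
    """Grid search for the parameter whose prediction best matches observed."""
    best_r, best_err = lo, float("inf")
    for i in range(steps + 1):
        r = lo + (hi - lo) * i / steps
        err = abs(toy_rollout(r) - observed_rollout)
        if err < best_err:
            best_r, best_err = r, err
    return best_r

r_fit = calibrate(11.8)  # hypothetical observed average rollout, ft
print(f"fitted parameter ~ {r_fit:.3f}")
```

In practice several parameters are adjusted at once and the match is judged against the full distribution of impact and rollout distances, not just the mean, but the loop structure is the same.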
3 Testing Description

The rockfall testing performed for this project consisted of rolling rocks of various sizes off a standard highwall in two quarries owned by Luck Stone Corp. Site 1 is a granite quarry, and Site 2 is a diabase quarry. The section of highwall used for the tests was chosen to represent an average quality wall, as determined by the foremen at each site. The site foremen deal with the walls every day and are the most qualified people to choose an average section of wall, meaning the chosen walls were neither overly smooth nor overly rough. The chosen walls were also away from production areas, both to avoid interfering with operations and to remove distractions for safety reasons. In addition, a wall profile with a pronounced launch feature was chosen at Site 1 by the author.

3.1 Experimental Testing Setup

Before the actual testing began, the test setup was completed. Along the chosen wall sections, four drop points were selected so that the falling rocks would encounter the main features in the wall. In addition, a fifth profile was selected to encounter a major launch feature along another section of wall at Site 1. This setup led to nine total drop points: four average profiles at Site 1, one launch feature profile at Site 1, and four average profiles at Site 2. Figure 3.1, Figure 3.2, and Figure 3.3 show pictures of the walls for each of the nine profiles.

Figure 3.1: Picture of Wall from Site 1, Profiles 1-4
The smallest rock had a length of 7 in, while the largest rock measured 60 in long. The selected rocks were not chosen to match particular shapes or sizes; this randomness in the rock dimensions was desired so as not to skew the data. Figure 3.5 contains a chart showing the range of the rock lengths and widths.

Figure 3.5: Rock Size Distribution

Not all of the rocks could be measured accurately for size due to safety concerns with being too close to the highwall, but observation of the unmeasured rocks agrees with the size distribution shown in Figure 3.5. In addition to collecting the test rocks, a measurement grid was painted on the quarry floor in 5 ft intervals from the toe of the wall.

3.2 Experimental Testing

Once setup was completed, the actual testing began. An excavator was used to remove a rock from the pile and gently push it off the highwall at the desired drop point. An excavator was chosen for safety, i.e., the machine sits away from the wall, and for finesse, i.e., the bucket can push the rock off the wall while imparting minimal horizontal velocity, which accurately mimics a natural rockfall. Before rolling a rock, a researcher would make sure everyone at the toe was away from the wall and then radio an all-clear signal to the excavator operator. The operator would then roll a rock off the highwall crest. After checking the wall, a researcher would radio a hold signal, and the excavator operator would set the bucket on the ground away from the crest.
Table 3.II: Testing Results Summary

Result                           Site 1   Site 2   Total
# Rocks Rolled                      114      116     230
Ave Impact Distance (ft)            8.7      6.7     7.6
Max Impact Distance (ft)             23       20      23
St. Dev. Impact Distance (ft)       5.2      3.2     4.3
Ave Rollout Distance (ft)          11.5     12.2    11.8
Max Rollout Distance (ft)            36       33      36
St. Dev. Rollout Distance (ft)      5.9      6.2     6.1

The results show the similarity between the two sites. The most significant difference, two feet, is in the average impact distance. This difference can be explained by the inclusion of a separate launch feature test at Site 1, which caused larger impact distances. The impact and rollout distances can also be analyzed for distribution. The distribution of the impacts and rollouts can be seen in Figure 3.7.

Figure 3.7: Impact and Rollout Frequency Chart

This trend indicates that most rockfalls will impact and roll out to distances within 15 ft of the toe, and a much smaller group will fall outside of this distance.
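The per-site summaries of the kind shown in Table 3.II are straightforward to reproduce from raw trial measurements. The distances in this sketch are invented placeholders, not the actual trial data.

```python
# Reproducing the kind of summary shown in Table 3.II from raw distances.
# The sample values are invented placeholders, not the measured trial data.
import statistics

impact_ft = [4.0, 6.5, 8.7, 12.0, 23.0]  # hypothetical impact distances, ft

print("# rocks  =", len(impact_ft))
print("average  =", round(statistics.mean(impact_ft), 1))
print("maximum  =", max(impact_ft))
print("st. dev. =", round(statistics.stdev(impact_ft), 1))
```

Note that `statistics.stdev` computes the sample (n-1) standard deviation, which is the appropriate choice when the rolled rocks are treated as a sample of all possible rockfalls.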
Only 5.7% of trials impacted at a distance greater than 15 ft from the toe, and 21.1% of trials rolled out farther than 15 ft. Excluding the trials from the launch feature test at Site 1 leaves 0.5% of rocks impacting and 15.3% of rocks rolling out greater than 15 ft from the toe. The author suggests that the addition of a berm would serve as a barrier to rolling rocks, further lowering the percentage of rocks which roll past 15 ft from the toe. Such a berm was not included in the experimental testing for this research in order to obtain accurate, unhindered rollout data. Visual inspection of Figure 3.7 shows a distribution skewed to the right, which is consistent with a lognormal distribution. Using Microsoft Excel, lognormal equations were fitted to the impact and rollout distributions (Equations 3-1 and 3-2, respectively); both take the standard lognormal density form

f(x) = (1 / (x σ √(2π))) exp(−(ln x − μ)² / (2σ²))

where x is the distance from the toe in feet and (μ, σ) are the log-mean and log-standard deviation fitted separately to the impact and rollout data. A graph of these equations can be seen in Figure 3.8, and the cumulative lognormal distributions can be seen in Figure 3.9.

Figure 3.8: Individual Lognormal Distribution
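The fitting step can be mirrored in a few lines: estimate the log-mean and log-standard deviation from the data, then use the lognormal CDF to estimate the fraction of rocks travelling past a given distance. The sample distances below are invented, not the measured trial data, so the fitted parameters are illustrative only.

```python
# Hedged sketch of fitting a lognormal distribution to rollout distances and
# estimating the fraction of rocks travelling past 15 ft. Sample data invented.
import math
import statistics

rollout_ft = [3.0, 5.0, 7.5, 9.0, 10.0, 11.0, 12.5, 14.0, 18.0, 26.0]

logs = [math.log(x) for x in rollout_ft]
mu = statistics.mean(logs)      # log-mean
sigma = statistics.stdev(logs)  # log-standard deviation

def lognormal_pdf(x: float) -> float:
    """Lognormal density at x, the form of Equations 3-1 and 3-2."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2.0 * sigma ** 2)) / (
        x * sigma * math.sqrt(2.0 * math.pi))

def prob_exceed(threshold_ft: float) -> float:
    """P(X > threshold) from the lognormal CDF, via the error function."""
    z = (math.log(threshold_ft) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 - math.erf(z))

print(f"mu={mu:.3f}  sigma={sigma:.3f}  P(X > 15 ft)={prob_exceed(15.0):.3f}")
```

The exceedance probability is the quantity of practical interest for bench design, since it estimates the share of rockfalls a bench of a given width would fail to retain.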