6.3 Experimental Design

The study presented in this paper examined thirty-six PPRVs with varying plug thicknesses and fill levels across six temperatures. Barometric pressure and ambient room temperature were also captured as covariates to ensure that these environmental conditions did not significantly affect the experiment. Relative humidity was not captured because the experiment was executed in a laboratory in which the relative humidity was kept constant at 40%. The varying PMCH fill levels were applied to the sources to determine the effect of PMCH depletion over time, assuming that a failure of the plug or the shell had not occurred. In order to determine the effect of temperature on release rate, six different temperatures were applied in random time order over the course of the study. This experiment was completed in two main parts: assembling the PMCH release sources and determining the mass flow rate of each release source. The following section details the experimental parameters from preparing the PPRVs to executing the experiment.

The PMCH release source components and designs will first be discussed. The main body of the release vessel is an aluminum cylinder that is 6.35 cm (2.50 in) in length with an outside diameter (OD) of 0.747 cm (0.294 in). The cylinder itself is a hollow shell with one end sealed and the other end open to the atmosphere. The shell has a wall thickness of 0.0356 cm (0.0140 in) that is consistent throughout the body of the cylinder, which gives an inside length of 6.31 cm (2.49 in) and an inside diameter (ID) of 0.711 cm (0.280 in). A schematic of the aluminum shell is displayed in Figure 6.3.
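For reference, the shell dimensions above imply an internal cavity volume of roughly 2.5 mL. A minimal sketch of that calculation, using only the dimensions reported here, is shown below.

```python
import math

# Shell internal dimensions reported above
inside_diameter_cm = 0.711  # cm (0.280 in)
inside_length_cm = 6.31     # cm (2.49 in)

# Internal volume of the cylindrical cavity (1 cm^3 = 1 mL)
internal_volume_ml = math.pi * (inside_diameter_cm / 2) ** 2 * inside_length_cm
print(f"Internal shell volume: {internal_volume_ml:.2f} mL")  # ~2.5 mL
```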
Four PMCH fill levels, 0.100 mL, 0.275 mL, 0.425 mL, and 0.600 mL, and four plug thicknesses, 0.635 cm (0.25 in), 1.067 cm (0.42 in), 1.473 cm (0.58 in), and 1.905 cm (0.75 in), were assigned according to a full factorial design. The fill level and plug thickness increments were divided equally between the two extremes. In previous preliminary studies, a fill level of 0.500 mL was used. A maximum fill level of 0.600 mL was chosen to determine if there was a significant difference in release rate with a greater amount of PMCH. The minimum fill level of 0.100 mL was constrained by the lowest accurate measurement that the chosen liquid-tight syringe could achieve. The minimum and maximum plug thicknesses were chosen to represent a range that could be consistently cut and press fitted with high precision. Two duplicates were created for each unique set of factors (e.g., two PPRVs with 0.100 mL of PMCH and a plug thickness of 0.25 in). The only exception was the 0.600 mL PPRVs, for which three sources were assembled instead of two. All 36 release sources were assembled at one time without any personnel substitutions to ensure consistency. A detailed listing of the PPRVs can be found in Table 6.1.
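To make the replication scheme concrete, the short sketch below enumerates the full factorial design described above and confirms that it yields the 36 release sources; it is an illustration only and does not reproduce the actual PPRV numbering in Table 6.1.

```python
from itertools import product

fill_levels_ml = [0.100, 0.275, 0.425, 0.600]
plug_thicknesses_cm = [0.635, 1.067, 1.473, 1.905]

sources = []
for fill, plug in product(fill_levels_ml, plug_thicknesses_cm):
    # Two replicates per combination; the 0.600 mL fill level receives a third
    replicates = 3 if fill == 0.600 else 2
    sources.extend((fill, plug) for _ in range(replicates))

print(len(sources))  # 4*4*2 + 4 extra 0.600 mL sources = 36
```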
The PPRVs were then randomly assigned to one of two 500 mL borosilicate beakers filled with sand, resulting in 18 PPRVs per beaker. The sand served as a temperature distribution medium to ensure that all of the sources maintained a uniform temperature as each temperature treatment was applied. These beakers were then placed inside a 20 L water bath capable of maintaining temperatures from ambient room temperature to 100°C with a uniformity of ± 0.2°C, a stability of ± 0.25°C, and a precision of ± 0.1°C. The six chosen temperatures were applied randomly in time over the course of approximately 109 days as shown in Table 6.2.

Table 6.2. Temperature treatments.
Time Order    Temperature (°C)
 1            35
 2            45
 3            35
 4            25
 5            45
 6            25
 7            30
 8            40
 9            50
10            50
11            40
12            30

The PPRVs were given 48 hours to equilibrate with each new temperature treatment. After the 48 hour period, the mass of each source was recorded once per day over a period of five days. At the conclusion of the monitoring period, the change in mass per unit time was determined for the final analysis.

The experiment uses a strip-plot design model with two factors and two covariates. The first factor was the combination of plug thicknesses and PMCH fill levels. This factor was applied randomly to the first whole-plot unit of the PPRVs. Each of the 16 treatment combination levels was given two replicates. The one exception was the 0.600 mL PMCH fill level, which was assigned three replicates. A similar fill level had already been successfully applied in previous studies.
As a result, the 0.600 mL treatment was assigned the extra replicates in order to provide a highly precise base for comparing the effect of the other fill levels. The second factor was temperature, which was assigned randomly to the second whole-plot unit of time in weeks. Two replicates were used for each of the six temperature treatment levels. The response, release rate (i.e., change in PPRV mass per unit time), was obtained using the split-plots. The split-plots were determined using the intersection of the whole-plots. The two covariates, or variables that were present but not directly controlled and that may impact the response, were ambient room temperature and mean sea level barometric pressure. All factor and covariate terms were assumed to be fixed effects.

Table 6.3 displays a subset of the experiment design. The first and second rows represent PPRV number 10 and PPRV number 8 respectively with their randomly assigned combinations of plug thickness and PMCH level. The first and second columns display the temperature randomization, 35°C and 45°C, over the course of Weeks 1 and 2 respectively. The remaining portion of the experiment was a continuation of this table encompassing the remaining sources and treatments listed previously. Each temperature treatment was applied uniformly across all of the plug thickness–PMCH fill level combinations. For example, the water bath was set to 35°C during Week 1 for all sources. No modifications were made to the sources in terms of plug thickness and PMCH fill level as the new temperature treatment of 45°C was applied during Week 2. Thus, each factor is held constant across all levels of the other factor, defining a strip-plot design. The plug thickness–PMCH fill level combinations were held constant to decrease the amount of PMCH needed for the experiment. The application of a more traditional randomized design, such as a split-plot design, would have required the manufacture of new PPRVs for each temperature treatment to maintain independent experimental units (EU). This type of experimental design would not have been cost or labor effective according to the constraints of this study. In order to improve the efficiency of time utilization, the temperature treatments were applied in a uniform manner across all PPRVs, thus reducing the number of overall trials as well.
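As a concrete illustration of how the release-rate response described above could be extracted from the five daily mass readings, the sketch below fits a least-squares slope to hypothetical mass data; the numbers are invented for illustration and do not come from this study.

```python
import numpy as np

# Hypothetical daily mass readings (g) for one PPRV after the 48 h equilibration
days = np.array([0, 1, 2, 3, 4], dtype=float)
mass_g = np.array([10.4521, 10.4512, 10.4504, 10.4494, 10.4486])

# Release rate = change in mass per unit time, taken as the least-squares slope
slope_g_per_day, _ = np.polyfit(days, mass_g, 1)
release_rate_mg_per_day = -slope_g_per_day * 1000.0  # mass loss reported as a positive rate

print(f"Release rate: {release_rate_mg_per_day:.2f} mg/day")
```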
6.5 Discussion and Conclusions

The PPRV study presented in this paper successfully derived an equation to predict release rate as a function of temperature and plug thickness. Figure 6.5 through Figure 6.9 present an overall graphical summary of the PMCH release over the course of the experiment. The graphs clearly show a distinct difference in the PMCH release rate, which is represented by the slope of the data points, between different temperatures. A significant difference can also be seen in Figure 6.9 between the release rates of sources with different plug thicknesses. These large release rate differences can also be seen in Table 6.5.

Table 6.4 displays the % RSDs of the release rates between PPRVs of the same design parameters across all temperature treatments. The precision of the release rates varied from 4% to 25%. The higher % RSDs appear to be concentrated at the two larger plug thicknesses. These slightly elevated % RSDs may indicate that these release sources were assembled inconsistently. However, the low % RSDs reflected by the two thinner plug thicknesses contradict this source of error considering all of the PPRVs were assembled at once. The sources of the increased variation from the larger plug thicknesses are more likely derived either from the allotted equilibrium time or the time interval between mass change measurements. The PPRVs were given 48 hours to equilibrate with each new temperature treatment before the mass changes were recorded. If the larger plug thickness sources had not fully equilibrated with the new temperature, the release rate would have been slightly skewed between the beginning and end of the measurement periods. This issue would have been intensified if significant thermal expansion had occurred at the higher temperatures. The expansion of the plug along its longitudinal axis would have caused increased compression across the plug's cross-section. Any substantial compression would decrease permeability while increasing equilibrium time. Similarly, the time interval between measurements, about 24 hours, may not have been sufficient to produce a mass difference within the range of sensitivity of the analytical balance. Despite the higher % RSDs, the precision of the PPRVs was still acceptable given the magnitude of the mass changes in question.
Table 6.8 displays the difference in the release rate between the 0.600 mL sources and the sources with the same plug thicknesses across the different fill levels at each temperature treatment. The 0.600 mL fill level was proven in past studies to produce a reliable release rate. However, the effect of PMCH depletion over time was not analyzed. The release rate is expected to naturally deviate once the PMCH volume within the PPRV is reduced below a critical level, at which time the vapor pressure will be insufficient to maintain a stable rate. In order to identify this critical point, different fill levels were used to represent different time periods within the lifetime of the release source. Table 6.9 displays the estimated lifetimes of the PPRVs up to an 80% to 95% depletion of their original PMCH volume. Any significant difference between the release rate of the 0.600 mL sources and the release rate produced by the other fill levels would indicate the limit of viability for the PPRVs. As can be seen in Table 6.8, the majority of the release rates vary by less than 10% from the release rate of the 0.600 mL PPRVs. Only a few exceptions exist in the table. Based on the inconsistent appearance of the higher deviations, the fill level itself is an unlikely source for this difference. The lack of any repeating pattern suggests that the logging of the mass change at these points contained an error that artificially increased the difference in release rate. As such, the comparison shown in Table 6.8 indicates that the internal PMCH volume does not affect the release rate within the constraints of the experiment.

The ANOVA analysis presented in Table 6.10 indicates with an alpha level of 0.05 that the plug thickness, temperature, and the interaction between these two variables had a significant effect on the PMCH release rate of the PPRVs. The trend between the different plug thicknesses across temperature treatments in Figure 6.12 shows a rough linear relationship between temperature and release rate. The release rate was found to be directly proportional to temperature and inversely proportional to plug thickness. Some relatively weak interaction effects, which are represented by the slightly skewed character of the plot, can also be seen in this figure. The weak interaction effects shown in Figure 6.12 slightly contradict the conclusion of strong interaction given by the p-value of the effects test in Table 6.10. This inconsistency is due to the large sample size coupled with a smaller relative variance between the response from the plug thicknesses and the temperature. This characteristic gives the effects test a higher power to detect differences, thus resulting in the conclusion of a more significant interaction.
The ANOVA results regarding the significance of temperature and plug thickness agree with the results presented in Table 6.5. With an alpha level of 0.05, the two covariates, ambient room temperature and barometric pressure, did not have a significant impact on the release rate. The largest concern between these two covariates was barometric pressure. If barometric pressure was found to be a significant effect, it would have needed to be included in the final regression. Theoretically, normal ranges of barometric pressure should not have a significant impact on vapor pressure, the driving force of the PMCH release, which was confirmed by this result. The insignificance of the two monitored covariates confirms that the major environmental factors in the laboratory did not affect the release rate. PMCH fill level was also found not to have a significant impact on the release rate. The insignificance of the PMCH fill level supports the low variance between PPRVs displayed in Table 6.8. This result strongly suggests that the release rate of the sources over time will not be affected by the depletion of PMCH given that the silicone plug and aluminum shell maintain their integrity. The ANOVA analysis supports the conclusion gained from the data in Table 6.4. The PPRVs were thus free of any major experimental errors.

Using the significant effects, a regression equation was generated to predict the PMCH release rate as a function of temperature and plug thickness. The coefficient of determination, or model R², is 0.94 for fitting the release rate with these two effects and their interaction. In order to improve the accuracy of the prediction, some second-order terms were added to this regression equation. Other models involving higher-order terms were tested as well. However, the added benefits of the additional terms did not compensate for the increased complexity of the model and the coefficients. The resulting regression equation is provided as follows, where R is the release rate in milligrams per day (mg/day), A is the plug thickness in centimeters (cm), and T is the temperature in °C.
R = 0.6681 − 2.3973·(A/2.54) + 0.0383·T − 0.0976·(A/2.54)·T + 0.0006·T² + 3.7104·(A/2.54)²     ( 6.1 )

In Equation ( 6.1 ), a day is defined as 24 hours. The plug thickness range is from 0.635 cm (0.25 in) to 1.905 cm (0.75 in) and the temperature ranges from 25°C to 50°C. The model R² is 0.94, indicating that this equation will accurately predict the release rate of PPRVs manufactured and utilized within the constraints of this study. Although this equation may be extrapolated beyond the plug thicknesses and temperatures defined in this paper, the accuracy of the resulting release rate cannot be guaranteed. Based on the ANOVA analysis in Table 6.10 regarding the covariates, Equation ( 6.1 ) is accurate through normal variations of barometric pressure such as those created by altitude changes, weather systems, and ventilation flows. Similarly, the insignificant impact of PMCH fill level suggests that the PPRVs are viable through 80% to 95% of their estimated lifetimes. Figure 6.10 and Figure 6.11 show that PMCH fill level does greatly affect the life of the source. Since any loss of silicone plug or aluminum shell integrity will compromise the release rate, the fill level should be tailored to fit the conditions present at the intended release area. For example, certain environmental conditions, such as high ultraviolet exposure, may cause the PPRV to degrade faster. Although this study was able to determine the response of the PPRV based on controlled laboratory conditions, an actual field deployment may elicit a different response. As a result, an underground mine field evaluation of the PPRV will be conducted to determine the performance of the sources in the field.
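A minimal sketch of Equation ( 6.1 ), together with a rough lifetime estimate of the kind summarized in Table 6.9, is given below; the PMCH liquid density used for the lifetime estimate is a handbook-style value assumed for illustration and is not taken from this study.

```python
def release_rate_mg_per_day(plug_thickness_cm: float, temp_c: float) -> float:
    """Equation (6.1): PMCH release rate (mg/day) for 0.635-1.905 cm plugs and 25-50 degC."""
    a = plug_thickness_cm / 2.54  # plug thickness expressed in inches
    return (0.6681 - 2.3973 * a + 0.0383 * temp_c
            - 0.0976 * a * temp_c + 0.0006 * temp_c ** 2 + 3.7104 * a ** 2)

# Example: 0.635 cm (0.25 in) plug at 25 degC
r = release_rate_mg_per_day(0.635, 25.0)
print(f"Predicted release rate: {r:.2f} mg/day")

# Rough lifetime to 90% depletion of a 0.600 mL fill, assuming a PMCH liquid
# density of about 1.79 g/mL (assumed value, not reported in this study)
pmch_mass_mg = 0.600 * 1.79 * 1000.0
print(f"Days to 90% depletion at this rate: {0.90 * pmch_mass_mg / r:.0f}")
```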
Chapter 7: Field test of a perfluoromethylcyclohexane (PMCH) permeation plug release vessel (PPRV) using a dual tracer deployment in an underground longwall mine

ABSTRACT: Perfluoromethylcyclohexane (PMCH) has been shown to be a viable alternative to the widely used tracer gas sulfur hexafluoride (SF6). PMCH and SF6 were released in a Midwestern underground longwall mine. The operators of this mine graciously allowed full access to an active longwall panel during two stages of its advance, designated as Phase I and Phase II, to perform this study. This paper presents a study designed to determine the feasibility of deploying a PPRV in an underground environment for tracer gas studies. SF6 was also released in parallel in a full ventilation characterization of the longwall panel to examine the ability of PMCH to complement SF6. The results of this study not only showed the PPRV to be a feasible tracer release system in an underground environment but also highlighted the advantages of a dual tracer release.
7.1 Introduction

Sulfur hexafluoride (SF6) has been the predominant tracer gas used in underground mine ventilation studies for over 30 years (Thimons, Bielicki, and Kissell 1974). However, the ability of SF6 to function as the sole tracer is being hindered by two main issues: the increasing scale and complexity of mine ventilation systems along with the steadily growing background concentration of SF6 in the atmosphere (Levin et al. 2010, Geller et al. 1997, Ravishankara et al. 1993, Maiss et al. 1996). In order to mitigate these issues, recent studies have identified the compound perfluoromethylcyclohexane (PMCH) as a viable supplement and complement for SF6. PMCH is a perfluorinated cyclic hydrocarbon and is categorized as a perfluorocarbon tracer (PFT) due to its chemical inertness, low toxicity, and trace-level background presence in the environment. These properties make PMCH suitable for use as a tracer gas (Dietz 1991, Watson et al. 2007). Compounds of this type have been widely implemented in heating, ventilation, and air conditioning (HVAC) as well as atmospheric monitoring studies (Dietz 1991) but not yet in the field of underground mine ventilation.

PMCH exists as a volatile liquid at room temperature and pressure (National Institute of Standards and Technology 2011). This physical property prevents PMCH from being released using conventional techniques designed for gases. Previous studies have presented a permeation plug release vessel (PPRV) designed to convert PMCH from a liquid to a vapor and release it into a flow stream in a controlled manner. Although many laboratory studies have been conducted using the PPRV, it has not yet been utilized in an underground mine. This paper presents the use of a simultaneous, steady-state release of SF6 and PMCH in a Midwestern underground longwall mine. The purpose of this study was to determine the feasibility of the PPRV as a reliable system to release PMCH and also to examine its ability to complement SF6.
7.2 Background

PMCH is a perfluorinated cyclic hydrocarbon whose chemical structure is composed of perfluoroalkanes (Watson et al. 2007). Compounds of this type are biologically inert, chemically inert, and thermally stable (F2 Chemicals Ltd. 2011). The inert, non-reactive, and non-toxic nature of PMCH makes it an ideal choice as a tracer gas. PMCH is comprised of seven carbon atoms and fourteen fluorine atoms, which gives it a chemical formula of C7F14. The molecular structure of PMCH is displayed in Figure 7.1.

Figure 7.1. Molecular structure of PMCH.

The PMCH molecule is composed of two main parts, the cyclohexane ring and the methyl group bonded off to the side. This fully fluorinated molecule has a molecular weight of 350 g/mol and a boiling point of 76°C (169°F). The volatility of PMCH allows it to vaporize even at relatively low temperatures. Once in a vapor state, PMCH will remain a vapor even through cooler temperatures (National Institute of Standards and Technology 2011). Another advantage of PMCH is its detectability by a GC even at low concentrations. This ability stems from PMCH's low ambient background in the atmosphere, with concentrations in the parts per quadrillion (PPQ) (Cooke et al. 2001, Simmonds et al. 2002, Watson et al. 2007), and its high detection sensitivity derived from its molecular electronegativity (Simmonds et al. 2002).

The basic concept of the PPRV used in this field study was initially introduced by Brookhaven National Laboratory (BNL). The basic PPRV design consists of a hollow aluminum cylinder with one end of the cylinder opened to the atmosphere and the other end closed. Liquid PMCH is placed inside the cylinder, and the open end is sealed with a press-fitted silicone plug.
The high flow resistance caused by the silicone plug produces a pseudo-closed system that allows the PMCH to reach dynamic equilibrium within the source. This equilibrium produces a steady pressure differential between the inside and the outside of the vessel equivalent to the vapor pressure produced at the ambient temperature. This differential causes the vapor-phase PMCH to steadily diffuse through the silicone plug. The release rate will remain consistent as a function of temperature as long as the integrity is maintained and the compression of the plug is not excessive (Jordan and Koros 1990). Vapor pressure is affected most significantly by ambient temperature and is independent of atmospheric pressure. Thus the release rate can be predicted as a function of temperature. The thickness of the silicone plug also greatly affects PMCH diffusion as flow resistance increases with a thicker medium. As a result, a previous long-term study of the PPRV displayed in Figure 7.3 produced the following equation to compute the release rate of PMCH as a function of temperature and plug thickness.

R = 0.6681 − 2.3973·(A/2.54) + 0.0383·T − 0.0976·(A/2.54)·T + 0.0006·T² + 3.7104·(A/2.54)²     ( 7.1 )

In Equation ( 7.1 ), R is the release rate in mg/day, A is the plug thickness in cm, and T is the temperature in °C. This paper outlines a study utilizing the PPRV presented in Figure 7.3 in conjunction with Equation ( 7.1 ) to determine the feasibility of deploying this system in an underground environment for tracer gas studies. SF6 was also released as a part of this study for the full ventilation characterization of the target longwall panel in conjunction with PMCH to examine the ability of PMCH to complement SF6. A dual tracer release of this nature in an underground mine has not been attempted prior to this study.
7.3 Longwall Mine Overview

PMCH and SF6 were released in a Midwestern underground longwall mine. The operators of this mine graciously allowed full access to an active longwall panel during two stages of its advance, designated as Phase I and Phase II for this study. The ventilation around the panel is facilitated using a three entry headgate, a single entry tailgate, a three entry bleeder, and a fringe ventilation path. This mine utilizes a hybridized bleederless ventilation system to provide fresh air to the panel. As such, the tailgate consists of a single return entry that is not connected to the bleeders at the rear of the panel. The fringe ventilation system is another unique aspect of this type of ventilation system. A small amount of intake air is provided to the outside edges of the gob to ventilate any accumulation of gases. This ventilation stream only flows around the outside of the gob and is not designed to penetrate into the gob itself, to limit the potential for spontaneous combustion. The fringe ventilation is directed to the rear of the panel where it joins the bleeders through small pipes located in the headgate and the tailgate. Intake air is delivered through three entries in the headgate. The three intake branches combine to carry fresh air both across the longwall face and into the fringe. The ventilation flow across the longwall face is carried into the tailgate and then exits the mine through the main return. A basic schematic diagram of the ventilation system is displayed in Figure 7.4.
7.4 Experimental Design

As previously introduced, the tracer study of this panel was accomplished using a simultaneous steady-state release of SF6 and PMCH. The tracer study was completed in two separate phases, Phases I and II, representing the ventilation system near the start of the panel and near the end of the panel. Phases I and II were executed consecutively with a delay of approximately two months (50 days) between studies. During this delay, the longwall advanced an additional 802 m (2,630 ft) from the Phase I position. The tracer flow around the panel was monitored at nine locations. These nine locations represented all of the primary ventilation branches serving the longwall panel. A description of the release points (RP) and sampling points (SP) is presented in Table 7.1.

Table 7.1. Release point and sampling point descriptions.
Station Name    Description
RP1             Bleeder fan intake entry inby the last open crosscut (SF6)
RP2             Longwall shield adjacent to the tailgate (PMCH)
SP1             Main fan intake entry just outby the face
SP2             Belt entry outby the face
SP3             Fringe entry
SP4             Center of the face
SP5             Headgate bleeder tap
SP6             Tailgate bleeder tap
SP7             Fresh air entry in the bleeders
SP8             Consists of the SP8A, SP8B, and SP8C sampling points representing the three tube bundles in the tailgate
SP9             Tailgate entry outby the face

All of the sampling points were located in open airways except for SP8. SP8 represents the outlet of a tube bundle system consisting of three tubes extending various distances into the gob. These tubes were placed in order to detect any flow communication between the longwall face and the gob. Both phases of the study utilized the same relative monitoring locations, positioned at different absolute locations based on the position of the longwall. The general locations of the release points used for both tracer study phases are graphically represented in Figure 7.5. Detailed views of the release and sampling points are presented in Figure 7.6 and Figure 7.7.
Figure 7.7. Detailed view of the SP5, SP6, and SP7 sampling points.

Figure 7.6 shows that SF6 was released in the intake entry supplied by the bleeder fan just inby the last open cross-cut, while PMCH was released near the tailgate from the last longwall shield. PMCH was released at this location in order to isolate the PMCH release to the single tailgate entry. These two separate release points were chosen not only to determine the performance of the PPRV but to also examine the multi-zone analysis potential of a dual tracer release.

From the assigned intake entry, SF6 was deployed using a mass flow controller that provided a steady stream of the tracer at 200 standard cubic centimeters per minute (SCCM). An SCC of gas is defined as the mass of the gas that occupies a cubic centimeter of volume at standard temperature and pressure (STP). Assuming ideal gas behavior and an SF6 density of 6.0380 g/L at an STP of 25°C and 14.696 psia as defined by the manufacturer, the 200 SCCM mass flow is equivalent to 1.21 g/min.

100 PPRVs filled with 0.600 mL of PMCH and press-fitted with 0.635 cm (0.25 in) plugs were placed at the last longwall shield. This number of PPRVs was deployed to provide an adequate concentration of PMCH in the ventilation flow stream to be detected using a GC.
For Phase I, the PPRV bundle was set on an elevated flat area adjacent to the shearer's electrical track. For Phase II, the PPRV bundle was suspended from one of the longwall shield's hydraulic arms. Since the vapor diffusion of PMCH from the PPRV is a function of ambient temperature, the release rate could not be actively controlled. As previously introduced, the release rate is dictated by the following equation.

R = 0.6681 − 2.3973·(A/2.54) + 0.0383·T − 0.0976·(A/2.54)·T + 0.0006·T² + 3.7104·(A/2.54)²     ( 7.1 )

As a result, the ambient temperature was monitored throughout the study to provide an average release rate. Barometric pressure and relative humidity were also monitored to record any drastic shifts. However, these two variables are not expected to significantly impact the PPRV.

After the tracer gases were released, air samples were taken at regular intervals for several hours. For Phase I, air samples were taken at 15 min intervals for a six hour period. Although the tracers should have rapidly achieved a homogeneous distribution in the open entries, this time period was allotted to capture SF6 at SP5 and SP6 in the bleeders. The high resistance, low flow design of the fringe ventilation branch was expected to delay the travel of SF6, thus increasing the time required to reach a steady-state concentration. As a result, the six hour period was selected in order to accommodate a reasonable amount of tracer travel time. The SP8 tube bundles in this study extended approximately 300 ft, 400 ft, and 500 ft into the gob from the tailgate. The longwall was idled for the entire duration of the Phase I study.

For Phase II, air samples were taken at 30 min or 60 min intervals, depending on the sampling location, for a seven hour period. The original study design called for a 12 hour sampling period to account for the increased linear travel distance in the fringe of approximately 802 m (2,630 ft). However, logistical complications decreased the available time to seven hours. The scheduled study period intersected with the activation of the longwall near the end of the allotted time. As a result, Phase II represented six hours of idled longwall ventilation and one hour of active longwall ventilation.
The SP8 tube bundles for this phase extended 200 ft, 300 ft, and 400 ft into the gob from the tailgate.

7.5 Experimental Results

Due to the large amount of data collected for this study, the results are organized in subsections based on study phase and tracer gas.

7.5.1 Phase I SF6 Results

The Phase I vacutainer samples were analyzed using a gas chromatograph (GC) equipped with an electron capture detector (ECD) for SF6. PMCH was analyzed using a slightly modified approach that will be discussed in later sections. The GC was installed with a 30 m porous layer open tubular (PLOT) column coated with sodium sulfate deactivated alumina oxide. The column has an internal diameter (ID) of 0.25 mm and a film thickness of 5 µm. Table 7.2 displays the method parameters used to analyze SF6.

Table 7.2. GC analytical method for Phase I.
Parameter                        Description
Sample Injection Size            100 µL
Carrier Gas                      He
Injector Temperature             150°C
Split Ratio                      30:1
Linear Velocity                  35 cm/s
Isothermal Column Temperature    65°C
Detector Temperature             200°C
Make-up Flow                     N2 at 30 mL/min
Total Program Runtime            2.5 min

In order to determine the steady-state concentrations at the sampling points, a calibration curve was generated within the range of the data. The calibration curve is represented by the equation A = 205.53·C + 4414.48, where A is the GC area count response in V·min and C is the SF6 concentration in parts per billion by volume (PPBV). This equation was produced from a set of laboratory mixed standards.
As expected, the sampling points located in open airways showed the presence of SF6. The presence of the tracer at these locations confirms that the flow paths represented by sampling points SP1 – SP4, SP5, SP6, and SP9 are directly connected to RP1. These results also confirm that the air is flowing toward these sampling points from this release point. The processed data is displayed in Table 7.3.

Table 7.3. Phase I summary of results.
Sampling Point    SF6 Concentration (PPBV)    % RSD
SP1               159.09                      11.45
SP2               168.23                      7.91
SP3               167.13                      7.18
SP4               173.98                      13.43
SP5               152.52                      9.57
SP6               173.76                      20.83
SP7               0.00                        N/A
SP8-1             90.96                       125.92
SP8-2             58.85                       63.15
SP8-3             67.30                       48.68
SP9               140.44                      18.38

The % RSD, or percent relative standard deviation, shown in Table 7.3 is a measure of analysis precision between subsamples at each sampling point. All individual samples were analyzed in triplicate in order to determine the % RSD and thereby indicate GC analysis precision. The % RSD between the different sampling points, excluding the SP8 tube bundle, showed an acceptable level of consistency given the concentrations at which the tracer was present. The overall % RSD for all points except the tube bundles was 7%, demonstrating that all locations were sampled consistently and that a similar steady-state tracer concentration was achieved. The results of the tracer analysis are also displayed graphically in Figure 7.10 through Figure 7.12.
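To illustrate how the raw GC output described above could be reduced to the values reported in Table 7.3, the sketch below applies the Phase I calibration curve to hypothetical triplicate area counts and computes the % RSD; the area counts are invented for illustration.

```python
import statistics

def sf6_ppbv_phase1(area_counts: float) -> float:
    """Invert the Phase I calibration curve A = 205.53*C + 4414.48 (A in V*min, C in PPBV)."""
    return (area_counts - 4414.48) / 205.53

# Hypothetical triplicate area counts for one vacutainer sample
triplicate_areas = [37100.0, 37600.0, 36900.0]
concentrations = [sf6_ppbv_phase1(a) for a in triplicate_areas]

mean_c = statistics.mean(concentrations)
rsd_percent = 100.0 * statistics.stdev(concentrations) / mean_c

print(f"Mean SF6 concentration: {mean_c:.2f} PPBV, %RSD: {rsd_percent:.1f}%")
```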
7.5.2 Phase I PMCH Results

The average temperature recorded at SP4 was used in Equation ( 7.1 ) to determine the average release rate of PMCH. Based on an average temperature of 24°C (75.4°F) and a PPRV plug thickness of 0.635 cm (0.25 in), the release source bundle was expected to produce an average release rate of 7.01·10⁻⁵ g/min. Using the surveyed quantity of 1,780 m3/min (63,000 CFM) from the tailgate, the steady-state concentration of PMCH at SP9 is expected to be 3.35 PPTV. Due to the magnitude of the concentration, GC-ECD provided an inadequate detection sensitivity for the samples. As a result, a slightly different analysis technique was developed to quantify PMCH.

The vacutainer samples were analyzed using a gas chromatograph (GC) equipped with a single quadrupole mass spectrometer (MS) modified for negative ion chemical ionization (NCI). The chemical ionization gas was methane (CH4). The GCMS was modified for NCI because traditional electron impact (EI) ionization provided an inadequate detection sensitivity. PMCH has a high electron affinity, thereby facilitating the formation of negative ions. Given the soft ionization mechanism of NCI, the preservation of PMCH's negative molecular ion provided an exceptional detection sensitivity. The NCI-GCMS, identical to the GC-ECD systems described in Section 7.5.1 and Section 7.5.3, was installed with a 30 m porous layer open tubular (PLOT) column coated with sodium sulfate deactivated alumina oxide. The column has an internal diameter (ID) of 0.25 mm and a film thickness of 5 µm. Table 7.5 and Table 7.6 display the method parameters used to analyze PMCH.
Table 7.5. GC method used for PMCH analysis.
Parameter                     Description
Sample Injection Size         150 µL
Carrier Gas                   He
Injector Temperature          150°C
Split Ratio Program           Splitless for 0.50 min and then split at 30:1 to sweep injector port.
Linear Velocity               30 cm/s
Column Temperature Program    50°C initial temperature is held for 0.10 min. Temperature then increases at 40°C/min to 170°C. 170°C is held for 1.00 min. Temperature then decreases at 60°C/min to 120°C. 120°C final temperature is held for 8.07 min.
Total Program Runtime         13.00 min

Table 7.6. NCI-MS method used for PMCH analysis.
Parameter                     Description
Interface Temperature         185°C
Ion Source Temperature        195°C
Threshold                     260
MS Scan Mode                  SIM
SIM Target m/z                350
Scan Time                     11.51 min to 13.00 min with an event time of 0.42 sec
Total Program Runtime         13.00 min

The added complexity of the splitless NCI-GCMS method compared to the GC-ECD method was necessary to achieve three primary goals: focusing of the PMCH aliquot within the column, separating the PMCH peak from contaminants with similar molecular weights, and removing background noise through selected ion monitoring (SIM). A chromatogram produced by this method is displayed in Figure 7.13.
7.5.3 Phase II SF6 Results

The SF6 vacutainer samples were analyzed using a GC-ECD for Phase II. PMCH was analyzed using a slightly modified approach that will be discussed in later sections. The GC was installed with a 30 m porous layer open tubular (PLOT) column coated with sodium sulfate deactivated alumina oxide. The column has an internal diameter (ID) of 0.25 mm and a film thickness of 5 µm. Table 7.9 displays the method parameters used to analyze SF6.

Table 7.9. GC analytical method for Phase II.
Parameter                        Description
Sample Injection Size            100 µL
Carrier Gas                      He
Injector Temperature             150°C
Split Ratio                      30:1
Linear Velocity                  30 cm/s
Isothermal Column Temperature    50°C
Detector Temperature             200°C
Make-up Flow                     N2 at 30 mL/min
Total Program Runtime            2.5 min

In order to determine the steady-state concentrations at the sampling points, a calibration curve was generated within the range of the data. The calibration curve is represented by the equation A = 596.52·C − 7558.77, where A is the GC area count response in V·min and C is the SF6 concentration in PPBV. This equation was interpolated from a set of laboratory mixed standards with a regression (R²) value of 0.99. The calibration curve is displayed in Figure 7.1.
The volumetric flow of SF6 at RP1 was determined using the ambient atmospheric conditions recorded at SP1 with a 200 SCCM mass flow (SF6) assuming ideal gas behavior.

7.5.4 Phase II PMCH Results

The average temperature recorded at SP1 was used in Equation ( 7.1 ) to determine the average release rate of PMCH. Based on an average temperature of 21.6°C (70.8°F) and a PPRV plug thickness of 0.635 cm (0.25 in), the release source bundle was expected to produce an average release rate of 6.20·10⁻⁵ g/min. Using the surveyed quantity of 1,900 m3/min (67,100 CFM) from the tailgate, the steady-state concentration of PMCH at SP9 is expected to be 3.60 PPTV. The vacutainer samples were analyzed using an NCI-GCMS with the same method outlined in Section 7.5.2.

The results of the NCI-GCMS analysis of PMCH are displayed in Table 7.12 and Figure 7.21. The SP8 tube bundle data are not presented because the concentration of PMCH was below the LOD. The following data was produced using the same calibration curve outlined in Section 7.5.2.

Table 7.12. Phase II PMCH results.
Sample Number    Time (HH:MM)    Area Counts    PMCH Concentration (PPTV)
SP9-7            2:30            992.3          13.22
SP9-8            3:00            932.0          12.37
SP9-9            3:30            591.3          7.63
SP9-10           4:00            858.7          11.35
SP9-11           4:30            515.0          6.56
SP9-12           5:00            468.3          5.91
SP9-13           5:30            345.0          4.19
SP9-14           6:00            356.0          4.34
SP9-15           6:30            314.3          3.76
SP9-16           7:00            306.7          3.66
SP9-17           7:30            282.0          3.31
SP9-18           8:00            340.0          4.12
SP9-19           8:30            1241.0         16.68
SP9-20           9:00            1531.7         20.73
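The expected steady-state concentrations quoted above follow from a simple dilution calculation; a minimal sketch is shown below, assuming ideal gas behavior and using the Phase II figures reported in the text (the 63,400 Pa absolute barometric pressure is taken from Section 7.6.2).

```python
R_UNIVERSAL = 8.314          # J/(mol*K)
MOLAR_MASS_PMCH = 350.0      # g/mol (reported in Section 7.2)

def expected_pptv(release_g_per_min: float, airflow_m3_per_min: float,
                  temp_c: float, pressure_pa: float) -> float:
    """Steady-state tracer concentration (PPTV) from a constant mass release
    diluted into a known airflow, assuming ideal gas behavior."""
    mol_per_min = release_g_per_min / MOLAR_MASS_PMCH
    vapor_m3_per_min = mol_per_min * R_UNIVERSAL * (temp_c + 273.15) / pressure_pa
    return vapor_m3_per_min / airflow_m3_per_min * 1e12

# Phase II figures reported in the text: 6.20e-5 g/min release, 1,900 m3/min
# tailgate airflow, 21.6 degC, and 63,400 Pa absolute barometric pressure
print(f"{expected_pptv(6.20e-5, 1900.0, 21.6, 63400.0):.2f} PPTV")  # ~3.6 PPTV
```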
7.6 Discussion and Conclusions

7.6.1 SF6 Tracer Characterization

The front of the SF6 stream was immediately apparent at the majority of the sampling points, for both Phases I and II, which can be seen in Figure 7.10 and Figure 7.17 respectively. The only exceptions were SP7, which was located in an isolated airway, SP5 and SP6, which were located at the bleeder taps, and the SP8 tube bundles. A discussion of these points is provided later in this section. At the sampling points showing immediate tracer presence, SP1 – SP4 and SP9, the concentration distribution shows that the SF6 required approximately 30 min to become evenly distributed in the Phase I and II flow streams. This rapid equilibration time demonstrates that the airflow was fully turbulent and traversing the connecting distances quickly. Although air quantities could not be derived from the tracer concentrations at any point other than SP1, the presence of SF6 coupled with the uniform concentrations at the sampling points demonstrates that the tracer samples were taken in fully developed turbulent flows for both phases of the study. In addition, no significant differences were found between the flow patterns of Phases I and II at the locations represented by SP1 – SP4 and SP9.

Using the concentration of SF6 at SP1 and the volumetric flow rate of SF6 from RP1, the airflow quantity at SP1 is calculated to be 55 kcfm for Phase I and 130 kcfm for Phase II. For Phase I, the calculated value was approximately 26 kcfm less than the surveyed airflow quantity of 81 kcfm. For Phase II, the calculated value was approximately 46 kcfm more than the surveyed airflow quantity of 84 kcfm. This discrepancy between the surveyed value and the calculated value of both phases may have resulted from one of three primary reasons: the barometric pressure and temperature readings did not have adequate accuracy, the ventilation survey data contained a discrepancy, or the vacutainer samples were taken at a point in the entry's cross-section with a particularly low SF6 concentration. Based on the proximity of the sampling point to the air direction change, the physical air sample was most likely taken in an area of layered SF6 concentrations in the entry for both phases. This conclusion can be inferred from the erratic concentration changes displayed in Figure 7.10 and Figure 7.17 for Phases I and II respectively.
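The airflow back-calculation described above is a straightforward dilution ratio; the sketch below shows the form of that calculation, assuming ideal gas scaling of the 200 SCCM release to local conditions. The local temperature and pressure values are placeholders, not the values recorded during the study.

```python
def sccm_to_actual_cfm(sccm: float, temp_c: float, pressure_psia: float) -> float:
    """Convert a standard mass flow (SCCM, referenced to 25 degC and 14.696 psia)
    to an actual volumetric flow in cfm at local conditions, assuming ideal gas behavior."""
    std_temp_k, std_pressure_psia = 298.15, 14.696
    actual_ccm = sccm * ((temp_c + 273.15) / std_temp_k) * (std_pressure_psia / pressure_psia)
    return actual_ccm / 28316.8  # cm^3 -> ft^3

# Placeholder local conditions (not the surveyed values)
sf6_cfm = sccm_to_actual_cfm(200.0, temp_c=18.0, pressure_psia=13.2)

# Airflow follows from the measured steady-state SF6 concentration (PPBV)
measured_ppbv = 159.09  # SP1 value from Table 7.3
airflow_cfm = sf6_cfm / (measured_ppbv * 1e-9)
print(f"SF6 flow: {sf6_cfm:.4f} cfm, implied airflow: {airflow_cfm / 1000:.0f} kcfm")
```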
The SP1 flow pathway separates into the SP2, SP3, and SP4 flow pathways. From SP1, the average tracer concentration reflects a volumetric SF6 flow of 0.013 cfm for Phase I and 0.007 cfm for Phase II being delivered from SP1 to the aforementioned branches. These volumetric flows were computed assuming ideal gas behavior of the 200 SCCM mass flow at the recorded environmental conditions. Based on flow conservation, the volumetric tracer flow from SP1 should equal the sum of the volumetric tracer flows from SP2, SP3, and SP4. However, the total flow of the three branches did not balance in terms of quantity for either phase. The Phase I flow is approximately 0.004 cfm greater than the measured tracer flow at SP1. Similarly, the Phase II flow is approximately 0.001 cfm greater than the measured tracer flow at SP1. In order to determine the tracer flow at SP4, the airflow at SP9 was used due to the absence of survey data on the longwall face. These discrepancies equate to an error of 33% and 16% for Phases I and II respectively. The flow discrepancy indicates that SF6 was added to one of the airflow streams.

The non-conservation of tracer quantity may be the result of leakage from the No. 2 entry to the No. 3 entry that occurred between SP1 and SP3, of an unknown recirculating event at one of the sampling points, or of an error in sampling. Incidentally, a stopping door near SP1 was left open during both studies. This door was located between SP2 and SP3. A short circuit between the No. 2 and No. 3 entries was likely created as a result of the open door. Some of the tracer gas from RP1 would have been diverted into the No. 3 entry prior to reaching SP2. Leakage is thus the most likely explanation of this abnormality. The loss of tracer to SP3 in the airflow stream would have occurred prior to reaching SP2, thus causing an artificially inflated value in the sum of tracer flows from SP2, SP3, and SP4. However, given the low magnitude of the tracer concentration, these errors may not be significant for either phase.

The lack of tracer presence at SP7 for Phases I and II was expected and confirms that this location did not interact significantly with any of the airways directly downwind from the release point. SP7 was the only sampling location at which SF6 was not detected. Both SP5 and SP6 showed a gradual increase of tracer gas over time. These points were located at the headgate and tailgate bleeder taps respectively. Air is delivered to these taps through the edges of the gob (i.e., the fringe) from the SP3 intake branch. As can be seen in Figure 7.11, SP5 and SP6 required approximately four hours and six hours respectively to reach an equilibrium tracer concentration for Phase I.
The time required to reach a steady-state concentration was not captured for Phase II due to logistical complications. Based on the trend in Figure 7.18, SP5 and SP6 would have required more than seven hours to reach an equilibrium tracer concentration for Phase II. The slow increase of tracer concentration over time reflects the low flow, high resistance design of the fringe ventilation system. The arrival time of the tracer front was approximately 1.5 hours longer for Phase II than for Phase I. This comparison is displayed in Figure 7.20. The longer travel time is expected due to the fact that the longwall face had advanced further down the panel, thus resulting in a longer distance from the bleeders.

The samples collected from the Phase I 300 ft, 400 ft, and 500 ft tube bundles at SP8 showed the presence of SF6, but this data is inconclusive. The SF6 concentrations from the tube bundle are presented in Figure 7.12. The random appearance of SF6 at various magnitudes in the tube bundles suggests the presence of a leak in the tube system. This leak may have been in the tube itself, in the sampling system, or a combination of both. Any one of these three scenarios would have compromised the tracer samples, thus contaminating these samples. As such, the results of this study could not identify any inter-zonal interaction between the face ventilation and the gob for the Phase I study.

The samples from the Phase II 200 ft, 300 ft, and 400 ft tailgate tube bundles at SP8 are displayed in Figure 7.19. The SF6 concentrations from the tube bundles do indicate some interaction between the longwall face ventilation and the gob at the 200 ft (SP8A) and 300 ft (SP8B) tube bundles. Although Figure 7.19 shows that a low concentration of SF6 was immediately present at each of the tube bundles, a leak in the tube system was not likely given the low SF6 magnitude and the consistency of the concentration over time. This behavior was reflected by SP8B and SP8C. The double-sided needles used to take the tube bundle samples have a small amount of headspace present in their internal volume. The consistent, low concentration of SF6 can be attributed to the internal volume of the needle and can be considered as zero SF6 presence.
The SP8A tube bundle showed a gradual increase of SF6 over time, with the tracer first appearing between two and three hours after the initial release. The increase in tracer concentration and apparent achievement of a steady-state concentration suggests that the face ventilation did have some flow 61 m (200 ft) into the gob. The lack of significant SF6 presence at SP8B and SP8C suggests that the interaction with the gob is restricted to a boundary located between SP8A and SP8B. SF6 does not penetrate deeply enough into the gob to affect SP8C located 122 m (400 ft) into the gob.

A sharp increase in tracer concentration was found at SP8B at the end of the sampling period. This sudden buildup of SF6 may have been caused by one of two primary events: a collapse in the gob that opened a free volume between SP8A and SP8B, thus producing a direct flow path between the inlets of these two tube bundles, or a failure in the tube bundle/sampling system. This sharp rise in tracer concentration coincided with the start of the longwall for the morning shift. As such, either scenario was probable.

The SF6 portion of the tracer study for both phases was free of major errors as reflected by the precision of the data and analysis technique. The qualitative tracer gas data showed that the ventilation flow streams did travel from RP1 to the expected branches. The lack of detectable tracer presence at SP7 confirmed that this point was located in an isolated airway with no communication from the flow streams downwind of RP1. The time required to achieve a steady-state tracer concentration at the sampling points indicated that SP1, SP2, SP3, SP4, and SP9 were located in open, unobstructed airways while SP5 and SP6 were located in areas with low, restricted airflow. Some interaction between the gob and the longwall face ventilation was found for Phase II but not for Phase I. This interaction during Phase II was derived from the SP8 tube bundle system.

7.6.2 PMCH PPRV Evaluation

The PPRVs successfully released a detectable level of PMCH for both Phases I and II. The PMCH analysis results are displayed in Section 7.5.2 and Section 7.5.4. The Phase I results did not provide enough useable quantitative data for a thorough evaluation of the PPRVs.
The majority of the NCI-GCMS data did not provide a sufficient signal to noise ratio for quantification. As a result, only the quantifiable vacutainer samples are presented. However, the presence of PMCH was detected in all of the SP9 tailgate samples. The positive presence of PMCH at an unquantifiable concentration suggests that the PPRVs were located in an area of the longwall face that did not allow for homogeneous mixing of the tracer, that the SP9 samples had been compromised, or a combination of both. Either event may have occurred given the placement of the sources at a relatively complex area adjacent to the belt conveyor combined with the fact that the samples were not analyzed for several weeks due to equipment complications. Despite the lack of quantitative data provided by Phase I, the presence of PMCH does show that the PPRVs were releasing PMCH.

The Phase II results did successfully provide quantitative data derived from the PPRVs. The NCI-GCMS analysis results are presented in Table 7.12. The Phase II PMCH data is also presented graphically in Figure 7.21. The distribution of PMCH over time demonstrates that the concentration did reach steady state prior to the activation of the longwall shearer. Figure 7.21 shows that PMCH required approximately three hours to reach an equilibrium concentration. This increased equilibration time as compared to SF6 may have been due to the PPRVs acclimating to the new environmental conditions, the high molecular weight of PMCH, or the manner in which the PPRVs were deployed. The PPRVs were transported underground approximately 12 hours prior to the study to allow for temperature acclimation. However, the PPRVs were not placed at the release point. As a result, the temperatures between the two locations may have been sufficiently different to require additional time to reach a steady release rate. In contrast to Phase I, the PPRVs were hung from one of the rear hydraulic arms of the longwall shield for Phase II. The PPRVs were set up in this manner because the Phase II study period intersected with the start of the longwall shearer. As a result, the PPRVs had to be situated in a manner that allowed the bundle to move with the advance of the shield. The rear of the longwall shield did not have a large amount of airflow at the time of the release. The inadequate flow coupled with the high molecular weight of PMCH may have also caused the higher relative equilibration time.
The large spike in tracer concentration occurred just after the start of the longwall shearer. The increase may have been caused by a rise in temperature, a change in ventilation flow, or the opening of a cavity that had been accumulating PMCH prior to the movement of the shield. Due to the lack of observations at the PMCH release point, the actual cause for the sharp increase is unknown. Despite this unexpected occurrence, the PMCH concentration did achieve and maintain steady state for approximately three hours.

Based on the surveyed air quantity of 1,900 m3/min (67,000 cfm), an average temperature of 21.6°C (70.8°F), and an average absolute barometric pressure of 63,400 Pa, the steady-state PMCH concentration was expected to be 3.60 PPTV. The average steady-state concentration measured from the SP9 vacutainer samples was 3.90 PPTV. The difference between the expected and measured concentration equates to an error of 8%. At the PPTV concentration level, an 8% deviation effectively denotes a zero difference between the expected and observed values. Using the detected steady-state concentration of PMCH, the air quantity is calculated to be 1,755 m3/min (62,000 CFM), which is validated by the results of the ventilation survey. As such, the PPRVs not only performed according to their design specifications but also supported the ability of Equation ( 7.1 ) to predict the release rate of the PPRV.

The results of Phase II suggest that the PPRVs will perform as expected in field conditions when placed in an area of adequate turbulent flow. The results of this study also demonstrate the potential of PMCH to supplement SF6 in tracer gas studies. The location of the SF6 release point prevented any useful quantitative data from being derived from the tailgate. The location of the PPRVs remediated this problem by providing a secondary release point. Additionally, since SF6 and PMCH do not interfere with each other when using a GC, these two tracers can be simultaneously analyzed and sampled using the same medium.

Given the simplicity of the PPRV, several benefits were realized during the execution of the study. Two of the most prominent advantages were rapid setup and the potential for deployment in inaccessible areas, such as the gob. These two advantages were afforded by the simplicity of the PPRV coupled with its passive release mechanism. The PPRV is thus shown to be a feasible system within the parameters of this study for the release of PMCH in underground mine ventilation studies.
Chapter 8: Conclusions and Future Work

The study presented in this paper sought to develop a PPRV system that was not only flexible but also allowed for the controlled release of PMCH in an underground mine environment. In order to complete this objective, extensive laboratory and field studies were designed to bring the PPRV from conception to reality. The PPRV development and evaluation study successfully developed a PMCH calibration curve preparation technique for GC, completed an extensive evaluation of potential PPRV designs, deployed a potential PPRV in a small-scale turbulent environment, and completed a field study that validated the final PPRV system. A detailed explanation of each major topic was provided in the preceding chapters.

The multi-dilution calibration curve technique was found to balance level of difficulty with precision, thereby producing a viable method for preparing PMCH standards. This calibration curve technique proved to be repeatable in the PPRV evaluation studies that followed. The preliminary PPRV designs were found to produce a reliable PMCH release rate and to be free of manufacturing defects. This preliminary study found that plug thickness and temperature significantly affected the release rate. From this data, a comprehensive strip-plot experiment was designed to derive an equation to calculate PMCH release rate as a function of plug thickness and ambient temperature. The final PPRV development study successfully derived an equation to predict release rate as a function of temperature and plug thickness. Additionally, this study confirmed that release rate was independent of both barometric pressure and internal PMCH volume. The PPRV will thus perform consistently across a wide range of elevations and maintain its release rate throughout the majority of its expected lifetime.

In a parallel study, the newly developed PPRVs were evaluated in a controlled turbulent environment. The results of the turbulence experiment showed that the PPRVs were highly precise over a range of flow quantities. Air flows within the transitional characteristic zone were found to cause the PPRV to behave unreliably. Since transitional type flows contain both laminar and turbulent elements, this erratic behavior agrees with the fact that PMCH has a high layering potential from its high molecular weight. Although laminar flows were not included in the evaluation, poor PPRV performance is also expected in the laminar zone due to the inadequate mixing of PMCH in the flow stream.
The final study presented in this paper evaluated the performance of the PPRV in a Midwestern underground longwall mine. In this field study, the PPRVs not only performed according to their design specifications but also validated the equation derived in the final development study to predict the release rate of the PPRVs. This study also represented the first time that a dual PMCH–SF6 release has been conducted in an underground mine. The execution of the field study proved to be relatively straightforward when using the PPRVs as compared to SF6. The simplicity of the PPRV allowed rapid setup of the system. The PPRVs' passive release mechanism also allowed the potential for deployment in inaccessible areas of the mine such as the gob. The results suggest that the PPRVs will perform as expected in an underground mine if the sources are given sufficient time to equilibrate with the environmental conditions. The release sources must also be placed in an area of adequate turbulent flow and be deployed in adequate numbers to satisfy the LOD. The operating principles of the PPRV show great potential for adaptation to release other similar perfluorocarbon tracers in underground mines.

Based on the combined results of this overall study, the PMCH PPRV developed in the preceding chapters was found to be a feasible release system for underground tracer gas studies. This feasibility is, however, limited to the parameters of each of the aforementioned studies. The PPRVs can be enhanced with additional studies designed to develop standard operating procedures (SOP), to examine the effects of release source diameter and plug compression, to examine the effects of sub-ambient temperatures, to investigate the impacts of different underground environments on the PPRV, to explore different techniques for dual tracer releases, as well as to improve trace-level analytical techniques.

The current iteration of the PPRVs would benefit from a comprehensive development of SOPs for tracer studies. The study presented in this paper introduced some recommendations for the use of PPRVs, such as suspending the PPRVs in turbulent flow. However, these suggestions were anecdotal in nature and were not formally part of the overall study. A formal study for SOP development would examine the impact of a variety of variables including, but not limited to, different flow path geometries, proximity to ventilation controls, and underground release locations on the performance of the PPRV.
locations on the performance of the PPRV. The product of such a study would ultimately be a user's manual for the PPRV along with recommendations on the ideal use of the PPRV based on underground mining conditions. Given the deployment flexibility of the PPRV, the available release options are virtually limitless. As a result, a formal SOP study may benefit from being separated into two parts: a broad-spectrum exploration of release scenarios to determine significant factors, followed by an in-depth study of those significant factors. Although numerous experimental designs are available for such a study, any future experiments should at least include performance as a function of PPRV placement at different locations in an entry's cross-section, of PPRV placement in different mining areas such as various longwall faces, continuous miner sections, smooth entries, rough entries, overcasts, etc., and of PPRV proximity to abrupt flow path changes such as a turn, a contraction, an expansion, etc. The interactions of the different placement options should also be included for a more robust study. SOP development should include a discussion regarding the limitations of the PPRVs based on the LOD of various analytical techniques, the flow quantity, and the type of flow. Additionally, a cost-benefit analysis of the PPRV vs. the traditional SF6 release should be given in order to provide recommendations for when the PPRV may be used in place of SF6 and when the PPRV may be used in conjunction with SF6.

One of the limitations of the PPRV is its relatively low release rate when compared to traditional release systems. In order to provide an adequate PMCH concentration at typical underground flow quantities, the PPRV introduced in this study should either be deployed in parallel with other PPRVs or be analyzed using a technique with a sufficient LOD. An increase in PPRV diameter may, however, be another manner in which the PMCH release can be increased. Future studies should examine the effect of this design variable, as well as its interaction with plug thickness and temperature, on the PPRV's release rate. Increasing the PPRV diameter would result in a corresponding increase in the exposed surface area of the plug, and greater surface area at a given plug thickness is expected to increase the release rate because of the higher volumetric flow potential. In conjunction with PPRV diameter, the range of temperatures can be expanded to include sub-ambient temperatures to interpolate a wider range of operating conditions. Due to the unknown effect of diameter on the PMCH release rate, close attention should be given to the
data produced by PPRVs with a high diameter to plug thickness ratio. As the release rate increases, a critical point will be reached at which the loss of PMCH through the plug exceeds the rate at which PMCH vaporizes. If this critical point is reached, a vapor pressure equilibrium cannot be maintained, resulting in an unsteady release rate.

Further development of the PPRV would benefit from an investigation regarding the impact of different environmental conditions on the integrity of the PPRV. The study presented in this paper showed that the PPRV's release rate would remain consistent throughout at least 90% of the PPRV's estimated lifetime. This long-term viability can only be maintained if the integrity of both the silicone plug and the aluminum shell is not breached. The environmental conditions in underground mines can be extremely harsh, with dripping water, suspended dust, and nebulized industrial chemicals. A formal study on the long-term effects of such environmental conditions on the overall integrity of the PPRV would provide further insight into the operating constraints of the PPRV.

The field evaluation of the PPRV presented in this paper represents the first time that both PMCH and SF6 have been simultaneously released in an underground mine. As a result, the application of multiple tracers as a mine ventilation characterization tool has not yet been extensively explored. The multi-zone analysis ability afforded by multiple tracers has great potential to enhance the analysis of mine ventilation systems. In addition to the steady-state dual release presented in this paper, numerous other quantitation techniques also exist, such as the pulse release and tracer decay methods. Although these additional quantitation methods have already been extensively studied in HVAC applications, further research is needed to translate these models for use in underground mines.

The detection of PMCH and SF6 in this study was accomplished by sampling the tracer remotely and analyzing the sample later using a GC. This manner of tracer gas analysis has been used numerous times in analog studies and has a well-established protocol. However, this method has two main drawbacks: an increased probability of sample contamination and delayed production of results. Although vacutainers have shown high sample stability, contamination can still occur through a reduction of stopper integrity caused by exposure to ultraviolet light and
Resources 2015, 4 940 Keywords: Mining; Coal Dust; Respirable; Occupational Health; Particulate Composition; Dust Characterization; SEM-EDX 1. Introduction A key consideration for responsible development of mineral and energy resources is the well-being of workers. Respirable dust in mining environments represents a serious concern for occupational health. Coal mine dust, in particular, has long been linked to various lung diseases like coal workers pneumoconiosis (CWP) and silicosis [1,2]. Implementation of dust regulations in the US beginning in the late 1960s has significantly decreased overall incidence of such diseases over the past several decades [2–4], but analysis of long-term surveillance data appears to show a recent and unexpected uptick in disease amongst some miners in particular geographic regions like Central Appalachia [3,5–7]. Such trends are alarming considering that most coal mines currently operate below regulatory limits on respirable dust (i.e., particulates with aerodynamic diameter <10 μm), which generally pertain to total mass concentration and crystalline silica content. These trends may suggest that other exposure factors, including specific dust characteristics such as particle composition, size, and shape distributions, may be important in the occupational health context. While MSHA’s new dust rule issued in April 2014 targets further reductions in respirable dust concentrations, it is unclear if or how the lowered limits will affect health outcomes for miners in locations where causal factors for disease are not well understood. The “new dust rule”' was first proposed by the US Mine Safety and Health Administration (MSHA) on 19 October 2010 and was finally issued under the title Lowering Miners’ Exposure to Respirable Coal Mine Dust Including Continuous Personal Dust Monitors on 23 April 2014 [8]. The rule makes a number of changes to previous regulations on dust limits and sampling in underground coal mines, and specifically will reduce the permissible respirable dust concentration from 2.0 to 1.5 mg/m3. It will also require use of continuous personal dust monitors (CPDMs) by mine operators, and require that citations be issued in any instances where MSHA-collected samples for single, full shifts exceed the new 1.5 mg/m3 limit. Indeed, more comprehensive characterization of coal mine dust is necessary to fully explore these factors. Currently, a standard methodology for comprehensive, particle-level characterization of coal mine dusts does not exist. This paper describes such a methodology, which uses scanning electron microscopy equipped with energy dispersive X-ray (SEM-EDX). Although not commonly applied to respirable mine dust samples, electron microscopy with EDX has proven useful in a variety of environmental and mineral processing/metallurgical applications for fine particulate analysis. It is increasingly being used to specifically understand chemistry and morphology of airborne particulates that represent health hazards—in occupational or ambient environments. For example, methodologies for analysis of nano-sized particulates in around active welding have recently been described [9,10]. A major objective of the method development included optimization of manual analytical efforts— i.e., minimizing the required SEM user time for each sample, while maximizing the range of valuable raw data types to be collected. The developed method includes particle-level analysis of composition, size and shape, from which mass and volume can also be estimated. 
Construction of an automated
Resources 2015, 4 941 spreadsheet program for computational analysis is also described here, as well as preliminary verification of the dust characterization method using three samples collected in the field. 2. Description of Developed Dust Characterization Method The following sections provide a detailed discussion of the particle characteristics that are included in the developed dust characterization method, as well as a description of procedures used for dust sample collection and preparation, and selection and analysis of specific particles by SEM-EDX. Additionally, computation via an automated analysis program is described for easy analysis of raw data inputs. 2.1. Particle Characteristics of Interest To fully characterize particles, specific properties are of interest. Particle composition, dimensions, and shape are values which are determined with SEM-EDX, and volume and mass are calculated as a result of the analysis. These particle characteristics provide an abundance of data and information regarding respirable dust samples and aid in the comprehensive analysis of coal mine dust. 2.1.1. Composition Classification of the dust particles is based on their EDX spectra, which provides a graphical representation of the elements associated with the particle surface. The spectra are generated by detection of X-ray emissions from the particle, caused by interaction of the SEM electron beam with its surface; each element on the particle surface produces a characteristic X-ray when excited by the impinging electrons. Each peak of a spectrum, thus, represents a specific element, and relationships between peak heights can provide some indication of the elemental composition (i.e., minerals can be identified by their atomic stoichiometry). For relatively small particles, such as respirable dust particulates, the electrons may penetrate deep enough into the particle (e.g., to a depth of about 1 μm) to provide relatively good information about its overall composition. However, EDX analysis on small particles is also subject to interference from the sample background (i.e., if electrons penetrate completely through the particle or the electron beam is sufficiently close to the particle edge). For the developed dust characterization method, considerable effort was aimed at establishing a set of pre-determined compositional categories into which most particles in a coal mine dust sample would be expected to fit. As a preliminary effort, lab-generated dust samples were collected using run-of-mine (ROM) coal, consisting of coal and rock (i.e., primarily shale and sandstone) taken from an underground coal mine in Central Appalachia. The mine is considered “low seam” based on its average coal seam thickness of 24 inches. With an average extraction height of 40 inches, the operation is, thus, cutting about 16 inches of roof and floor rock during coal extraction. Dust was generated under a fume hood by pulverizing a sample split from the ROM multiple times. For each dust sample collected, a pump was operated at a flow rate of 5 L/min to collect dust onto a 37 mm diameter polycarbonate (PC) filter (0.4 μm pore size), which was positioned near the top of the fume hood, just below the suction fan; this arrangement was deemed appropriate to collect relatively fine dust over short time periods (i.e., 5–10 min) without the use of a cyclone or other size classifier. A cyclone
Resources 2015, 4 942 was not used to collect the laboratory samples, since the primary objectives were simply determination of the ROM mineralogy (i.e., such that appropriate particle composition categories could be identified), and development of standard procedures to be used during the SEM-EDX analysis. For more in-depth investigation of mineralogy, dust samples were also generated by pulverizing approximately pure rock and pure coal sub-samples hand-picked from the ROM. An FEI Quanta 600 FEG environmental scanning electron microscope (ESEM) (FEI Company: Hillsboro, OR, USA) equipped with a Bruker Quantax 400 EDX spectroscope (Bruker Corporation: Ewing, NJ, USA) was used. In conjunction with the SEM-EDX hardware, the FEI image analysis software and Esprit EDX software provided imaging and graphical spectra results. The ESEM was operated under high vacuum at 15 kV with an ideal resolution and a working distance of approximately 12–13 mm, which was observed to be optimal for this particular scope and application. To prepare collected dust samples for SEM-EDX analysis, filters were removed with clean tweezers, and on a clean, hard surface, a 9 mm diameter trephine (i.e., a cylindrical blade) and a clean razorblade were used to extract the center of the filter. The center sub-section was then attached to an SEM pin-stub mount with double-sided copper tape and sputter coated with gold/palladium (Au/Pd) to generate a thickness of about 10–20 nm (i.e., 60 s sputtering time) and create the conductive surface layer needed for electron microscopy analysis. Based on detailed analysis of the lab-generated dust samples using a number of EDX parameters, it was determined that twelve elemental peaks should be included in the developed coal mine dust characterization methodology: carbon, oxygen, sodium, magnesium, aluminum, silicon, sulfur, potassium, calcium, titanium, iron, and copper. Further, it was determined that most particles could be classified into six defined categories based on the peak height ratios: “carbonaceous”, “mixed carbonaceous”, “alumino-silicate”, “quartz”, “carbonate”, and “heavy mineral”. Although the ROM dust samples did not contain significant carbonate particles, carbonate particles are expected to be collected in field samples due to “rock dusting” programs in underground coal mines (i.e., applying pulverized inert minerals, such as limestone or dolomite, to coal and rock surfaces underground in order to reduce explosion propagation), For the relatively few particles that could not be classified into one of these six categories, a seventh category “other” was created. Table 1 provides examples of typical minerals associated with coal mine dust that fall into each of these categories, and defines the rules developed for compositional classification. These rules are fundamentally based on atomic abundance (i.e., atomic percentage equivalencies of primary minerals in each category), which are correlated to the real-time observed peak height ratios (i.e., Cps/eV) on EDX spectra of specific elements for each category. For the purpose of expedient decision making during SEM-EDX use, the observed peak heights are the main parameters used for characterization. Each of the six defined categories has one or more dominant elements (DEs), which are associated with the mineral(s) represented that category. For a particle to be classified into a given category, the observed DE spectral peak heights must exceed the minimums shown in Table 1. 
It should be noted that the atomic percentage equivalents shown in Table 1 are operationally defined (i.e., based on significant experience of the authors and preliminary analysis of many known particle compositions), and are not representative of the stoichiometry expected for the mineral(s) in each category. This is because significant interference from the filter background cannot be avoided for most particles in the respirable size range.
Resources 2015, 4 943 Table 1. Description of dust categories for particle classification by composition. Parameters for Real Time Classification Dust Category Example Mineralogy Classification (Atomic % (Raw Peak Heights Equivalents) (Cps/eV)) Carbon ≥ 70% Carbon ≥ 80 Carbonaceous Coal Oxygen ≤ 30% Oxygen ≤ 20 4% > Silicon ≥ 2% 20 > Silicon ≥ 10 Very thin clay minerals, Mixed 4% > Aluminum ≥ 2% 20 > Aluminum ≥ 10 or clay minerals with Carbonaceous Carbon > 70% Carbon ≥ 80 some carbon content Oxygen < 20% Oxygen ≤ 20 Silicon ≥ 4% Silicon ≥ 20 Alumino-silicate Clay minerals, feldspars Aluminum ≥ 3% Aluminum ≥ 20 Oxygen > 20% Oxygen > 20 Silicon ≥ 5% Silicon ≥ 20 Quartz Crystalline silica Oxygen > 20% Oxygen > 20 Calcium/Magnesium ≥ 5% Calcium/Magnesium ≥ 20 Carbonate Calcite, dolomite Oxygen > 20% Oxygen > 20 Carbon < 70% Carbon < 80 Iron/Titanium/Aluminum ≥ 5% Iron/Titanium/Aluminum ≥ 20 Heavy Mineral Pyrite, titanium oxides Oxygen > 20% Oxygen > 20 Other Diesel particulates, etc. Does not fit any of the above Does not fit any of the above Note: DE-Dominant element(s) are italicized for each defined category. For particles <1.5 μm in long dimension, DE content can be up to 50% less than the values noted in Table 1 for all defined groups with the exception of “carbonaceous”. It was found that the filter media increasingly influences the spectra of smaller particles, with carbon content increasing and DE content decreasing (see below for details). Indeed, it is well established that particles in this range often produce spectra that are influenced by electron penetration depth and/or electron scattering [11]. Electron penetration depth is generally defined as the depth at which the electron beam can penetrate the sample material. Thus, particles that are very small or thin may produce X-ray spectra that are greatly affected by filter background, and since the developed methodology for dust characterization utilizes PC filters, small particles or those with significant penetration depth are generally observed to exhibit apparently high carbon and oxygen abundances. For example, although crystalline silica particles (SiO ) should exhibit a silicon and oxygen 2 atomic percentages of roughly 47% to 53% based on stoichiometry, a respirable-sized particle on a PC background may show silicon and oxygen at 10% and 30%—with the balance being attributed carbon. Given the particle sizes in question, it is unlikely that poor liberation between materials (e.g., quartz particles ingrained in carbonaceous dust) is playing a significant role. To further illustrate, Figure 1 shows the spectrum for a PC filter, which has atomic percentage equivalencies of approximately 85% carbon and 15% oxygen, and Figure 2 shows spectra and actual SEM images of typical “carbonaceous” and “alumino-silicate” particles. The spectra of alumino-silicates should not inherently show high abundances of carbon, but the carbon peak is observed to be very high as an artifact of PC filter interference. The phenomenon of increasing carbon content with decreasing particle size is applicable for all defined dust categories.
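To make the classification rules in Table 1 concrete, the sketch below shows how a single particle might be assigned to a compositional category from its raw EDX peak heights (Cps/eV). This is an illustration only: the element dictionary keys, the order in which the rules are evaluated, and the way the small-particle allowance for particles with L < 1.5 μm is applied are assumptions made for this example rather than details prescribed by the method.

```python
# Illustrative sketch: assign one particle to a compositional category from raw
# EDX peak heights (Cps/eV), loosely following the thresholds in Table 1.
# Rule order and the handling of the small-particle allowance are assumptions.

def classify_particle(peaks, long_dim_um):
    """peaks: dict of raw peak heights, e.g. {'C': 85, 'O': 15, 'Si': 5, ...}"""
    c, o = peaks.get('C', 0), peaks.get('O', 0)
    si, al = peaks.get('Si', 0), peaks.get('Al', 0)
    ca, mg = peaks.get('Ca', 0), peaks.get('Mg', 0)
    fe, ti = peaks.get('Fe', 0), peaks.get('Ti', 0)

    # Dominant-element peaks may be up to 50% lower for particles with L < 1.5 um
    # (all categories except carbonaceous), due to filter background interference.
    f = 0.5 if long_dim_um < 1.5 else 1.0

    if 10 * f <= si < 20 and 10 * f <= al < 20 and c >= 80 and o <= 20:
        return 'mixed carbonaceous'
    if c >= 80 and o <= 20:
        return 'carbonaceous'
    if si >= 20 * f and al >= 20 * f and o > 20:
        return 'alumino-silicate'
    if si >= 20 * f and o > 20:
        return 'quartz'
    if max(ca, mg) >= 20 * f and o > 20 and c < 80:
        return 'carbonate'
    if max(fe, ti, al) >= 20 * f and o > 20:
        return 'heavy mineral'
    return 'other'

# Example: a thin, clay-like particle whose carbon peak is inflated by the PC filter.
print(classify_particle({'C': 90, 'O': 18, 'Si': 12, 'Al': 11}, long_dim_um=2.0))
# -> 'mixed carbonaceous'
```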
Resources 2015, 4 944 Figure 1. Example spectrum of the PC filter media. The red peak on the left side of the spectrum is the peak associated with carbon, and the peak to the right of carbon is the oxygen peak. The small peaks between 2 and 3 keV are the peaks from the Au/Pd sputter coating, which should be present in all spectra when Au/Pd is used to coat the samples. (a) (b) Figure 2. Comparison of example spectra and images for carbonaceous (a) and alumino-silicate (b) particles at 12,500× magnification. The spectrum for the carbonaceous particle (L = 9.87 μm) has a relatively large carbon peak and a much smaller oxygen peak, while the spectrum for the alumino-silicate particle (L = 11.24 μm) has relatively large oxygen and carbon peaks, and aluminum and silicon peaks of similar height. To understand more about the particle size at which electron penetration depth may result in apparently enhanced carbon peaks, an experiment was conducted that examined quartz particles of decreasing size. To investigate, a ROM lab-generated dust sample was collected onto a PC filter. Under the SEM, the filter was scanned for quartz particles of varying sizes. Particles with a long dimension (L)
Resources 2015, 4 945 of roughly 0.7 μm, 1 μm, 1.5 μm, 2 μm, 2.5 μm, and 3.5 μm were found, and EDX spectra were observed for each. (L is simply the longest dimension visible for the particle, see e.g., [12]). Results showed that carbon peaks were higher for smaller particles; specifically, particles with L ≥ 1.5 μm had carbon peaks <80 Cps/eV and silicon peaks >50 Cps/eV, while particles with L < 1.5 μm had carbon peaks approximately 80 Cps/eV and silicon peaks approximately 20 Cps/eV. Thus, particles with L much less than 1.5 μm may have exceedingly small DE peaks; and, the rules for classifying such particles into each compositional category should make allowances for their larger carbon peak due to the probability of electron penetration and/or scatter. Understanding the carbon content of particles in the “mixed carbonaceous” category is particularly challenging. While their EDX results indicate that these particles have both alumino-silicate and carbonaceous characters, their identity and origin are not definitely known. Several possibilities exist. Most likely, particles classified as “mixed carbonaceous” are actually very thin and platy alumino-silicate particles, which are influenced ever more than other alumino-silicates by electron penetration. This prospect is supported by other recently published work by the authors [13]. Another possibility is that mixed carbonaceous particles may actually be alumino-silicates that are coated with ultrafine coal dust. Finally, it cannot be ruled out that this category could include clay mineral particles with some biogenic component, which seems possible considering the diagenesis of coal and surrounding sedimentary rock formations such as black shales. To determine the minimum carbon content that permits classification into the mixed carbonaceous category, an experiment was conducted that looked at dust particles on a copper background media; the copper tape ensured that any electron penetration would not result in an enhanced carbon peak, but rather in copper peaks. This experiment was aimed at determining if EDX spectra from “mixed carbonaceous” particles actually exhibited high carbon peaks due to their composition, or if such peaks are simply an artifact of significant electron penetration. An ROM dust sample was collected on a PC filter, and then some of the dust particles were transferred onto copper tape and prepared for SEM analysis by the usual sputter coating routine. Particles with L > 5 μm whose EDX spectra exhibited relatively high aluminum and silicon peaks were specifically studied. Upon analysis of 30 such particles, only four spectra were found to have carbon peaks >80 Cps/eV. These results indicate that, in most cases, the high carbon content in “mixed carbonaceous” particles is related to interference from the PC background. 2.1.2. Dimensions The long (L) and intermediate (I) dimensions of any particle analyzed can be determined directly from the SEM images using standard “line measurement” tools included in the SEM imaging software. I is the longest dimension perpendicular to L, which was defined above, in the same plane [12]. Following direct measurement of L and I (in μm), the short or third-dimension (S) can be estimated. Theoretically, S is the length dimension of a particle measured at a right angle to the plane in which L and I have been found; so S essentially describes particle thickness. 
Since different minerals have characteristic shapes, a unique ratio between S and I can usually be defined for a given mineral type. The unitless S:I ratio (R) is similar to the aspect ratio generally used in the field of sedimentology (e.g., see [14]). Alumino-silicate particles, for example, tend to be relatively flat with relatively small R values, whereas
quartz particles tend to be thicker with higher R values. Thus, based on the compositional classification of each dust particle by EDX and its measured I value, an S value (in μm) can be estimated by Equation (1):

S = R × I (1)

For the particle characterization methodology developed here, the R values assigned to each of the six defined compositional categories of interest are as follows: 0.6 for carbonaceous, 0.5 for mixed carbonaceous, 0.4 for alumino-silicate, 0.7 for quartz, 0.7 for carbonate, and 0.7 for heavy minerals. These constants are based on those commonly used in the field of sedimentology and extensive experience of the authors in electron microscopy analysis of mineral particulates. The mixed carbonaceous category R value is an average of the carbonaceous and alumino-silicate values since the identity of these particles is not definitively known. Dust characterized as "other" cannot be assigned an R value.

2.1.3. Shape, Volume, and Mass

A variety of shape factors can also be computed for particles, including a measure of maximum projection sphericity (Ψp), and the cross-sectional (dc) and spherical (ds) diameters. The Ψp value can be determined from the L, I, and S dimensions using Equation (2), which was derived by Sneed and Folk (1958). Ψp is a dimensionless quantity and values range between 0 and 1; values that approach 1 are associated with particle shapes that are increasingly spherical (i.e., L, I, and S are very similar), whereas values that approach zero are associated with particle shapes that exhibit relatively small S dimensions as compared to L and I [12]. The dc and ds values (in μm) can be computed from Equations (3) and (4), respectively. The cross-sectional diameter is the only calculated value based entirely on measured properties of particle size and is only accurate if the particle is a perfect sphere. The spherical diameter is more commonly used and is considered a better approximation of the particle size in aerodynamic applications [15]. Further, the spherical volume (V) can also be computed (in μm³) from Equation (5). By assigning approximate density values (ρ) to each compositional category, the particle masses (m) can additionally be estimated (in μg) using Equation (6). Based on average densities for the primary minerals expected in each of the six defined compositional categories (i.e., see [16]), the following ρ values (in g/cm³) have been assigned: 1.4 for carbonaceous, 2.0 for mixed carbonaceous, 2.5 for alumino-silicate, 2.6 for quartz, 2.7 for carbonate, and 4.0 for heavy minerals. The mixed carbonaceous class density is an average of the carbonaceous and alumino-silicate class densities.

Ψp = (S² / (L × I))^(1/3) (2)
dc = (L × I) / 2 (3)
ds = Ψp × L (4)
V = (4/3) × π × (ds/2)³ (5)
m = V × ρ × 10⁻⁶ (6)

In addition to the shape factors noted above, particle angularity might also be considered. Angularity is an effective measure of the sharpness of the edges of a particle and, in the context of coal mine dusts,
Resources 2015, 4 947 may be important in controlling interactions between respired particles and lung tissue. Angularity can be rigorously determined by measuring the observed angles of particles on SEM images; however, particularly for small particles (i.e., with L ≤ 5 μm), such analysis would require significant time. Given that a stated goal of the dust characterization method developed here was to efficiently collect data, it was, therefore, decided that a qualitative evaluation of angularity should be employed; practically, this allows for collection of some potentially valuable information without requiring excessive analytical time. This type of classification of angularity has historically been applied to particles in the micrometer size range [17,18]. To qualitatively describe angularity, particles selected for characterization should be classified as rounded (r), transitional (t), or angular (a) by the SEM user, see Figure 3 [19]. Figure 3. Angularity classification categories based on the qualitative analysis of the sharpness of particle edges. 2.2. General Procedures for Dust Characterization In order to successfully analyze samples in a methodical manner, the collection, filter preparation, and analytical process should be sound. The following steps are outlined to provide the user with a detailed protocol to efficiently and effectively characterize respirable dust samples. 2.2.1. Sample Collection and Filter Preparation For collection of respirable dust samples in the field for SEM-EDX analysis, an appropriate pump deemed permissible for use in underground coal mines must be used; at present, the MSA Escort ELF pump is almost exclusively used for such applications because it has the capability to maintain near constant flow rate under a variety of environmental conditions [20]. To ensure collection of only respirable dust particles and, thus, rejection of particles above the respirable range, the pump should be operated with a cyclone at a flow rate between about 1.7–2.2 L/min [21], such that the cyclone median cut point is 4 μm according to the NIOSH 0600 method of sampling [22]. While compliance dust samples used for determining respirable mass concentration are generally collected on pre-weighed PVC filters, samples to be analyzed by SEM-EDX should be collected on PC, because they provide a suitable substrate (i.e., background media) for electron microscopy [11,23,24]. Filter cassettes should be unassembled two or three-piece types, such that the filters can be easily removed from the cassette for analysis. In preparing the dust samples for SEM-EDX analysis, filter cassettes are carefully unassembled and the filters are removed with clean tweezers. On a clean surface, a 9 mm diameter trephine and a clean razorblade are used to extract the center of the filter. The sub-section removed for analysis represents approximately 6% of the 37 mm filter. It is recognized that particle uniformity as a function of particle size may be variable for these types of filters, which can result in larger particles depositing toward the
Resources 2015, 4 948 center [25]; yet deposition is fairly radially symmetric [26]. Center filter analysis has been shown to provide reasonably precise results for field samples using two or three-piece cassettes [26]. As the main objective here is to provide relative comparisons between center filter sub-sections, some work has been completed to demonstrate that particles >0.5μm are uniformly distributed by number across the sub-section. This filter sub-section is then attached to an SEM pin-stub mount with double-sided tape (e.g., copper, carbon), and sputter coated with gold/palladium (Au/Pd) to create the conductive surface layer needed for electron microscopy analysis. It should be noted that carbon sputter coating cannot be used since this will interfere with composition analysis by EDX of the dust particles containing carbon, but other sputter coatings (e.g., platinum, Pt) might be considered. During development of the characterization method, it was observed that a coating thickness of about 10–20 nm (i.e., 60 s sputtering time) was optimal for preventing sample charging while allowing sufficient electron interaction with the dust particles to provide high-resolution SEM images and EDX spectra. 2.2.2. Particle Selection and Analysis by SEM-EDX Following dust sample collection and filter preparation, SEM-EDX is used for particle characterization. Although equivalent equipment could be used, for the method outlined in this paper the same equipment and software, described above, was utilized. The developed method utilizes images obtained from a secondary electron (SE) detector for physical characterization of the dust particles (i.e., to measure dimensions and qualitatively evaluate particle angularity), and EDX spectra for compositional analysis. In order to select particles for characterization without bias, a rigorous routine was developed to navigate the prepared 9 mm diameter filter sub-sections under the SEM. The routine was developed using an iterative process, whereby over 700 particles in total from the lab-generated dust samples were interrogated for elemental composition, long and intermediate dimensions and estimated shape factors (all described in detail below). With each iteration of analysis, the routine was improved until nearly all particles encountered could be quickly classified into one of the pre-determined compositional categories described above using the EDX spectra, and raw size and shape data could be efficiently gathered for later computational analysis. It is important to note that this routine was developed based on the assumption that somewhere between 50 and 150 particles would be analyzed per dust sample, with fewer particles limiting the statistical power of results and more particles limiting practicality due to time requirements. During preliminary verification of the dust characterization method, a simple evaluation of the effect of number of particles analyzed (i.e., statistical sample size) on resulting compositional distribution was conducted (see below). Ultimately, it was determined that analyzing 100 particles per sample provided enough information about the sample while maintaining reasonable analytical time requirements (i.e., about 75–90 min per sample). A detailed description of the particle selection and analysis routine follows. 
First, the SEM should be focused at a magnification of 10,000×, which will allow for analysis of particles within the desired size range (i.e., about 0.5–8 µm); a somewhat higher magnification could be used if the particle size distribution is relatively small (i.e., there are few large particles), but significantly lower magnification will prohibit adequate resolution for analysis of finer particles. With the line measurement tool, two horizontal lines are then drawn 2 μm apart and spanning the entire width of the screen, such that the space between the lines is centered on the screen (Figure 4). The SEM is then
Resources 2015, 4 949 positioned such that the dust characterization will begin in the top left-hand portion of the prepared filter subsection, approximately three screen shifts from its outer edge and approximately 2.25 mm from the top (i.e., one quarter of the diameter) (Figure 5). Three screen shifts from the edge of the filter prevents analysis of any particles disturbed during the filter sub-sectioning process. Additionally, the placement of the SEM stub inside the instrument determines the orientation of the “top” of the stub, based on the upper border of the screen. Figure 4. Example of particle selection and screen shifting via the joystick. The image on the left illustrates analysis of particles intersecting between the two lines in the center of the screen at 10,000× magnification. The image on the right, at 2500× magnification, shows four screens, each outlined in a white dotted line, where analysis (at 10,000× magnification) will take place consecutively. Figure 5. Illustration of 9 mm diameter filter sub-section and navigation routing for SEM-EDX analysis. The image on the left is the whole 37 mm diameter filter and the image on the right depicts the sub-section removed for analysis. The box in the top, left corner of the filter sub-section illustrates the first frame (i.e., field of view) in which particles should be selected for characterization; the black arrows in the filter sub-section define the directions for successive screen shifts between characterization frames. When one horizontal line of analysis is complete (black arrow directions), the red arrows define shifting back to the left side of the filter to continue analysis. Once the instrument is focused and initially positioned, selection and analysis of dust particles can begin. Moving from left to right on the screen, each particle with L > 0.5 μm that intersects the space
Resources 2015, 4 950 between the two horizontal lines and falls completely within the field of view should be selected for analysis; if no particles in the field of view fit these criteria, the next field to the right can be examined, and so on (see below.) Particles with L < 0.5 μm are too small to produce quality spectra results—if analysis of smaller particles is critical, transmission electron microscopy (TEM) would be better suited for this application [27]. In an effort to analyze more of the filter area, in regards to high dust density samples, a maximum of 10 particles (i.e., the first 10 that meet the above criteria moving from left to right) per field of view should be analyzed. This would allow a minimum of 10 fields of view in order to characterize 100 particles. At the approximate center of each particle selected for analysis, the “spot” (or analogous) analysis function on the SEM software can be used with in conjunction with the EDX software to generate elemental spectra. Based on the rules outlined in Table 1, the particle can be classified into one of the seven compositional categories. Additionally, the L and I dimensions of each selected particle should be measured using the built-in line measurement tools in the SEM software. Finally, angularity should qualitatively be classified into one of the three categories described above (Figure 3). After recording raw data (i.e., L, I, angularity, and composition), the user can proceed to the next particle selected for analysis. Once all eligible particles (i.e., based on the criteria above) in the current field of view have been analyzed, the user should proceed to the next field of view (i.e., moving to right per Figures 4 and 5) for selection and analysis of more particles. The above steps should be followed until analysis reaches the right-hand side of the filter subsection, approximately three screen shifts from its edge, or until 100 particles have been analyzed, whichever comes first. If 100 particles have not yet been analyzed, the user should navigate back to the left side of the filter subsection (see top red arrow in Figure 5), and reposition the sample such that the field of view is approximately three screen shifts from the outer edge of the filter subsection and approximately 4.5 mm from the top (i.e., half of the diameter). From this position, particles should again be selected for analysis by scanning from left to right within the current field of view and adhering to the criteria outlined above; then, analysis should proceed to the next field of view. If the user again reaches the right side of the filter subsection before 100 particles are analyzed, the SEM can be repositioned back to the left—this time approximately three screen shifts from the left edge of the filter subsection and approximately 6.75 mm from the top (i.e., three-fourths of the diameter). Particle selection and analysis should proceed as before. 2.3. Automated Analysis Program To automate analysis of the raw data collected from SEM images and EDX spectra, a spreadsheet program was also developed using Microsoft Excel 2010 (Microsoft, Redmond, WA, USA). 
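As a preview of the calculations detailed in the following paragraph, the sketch below shows how the per-particle quantities of Equations (1)-(6) might be computed from a particle's compositional category and its measured L and I dimensions, using the R and ρ values assigned above. The function and variable names and the example inputs are assumptions for illustration; the developed method implements these calculations in an Excel spreadsheet rather than in code.

```python
import math

# Sketch of the per-particle calculations performed by the automated spreadsheet
# program, following Equations (1)-(6) and the R and density values given above.
# Function and variable names are assumptions made for this example.

R_VALUES = {'carbonaceous': 0.6, 'mixed carbonaceous': 0.5, 'alumino-silicate': 0.4,
            'quartz': 0.7, 'carbonate': 0.7, 'heavy mineral': 0.7}
DENSITIES = {'carbonaceous': 1.4, 'mixed carbonaceous': 2.0, 'alumino-silicate': 2.5,
             'quartz': 2.6, 'carbonate': 2.7, 'heavy mineral': 4.0}  # g/cm^3

def particle_properties(category, L, I):
    """L and I are the measured long and intermediate dimensions in micrometers."""
    R = R_VALUES[category]
    rho = DENSITIES[category]
    S = R * I                                      # Eq. (1): estimated short dimension, um
    psi_p = (S**2 / (L * I)) ** (1.0 / 3.0)        # Eq. (2): maximum projection sphericity
    d_c = (L * I) / 2.0                            # Eq. (3): cross-sectional diameter
    d_s = psi_p * L                                # Eq. (4): spherical diameter, um
    V = (4.0 / 3.0) * math.pi * (d_s / 2.0) ** 3   # Eq. (5): spherical volume, um^3
    m = V * rho * 1e-6                             # Eq. (6): estimated mass, ug
    return {'S': S, 'psi_p': psi_p, 'd_c': d_c, 'd_s': d_s, 'V': V, 'm': m}

# Example: the alumino-silicate particle of Figure 2 (L = 11.24 um; I assumed here as 6 um).
print(particle_properties('alumino-silicate', L=11.24, I=6.0))
```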
For each dust particle, the user inputs the compositional classification (i.e., per Table 1), measured dimensions (L and I), and qualitative angularity classification (i.e., r, t, or a), and the program then computes the following characteristic quantities based on the assigned R and ρ values for each compositional category and Equations (1)-(6): short dimension (S), maximum projection sphericity (Ψp), cross-sectional diameter (dc), spherical diameter (ds), volume (V), and mass (m). Subsequently, distributions of composition, size (i.e., ds), and angularity (either by particle number or mass) can be automatically generated for each dust sample. While composition and angularity classifications are inherently categorical (i.e., each particle has been
Resources 2015, 4 951 placed into a specific composition or angularity category by the SEM-EDX user), particle size is continuous (i.e., the computed spherical diameter is numeric quantity.) Thus, to generate distributions of quantities based on particle dimensions, a number of size categories (or classes) was defined; for this, a logarithmic base-2 scale was uses, which is a common approach used to classify particles based on work done by Wentworth (1922) [28]. Here, the automated program considers a total of nine size classes from >0.125 μm to >16 μm. The spreadsheet program additionally includes input cells for general sample information (e.g., sample name or number, description of collection location or conditions, total filter area and filter sub-section area, total number of particles characterized, total linear length of filter analyzed), and provides basic output based on that information (e.g., percent of total filter analyzed, approximated particle density on the sub-section by mass or number). A number of graphical representations of the data results are also generated for each sample. 3. Preliminary Verification of Developed Characterization Method In order to provide some preliminary verification of the characterization method developed for coal mine dust by SEM-EDX, three field samples were collected and analyzed according to the guidelines outlined above. In particular, the objectives were to: (1) verify that analysis of 50–150 particles per sample is sufficient to describe the compositional, size, and shape distributions on the filter sub-sections; and (2) verify that the six defined compositional categories using the lab-generated dust samples from ROM material, and rules for classification of particles into each category do, indeed, allow characterization of the majority of particles from real field samples (i.e., do most particles fit into one of these categories, or are many particles being classified as “other”?) It should be noted that the question of particle distribution was briefly addressed in Sellaro and Sarver, 2014. In summary, particle quantification was completed on four different areas (at 2500× magnification) of filter sub-sections from 17 field samples; this involved counting all particles with L dimensions >0.5 μm in each of the four areas, which were each located in a different quadrants of the filter sub-section. Particle counts were determined to be similar (i.e., based on a 95% confidence interval) between each of the four areas for all but two samples. These specific filters had one quantification area with many agglomerated particles, as opposed to few, separate particles, viewed on the other three quantification areas. The agglomeration in these samples is thought to be due to humidity throughout the intake airway of the mine, where both were collected [29]. 3.1. Materials Three dust samples used for method verification were collected from the same underground coal mine where the ROM sample used for method development originated. An Escort ELF pump with a Dorr-Oliver cyclone was used to collect the samples onto 37 mm PC filters, and each sample was collected over a period of about 120 min. The first sample, “Roof Bolter”, was collected from a location adjacent to a roof bolting machine, and thus was expected to contain relatively high proportions of alumino-silicate, and possibly quartz particles (vs. other compositions), due to the drilling activity of the machine into roof material. 
The second sample, “Belt Drive”, was collected from a location just above a belt drive, where coal and rock were being transported below on a conveyor belt. The “Belt Drive” sample was
Resources 2015, 4 952 anticipated to include greater proportions of carbonaceous particles, and some carbonate particles were also expected due to heavy rock dusting in the belt entries. (Rock dusting is a practice used to limit propagation of coal dust explosions, and requires walls and floors to be covered with fine inert material such as CaCO ). The third sample, “Intake”, was collected from a location near the working section of 3 the mine in intake air (i.e., fresh air being delivered to the mine by its ventilation system). The “Intake” sample was expected to have relatively similar proportions of carbonaceous and alumino-silicate particles, with some carbonate due to rock dusting in the area. Estimated particle densities on the Roof Bolter, Belt Drive, and Intake filter sub-sections were 16,292 particles/mm2, 12,639 particles/mm2, and 1850 particles/mm2, respectively. These densities were extrapolated from the average number of particles counted in four different areas on each sub-section; each area was 10,404 μm2 and located in a different quadrant of the sub-section areas. Figure 6 displays SEM images for each sample. Figure 6. SEM images at 2500× magnification for the filter sub-sections from each verification sample showing relative particle densities. The far left image represents the “Roof Bolter”, followed by the “Belt Drive” image, and finally the “Intake” image on the right. 3.2. Results and Discussion To evaluate the effects of number of particles analyzed (n) on dust sample characterization results, compositional distributions by particle number and mass were compared for a range of n values (Table 2). For the Roof Bolter and Belt Drive samples, 200 particles in total were analyzed, and the resultant compositional distributions were compared for the first 25, 50, 100, 150, and 200 particles (i.e., n = 25, 50, 100, 150 or 200); for the Intake sample, only 100 particles were analyzed in total, so n values of 25, 50, and 100 were compared. Somewhat surprisingly, when comparing compositional distribution of particles by number, all samples showed relatively similar results across all n values—meaning that even when n was increased 4- or 8-fold, little change was observed in the relative number of particles being classified into each compositional category. When comparing compositional distribution by mass, however, only the Belt Drive sample produced similar results across all n values. For the other two samples, as n increased, the distributions changed significantly. For example, in the Roof Bolter sample, the first 100 particles analyzed showed very little carbonaceous material on a mass basis, but first 150 particles analyzed showed that over a quarter of the mass was due to carbonaceous particles. This particular discrepancy was traced to a single very large
Resources 2015, 4 954 magnification to assess density of large particles, or elemental mapping at a relatively lower magnification to assess compositional differences in larger particles). Additionally, from Table 2, it appears that the six pre-determined compositional categories, and rules outlined in Table 1 for particle classification, can account for most respirable particles in expected in dust samples from underground coal mines. Indeed, of the 500 total particles analyzed across all three samples, none required classification into the “other” category. It should of course be noted that dust composition could vary with varying coal and rock geologies, and mining and operational practices— and, thus, between mines. So, further verification of the developed method for dust characterization should certainly be conducted using samples collected from multiple mines/regions of interest. To demonstrate the robustness of the developed dust characterization method, size, and compositional distributions (again by particle number and mass) for the three verification samples were generated by the automated spreadsheet program. Figure 7 shows the results for the sample collected adjacent to a roof bolter. With respect to composition, the sample largely consists of alumino-silicates, with significant coal and mixed carbonaceous particles too. These results are consistent with expectations based on the sampling location (i.e., the bolter was drilling into the roof, but the air being moved through the mine also contains coal particles). These results additionally underscore the influence that large particles can have on mass-based data. Figure 7 indicates that 1% of the particles in this sample, which all happened to be carbonaceous, fell into the 4–8 μm size class—but these make up 19% of the total mass. Figure 8 shows the relative angularity of particles in the “Intake” sample. This data indicates that alumino-silicates and mixed carbonaceous particles tend to be primarily angular, while carbonaceous particles can be more rounded. (a) (b) Figure 7. Particle size distribution by number (a) and by mass (b) for the Roof Bolter sample; the relative number of particles in each compositional category is shown within each bar.
Resources 2015, 4 955 Figure 8. Particle compositional distribution by number for the Intake sample; the relative number of particles classified as having angular, transitional, or rounded shapes is shown within each bar. 4. Conclusions SEM-EDX is a powerful tool, which can be used for particle-level analysis of dust samples. This paper describes a standard methodology developed for the purpose of achieving more comprehensive characterization of respirable dusts in underground coal mines. Due to the large amounts of data that can be generated by this method, a relatively simple spreadsheet program is recommended for automating computational analyses to compare particles within and between dust samples. The recent availability of automated particle analysis instrumentation to existing scanning electron microscopes could also provide an even more robust analysis capability by increase the number of particles analyzed by at least five to ten fold. Future work should be geared toward further understanding particle uniformity, by both number and size of particles, across the entire filter area and uniformity by particle size across the filter sub-section. In cases of non-uniformity, such as agglomerated dust, characterization of >100 particles may be necessary. The method is also user specific, and the steps outlined above are at the interpretation of the user, such as in cases of exceptionally high dust density samples and increased numbers of large dust particles. Although the method outlined in this paper was shown to classify particles properly from one specific mine, to accommodate a mine of different mineralogy, the particle dust categories should be altered prior to particle classification. The time required for this type of comprehensive analysis can be a major drawback; however, the use of a standard methodology may increase analytical efficiency, as well as consistency. Acknowledgments The authors would like to thank the Department of Mining and Minerals Engineering of Virginia Tech and VT NCFL for assistance with this project.
15th North American Mine Ventilation Symposium, 2015 — Sarver, E., Schafrik, S., Jong, E., Luxbacher, K. © 2015, Virginia Tech Department of Mining and Minerals Engineering

Considerations for an Automated SEM-EDX Routine for Characterizing Respirable Coal Mine Dust

Victoria Anne Johann a, Emily Allyn Sarver a

a Virginia Tech, Blacksburg, Virginia, USA

Respirable dust in coal mining environments has long been a concern for occupational health. Over the past several decades, much effort has been devoted to reducing dust exposures in these environments, and rates of coal workers' pneumoconiosis (CWP) have dropped significantly. However, in some regions, including parts of Central Appalachia, it appears that incidence of CWP has recently been on the rise. This trend is yet unexplained, but a possible factor might be changes in specific dust characteristics, such as particle composition, size or shape. Prior work in our research group has developed a standardized methodology for analyzing coal mine dust particles on polycarbonate filter media using scanning electron microscopy with energy dispersive x-ray (SEM-EDX). While the method allows individual particles to be characterized, it is very time-intensive because the instrument user must interrogate each particle manually; this limits the number of particles that can practically be characterized per sample. Moreover, results may be somewhat user-dependent since classification of particle composition involves some interpretation of EDX spectra. To overcome these problems, we aim to automate the current SEM-EDX method. The ability to analyze more particles without user bias should increase reproducibility of results as well as statistical confidence (i.e., in applying characteristics of the analyzed particles to the entire dust sample). Some challenges do exist in creating an automated routine, which are primarily related to ensuring that the available software is programmed to differentiate individual particles from anomalies on the sample filter media, select and measure an appropriate number of particles across a sufficient surface area of the filter, and classify particle compositions similarly to a trained SEM-EDX user following a manual method. This paper discusses the benefits and challenges of an automated routine for coal mine dust characterization, and progress to date toward this effort.

Keywords: Coal workers' pneumoconiosis, Respirable dust, particle analysis, scanning electron microscopy, Automated SEM

1. Introduction

Coal mining operations generate dust which can be respired into the lungs of workers to cause occupational health diseases such as coal workers' pneumoconiosis (CWP). The mining industry saw dramatic reductions in CWP cases as dust standards and ventilation regulations in underground coal have improved over the past few decades under the Federal Coal Mine Health and Safety Act of 1969 [1]. The Act also established the Coal Workers' Health Surveillance Program through which NIOSH has witnessed first-hand the increase in CWP rates in the eastern United States, particularly Central Appalachia, since the mid-1990s [1-3]. This is of particular concern because the majority of cases have been reported in young coal miners and many of the cases are advanced [1-2]. Further research should be aimed toward determining the cause of increased incidence of CWP in Central Appalachia in order to improve miner health and safety [3]. Little is definitively known regarding the effects of specific dust characteristics (such as size, shape, and chemical composition) on lung disease occurrences in underground miners. Analyzing these dust particle characteristics using scanning electron microscopy (SEM) may be a good place to start. Automated SEM analysis could be particularly advantageous in collecting data from more dust samples at a faster rate.

Automated SEM-EDX analysis has historically been used for applications such as industrial process control and forensics [4]. However, SEM automated analysis hardware and software advancements have made it applicable to a variety of other applications, including mineral samples [4]. Automated SEM is able to analyze features such as inclusions in metals, porosity of geological samples, and samples containing wear debris from combustion engines [4]. Another application for mineral samples is the detection of an anomalous particle within a grouping of thousands of particles of other compositions [4]. This application might be particularly useful to the occupational health field for the analysis of dust samples containing atypical or hazardous particles.

Some work has been conducted in the realm of automated dust particle analysis. Deboudt et al. [5] performed automated SEM-EDX particle analysis on dust samples collected on the Atlantic coast of Africa. As in our project, this group collected airborne particulate samples on polycarbonate filters and ran an SEM at an accelerating voltage of 15kV. Using the Link ISIS Series 300 Microanalysis system developed by Oxford Instruments, this group was able to collect spectral data for individual particles with a 20 second acquisition time [5]. Even faster rates of data collection can be achieved, though. Ritchie and Filip [6] recently undertook an effort
to optimize the speed of automated particle analysis by SEM-EDX and demonstrated data collection at approximately three particles per second. They employed a structured query language database that stores millions of particle records and is able to simultaneously classify multiple particles to multiple categories [6]. The authors' research group does not necessarily need to be analyzing particles at that rate, though aiming to speed up analysis time by a few seconds and perhaps creating a comprehensive particle database could be beneficial toward research efforts.

Other researchers have also worked on multi-frame particle analysis using the SEM. Fritz, Camus, and Rohde [7] have done work with automated microscope stage analysis that can cover hundreds of frames in one run to ensure total sample coverage. For applications in the mining industry, this ability is particularly attractive for particle sizing and the analysis of respirable dust particles. Collecting data for particles over the entire sample can be necessary for statistical significance [7]. The authors' research group also plans to employ automated multi-frame analysis.

2. Previously Developed Standard Dust Characterization Method

Prior work in the authors' research group has developed a standardized methodology for analyzing coal mine dust particles on a polycarbonate filter [8-10]. This method was developed using an FEI Quanta 600 FEG environmental scanning electron microscope (ESEM) (FEI, Hillsboro, OR) equipped with a Bruker Quantax 400 EDX spectroscope (Bruker, Ewing, NJ). The ESEM is operated under high vacuum conditions at a voltage of 15kV with a spot size of 5.0μm and at the optimal working distance of 12-13mm. Bruker Esprit software is used to collect spectra results for the classification of individual particles. The "spot" analysis function of the ESEM software is used in conjunction with the EDX software to generate elemental spectra. Six compositional classification schemes were developed for coal mine dust particles based on peak elemental spectra heights of aluminum, calcium, carbon, copper, iron, magnesium, oxygen, potassium, silicon, sodium, sulfur, and titanium. The six classifications are "alumino-silicate," "carbonaceous," "carbonate," "heavy mineral," "mixed carbonaceous," and "quartz." Any particles that do not fit into these categories are termed "other." Data on particle size and shape is also collected in this dust characterization method. The long and intermediate dimensions of each particle are measured using the line measurement tool provided in the ESEM imaging software. The shape is qualitatively classified based on user interpretation as either "angular," "rounded," or "transitional."

The sample analysis routine begins by focusing the SEM at 10,000x magnification to provide optimal resolution for analyzing particles in the desired size range of 0.5-8.0μm in diameter. Two horizontal lines are drawn 2μm apart, centered on the screen and spanning the width of the screen, using a line measurement tool, as depicted in Figure 1.

Fig. 1. Example of the horizontal lines drawn 2 μm apart for particle selection. Only particles touching this region are to be selected for analysis [8].

The stage is moved so that the first field to be analyzed is three screen shifts from the outer, left edge of the filter, 2.25mm (one quarter of the filter diameter) down from the top of the filter. Moving from left to right and top to bottom, each particle with a long dimension greater than 0.5μm intersecting the space between the two horizontal lines and falling completely within the field of view is analyzed. Up to ten particles meeting the specifications are characterized per field in order to ensure that at least ten fields are analyzed and increase the representativeness of the results. Figure 2 shows a backscatter detector image of a typical field of ten particles that would be analyzed.

Fig. 2. Example of a typical field of particles to be analyzed (at 10,000x magnification).
Virginia Tech
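To make the selection rule easier to follow, the short sketch below restates it in code. This is an illustration written for this summary, not software used by the authors; the 2 μm band, the 0.5 μm minimum long dimension, and the ten-particle cap come from the description above, while the data structure and function name are invented for the example.

BAND_HALF_WIDTH_UM = 1.0   # the two lines are drawn 2 um apart, centered on screen
MIN_LONG_DIM_UM = 0.5      # only particles with a long dimension greater than 0.5 um are counted
MAX_PER_FIELD = 10         # at most ten particles are characterized per field

def select_particles(particles, screen_center_y_um):
    """Return up to ten particles that touch the 2-um selection band (hypothetical sketch)."""
    band_lo = screen_center_y_um - BAND_HALF_WIDTH_UM
    band_hi = screen_center_y_um + BAND_HALF_WIDTH_UM
    selected = []
    # each p is an assumed record: {"long_dim": um, "y_min": um, "y_max": um, "in_view": bool}
    for p in particles:
        touches_band = p["y_max"] >= band_lo and p["y_min"] <= band_hi
        if p["long_dim"] > MIN_LONG_DIM_UM and touches_band and p["in_view"]:
            selected.append(p)
        if len(selected) == MAX_PER_FIELD:
            break
    return selected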
Once the first field has been analyzed, the next field of view to the right is analyzed. If fewer than 100 particles have been analyzed upon reaching the edge of the first row, the stage is shifted so that the field of view is 4.5mm (one half of the filter diameter) from the top of the filter and the same procedure for the previous row is followed. If fewer than 100 particles have been analyzed upon reaching the edge of the second row, the stage is shifted once more so that the field of view is 6.75mm (three quarters of the filter diameter) from the top of the filter, following the previous procedure. Figure 3 depicts this analysis routine in terms of filter navigation under the SEM.

Fig. 3. Illustration of a 9 mm diameter polycarbonate filter and navigation routing for SEM-EDX analysis. The box represents the first frame in which particles are selected for characterization; the black arrows define the directions for successive screen shifts between characterization frames. When one horizontal line of analysis is complete (black arrows), the red arrows define shifting back to the left side of the filter to continue analysis on the next horizontal line [8].

The manual method has the capacity to analyze 100 particles on one sample in 75-90 minutes, depending on user experience and sample characteristics (e.g., particle density); despite the wealth of information that can be obtained, the method is clearly too time-consuming to be practical for a large number of samples.

3. Automation of the Standard Dust Characterization Method

Considering the need to significantly speed up particle characterization, efforts to automate the above routine have recently been initiated. This work is being developed using the same ESEM-EDX system previously mentioned, and several special features available for add-on to Bruker's Esprit software. A major benefit to automation is that the software can characterize particles at a magnification that is ten times lower than the magnification required for the standard dust characterization method. Figure 4 shows a backscatter detector image of a typical field of particles that would be analyzed using the automated routine.

Fig. 4. Example of a typical field of particles (at 1,000x magnification). This image comes from the same sample as Figure 2; however, by being able to analyze particles at 1,000x versus 10,000x magnification, many more particles can be analyzed per frame.

Once the first frame is selected, the imaging tool in the Esprit software is used to pull the image of the frame from the SEM software and import it for analysis. A special feature in Esprit allows for rules and filters to be applied to the image so that the software is programmed to identify dust particles. Here, a binary image can be created and settings can be adjusted so that the software distinguishes particles as white and the filter as black. Figure 5 depicts the binary imaging process in Esprit.

Fig. 5. Example of a binary image of a typical particle field (at 1,000x magnification). This is the same particle field as in Figure 4.
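The binary-imaging step is essentially a brightness threshold. As a rough, library-agnostic illustration of that idea (not the Esprit implementation, and with an arbitrary threshold value), a NumPy sketch might look like the following.

import numpy as np

def binarize_field(gray_image, threshold=80):
    """Toy stand-in for the binary imaging step; the threshold value is an assumption.

    gray_image: 2-D NumPy array of backscatter intensities (0-255).
    Returns a boolean array: True (white) = candidate particle, False (black) = filter.
    """
    return gray_image > threshold

# Example with synthetic data standing in for a 1,000x backscatter frame.
frame = np.random.randint(0, 255, size=(768, 1024), dtype=np.uint8)
binary = binarize_field(frame)
print(f"{binary.sum()} candidate particle pixels out of {binary.size}")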
Once all settings have been adjusted for particle identification, particle sizing can be conducted. Figure 6 depicts the particle field once all the particles have been sized. The software outputs data such as length, width, and shape factor to characterize the particles by size and shape, as shown in Figure 7.

This single frame contains 171 particles that can be analyzed, 71 more particles than would have been analyzed in up to ten frames using the standard dust characterization method. This particle sizing routine minimizes user interpretation of the long and intermediate particle dimensions which was required for the standard dust characterization method.

Once sizing is completed, the particle classification scheme can be implemented. The Esprit software allows for particle classification based on the weight percent of elements detected in spectral analysis. Therefore, rules can be set for the maximum or minimum elemental weight percentages required for various particle classification categories. We have currently developed rudimentary rules and particle classification categories to demonstrate the utility of automated analysis and its potential for respirable dust particles from coal mining environments. The preliminary categories are based on typical elemental weight percentages observed for particles classified using the manual method. Figure 8 displays chemistry results for some particles identified in Figure 6, showing the weight percentages of specific elements considered by the classification rules. Figure 9 shows particle classification results in a bar chart as a useful visual tool.

It should be noted that the chemistry classification categories are not currently developed enough to accurately classify every particle (i.e., as it would be classified manually). This is especially true for carbonaceous particles because they can contain small weight percentages of elements other than carbon and oxygen. This does not allow them to be classified under the original carbonaceous category that only sets parameters for carbon and oxygen weight percentages. Therefore, in this preliminary work, a second carbonaceous classification category was created with additional elemental weight percentage rules (i.e., "carbonaceous II" in Figure 9). By doing this, carbonaceous particles that were previously not classified are classified in the second carbonaceous category. A particle chemistry analysis feature in the Esprit software can classify each particle detected in the frame.

Fig. 6. Example of the particle sizing process in Esprit. This is the same particle field as in Figure 4. Each particle found that is accepted for analysis is outlined in blue while undergoing size classification.

Fig. 7. Example of the particle sizing results for some particles shown in Figures 4-6. The accepted particles are all numbered and listed in order in the first column. Other particle properties are provided for each particle in subsequent columns. At the bottom of the results page, minimum, maximum, average, and standard deviation values are provided for each particle property.
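Conceptually, the classification rules form a cascade of elemental weight-percentage checks. The sketch below is only a schematic of that idea: the category names and elements are those listed above, but every numeric threshold is a placeholder invented for illustration, not a rule actually programmed into Esprit.

def classify_particle(wt):
    """Assign a preliminary category from elemental weight percentages.

    wt: dict mapping element symbol to EDX weight percent, e.g. {"C": 72.0, "O": 20.0, "Si": 3.0}.
    All thresholds below are illustrative placeholders only.
    """
    c, o = wt.get("C", 0.0), wt.get("O", 0.0)
    si, al = wt.get("Si", 0.0), wt.get("Al", 0.0)
    ca, mg, fe = wt.get("Ca", 0.0), wt.get("Mg", 0.0), wt.get("Fe", 0.0)

    if si > 20 and al < 2 and c < 30:
        return "quartz"
    if si > 10 and al > 5:
        return "alumino-silicate"
    if (ca + mg) > 10 and c > 5 and o > 20:
        return "carbonate"
    if fe > 15 or wt.get("Ti", 0.0) > 10:
        return "heavy mineral"
    if c > 80 and o < 20:
        return "carbonaceous"
    if c > 50:
        return "mixed carbonaceous"
    return "other"

print(classify_particle({"C": 85.0, "O": 10.0}))  # -> "carbonaceous" under these toy thresholds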
The automated analysis at the particle density demonstrated in Figure 4 takes just over ten seconds per particle, which allows for total analysis of this frame of 171 particles in approximately 30 minutes. At this rate, 171 dust particles are being analyzed two or three times faster than just 100 dust particles using the standard dust characterization method. It is expected that with several modifications in basic operating parameters of the SEM-EDX system, the efficiency can be greatly increased. The majority of particles in this frame were classified as carbonaceous while many particles were classified as alumino-silicates and fewer were classified as heavy minerals, carbonates, mixed carbonaceous, and quartz.

Another special feature in the Esprit software allows for the automation of the microscope stage movement from frame to frame. The user is able to designate the starting frame position and the ending frame position along the sample filter. Once the entire area of the filter to be analyzed is determined, particle sizing and chemistry can be run frame by frame and the software will export the data after completion. This tool can even be used to run multiple samples consecutively so that up to 16 samples could be analyzed in one run.

4. Discussion

It seems as though developing an automated particle analysis routine is a step in the right direction for our research. The standard dust characterization method is too time-intensive for the amount of particles that can practically be analyzed per sample because the user must interrogate each particle manually. A major advantage to automated particle analysis is the amount of time saved in the lab due to quick, electronic characterization of particles. This is also applied to data entry, where automated analysis automatically exports data into Microsoft Excel while data obtained from the standard method must be manually entered. Moreover, results from the standard dust characterization method may be somewhat user-dependent since classification of particle composition and shape currently involves some interpretation of EDX spectra. Other benefits of an automated particle analysis routine are a significant increase in the number of particles that can be analyzed per sample and minimization of user interpretation to acquire results. The ability to analyze more particles without user bias should increase reproducibility of results as well as statistical confidence in obtaining results from a representative portion of the sample.

However, some challenges do exist in creating an automated routine, including training the available software to appropriately make multiple decisions such as those involving differentiation of individual particles from anomalies on the filter media, selection of particles for analysis, and classification of particle composition. Another challenge arises due to the filter media being comprised of carbon and carbon being a key element in the classification of carbonaceous, mixed carbonaceous, and carbonate minerals. We are working to determine whether or not carbon should be deconvoluted for proper mineral identification and other key elements be used to classify these particles in its place. We have determined that gold and palladium should definitely be deconvoluted before spectral analysis because the sample coating is comprised of these elements and can interfere with the chemical identification of the dust particles.

5. Conclusion

There is much work still to be done to refine the particle classification parameters in order to properly automate the particle analysis. Our goal is to determine and implement the correct particle classification categories and their appropriate rules for our samples. To do this, we plan to:

• collect spectral data on many particles from our samples
• determine the appropriate elemental weight percentage thresholds for the classification parameters
• determine which elements should be deconvoluted
• modify the currently developed particle classification categories so that they are able to classify all particles encountered
• ensure that the software is properly classifying all particles

We aim to program the software to classify particles in the same manner that the user would classify them, but without the human error.

Acknowledgments

The authors acknowledge the Alpha Foundation for the Improvement of Mine Safety and Health for funding this work. We would also like to thank Steve McCartney of Virginia Tech ICTAS-NCFL for operational assistance with SEM-EDX analysis, Ted Juzwak of Bruker Corporation for assistance in learning the Esprit software capabilities, and Patrick Wynne for countless hours of SEM work. We are also grateful to Meredith Scaggs for her efforts to collect and prepare dust samples.

References

[1] Centers for Disease Control (CDC). Advanced Cases of Coal Workers' Pneumoconiosis - Two Counties, Virginia. MMWR, 55.33 (2006) 909-913.
[2] D. Blackley, C. Halldin, and A.S. Laney, Resurgence of a Debilitating and Entirely Preventable Respiratory Disease among Working Coal Miners, American Journal of Respiratory and Critical Care Medicine 190.6 (2014) 708-709.
Design of a Mine Roof Strata Analysis Device

Andrew James Reksten Russell

ABSTRACT

Because the roof lithology in an underground coal mine is typically variable and poorly known, the safety and efficiency of these mines is reduced. To address this shortcoming, a device for analyzing rock properties by scratching the wall of a mine roof borehole was designed and tested in multiple media, with the goal of determining in situ mine roof properties through a nondestructive technique. Tools were developed for measuring the extraction force and position of the scratching mechanism, and those values were compared against time for multiple tests to look for changes in applied force over changing positions. Because of signal stability issues and inconsistencies in scratch depths, the data were found to contain too much variation to determine any rock properties or changing rock conditions from the simulated roof material in the concrete block. However, further scratch tests in a sandstone block indicated that increasing the diameter of the wire scratchers (and therefore increasing their stiffness and accompanying normal force) from 0.045 inches to 0.055 inches increased the average pull force from 6.24 to 9.96 lb. Similarly, a scratch test performed in a PVC pipe showed that increasing the scratcher diameter from 0.045 inches to 0.051 inches increased the average pull force from 2.81 lb to 36.46 lb, with considerably better gouging of the host material.
ACKNOWLEDGEMENTS I would like to thank my advisor, Dr. Erik C. Westman, for the incredible opportunity to be a graduate student in this program and for his perpetual support for me and my work. This project was funded by the National Institute for Occupational Safety and Health (NIOSH) under Contract 200-2011-40313 for “New Technologies for Identifying and Understanding Ground Stability Hazards”. I would like to thank NIOSH for the funding to conduct this research I thank Dr. Mario Karfakis for his consistent insight into my research and for helping me develop my ideas and for access to his lab resources. I thank Dr. Kray Luxbacher for taking time out of her schedule to be on my committee. I also would like to show gratitude to the entire Mining and Minerals Engineering faculty and staff for their help and guidance. I thank Ben Fahrman and Brent Slaker for their constant guidance and support. I would like to thank Yuncong Teng and Ben Owsley for their assistance in the field. I would like to thank Joseph Amante for his assistance with the lab equipment and Mike Kiser for his general help. I would like to thank the other members of the ground control research group for their input: Kyle Brashear, Billy Thomas, Xu Ma, Enji Sun and Will Conrad. I also thank every other graduate student in our program for their help and support. Thanks to Amritpal Gill and Jacques Delport for their help and resources for the electronic components of my research. Thank you Jim Waddell for constructing the devices used in my project as well as Robert Bratton for generously assisting with my research and nurturing my interest in mining and mining technology. Thank you to J. H. Fletcher & Co. for allowing me to use their facility, resources and employees to conduct my testing And lastly, a huge thanks to my loving friends and family for everything they do for me and for believing in me. Thank you to Scott Sr. and Jacqualine Russell for your love, support and sacrifice on my behalf. Thank you to my parents, Scott Jr. and Donna Russell for their support and investment in my success. iii
Introduction

Mining has served the societies and economies of the world for hundreds of years by providing jobs, resources and technology that have far reaching benefits. It is unlikely that mining will ever cease to have a place in the world, considering the growing rate with which humans require valuable minerals and coal. Mining will continue to pose risks from the use of vehicles, electricity, and powered haulage, as well as the hazards of fires, explosions, and ground falls. Considering the nature of excavating rock masses and subjecting the rock material to physical and chemical changes, the risk of ground related incidents is likely to exist as long as underground structures are excavated. According to the National Institute for Occupational Safety and Health "Mining Topic: Ground Control Overview", almost 40% of fatalities that occurred in underground coal mining between 1999 and 2008 were the result of stability failure in the face, roof or rib (2012). According to the same source, the fall of rocks between roof supports injures between 400 and 500 miners per year ("Mining Topic: Ground Control Overview", 2012). It has been argued that ground control will become more technically challenging as a result of the need to develop mines in deeper areas or areas with more difficult conditions in order to combat dwindling resources.

The study of ground behavior in mining has brought to light a number of topics that require further investigation. Some of these topics, such as large scale modeling, pillar bump analysis, support optimization and subsidence, are given considerable attention and research. There seems, however, to be a lack of investigation into the ability to obtain strength characteristics of mine roof strata at the face and along the panel. Possible prohibitive factors for further investigation in this field may be the hurdles of intrinsically safe equipment certification
as well as a distrust in the ability to make good measurements in such a heterogeneous material. This study covers the design and use of a tool that explores the task of analyzing and categorizing mine roof. The gathering of information about a rock mass requires the investigator to impart some type of energy into the rock and then analyze the response of the rock. Several tests exist that do this in a lab setting, such as Schmidt hammers which use rebound characteristics of a metal rod on rock masses to determine rock traits. There also exist primary and shear wave tests that use the transmission of waves through a rock mass to gather information about its properties. Most notably, destructive test methods such as the Uniaxial Compression and point load tests or the Brazilian test relay valuable information about rock behavior. Recently, several highly instrumented and controlled testing methods have been developed that provide details about the strength of rock masses in the lab setting without the use of destructive testing methods. It is hopeful that one of these non-destructive, rock-surface implemented tests could prove to be a means to gather strata properties in a mine setting. The current method of taking a core sample at a mine and coupling it with laboratory analysis is simply too expensive to be able to be considered a suitable roof strength analysis method. Primarily, rock core analysis lacks enough resolution across a property to make widespread entry or panel scale judgments on ground conditions because so few cores are logged on account of their cost. Core analysis also lacks the ability to indicate the changes to the immediate roof strata that would be induced by excavating a room and subjecting the strata to gravity loading. In light of these issues of resolution and cost prohibitive factors, a cheaper, easier and more widespread testing method is needed. 2
One of the current methods of analyzing the competence of mine roof mid-shift entails a ground inspector running the tip of a tape measure (the off-the-shelf variety that can be found at any hardware store) up the length of the legally required test holes. Test holes in this case refers to the required empty drilled holes that extend 1 foot past the deepest bolt depth in the roof [30 C.F.R. §75.204 (f)(2)]. As the tape measure tip is run along the length of the hole, the bent metal tip of the tape is expected to nest itself in any discontinuities that the roof material may have. Additionally, the inability to extend the tape fully up the hole indicates an obstruction in the roof hole such as shifted roof layers. This test is described in the interactive training lesson for roof bolters (MSHA - Interactive Training - JTA Spiders – Roof Bolter Operator). This tape measure test serves two functions, it indicates the presence of any such discontinuities and bed separations as well as providing an indication of the distance up the hole that the discontinuities exist. While being a good inexpensive test method, this fails to properly extract all the useful information that may be contained in one of these test holes. Another method of analyzing the composition of the roof is less desirable but more informative. Ground control experts can learn about the roof by looking in areas where a large scale collapse has already occurred. If they can inspect the roof cavity where the rock fell from, they can often see fracture networks that are difficult to interpret by looking only at the skin of the mine roof. Furthermore, this can be a good way to look at the stratigraphy of the immediate roof and see if there are any obvious weaknesses in the roof layers. Depending on the conditions of the roof fall, conclusions about the anchorage characteristics can be drawn. For instance, if the grouting or anchor shell of the bolt is visible in the fallen roof material then it is likely that there was some kind of slippage or loss of anchorage integrity. It also allows the damage inspector to look more closely at the qualities of the strata that the anchorage mechanism is attached to and if 3
that rock mass has the strength characteristics that it was assumed to have when the bolt was first installed. In general, the practice of in-cycle roof analysis is often left up to the experience of the bolters. They are the individuals that have the most information about how the drills behave in the roof strata and if there are any conclusions that can be drawn from the drill behavior about possible roof composition and competence. Stewart, et al. outline a roof control program in their 2006 paper that explains a coal mine’s use of “Lith-Graphs”, which are qualitative forms filled out by bolters as each test hole is drilled. These forms have spaces where information about bit wear, water presence and possible voids are recorded and then provided to the geologists and engineers (Stewart, et al., 2006). Foremen and geologists are encouraged to introduce supplementary support in areas where the roof is troublesome on the basis of information found in these “Lith-Graphs” as well as from discussion with the bolters and observations of roof conditions (Stewart, et al., 2006). The inclusion of information from a quantitative roof strength evaluation test, i.e. the one outlined in this paper, would certainly help compliment and justify some of the roof control decisions that ground control experts, foremen and mine workers will be making about the presence and degree of installed auxiliary support. The goal of this project is that attention can be given to the apparent lack of panel-scale or entry-scale quantitative ground analysis tools. Prior literature is explored to see what kind of tests exist that could be readapted to bridge this knowledge rift, or at the very least, inspire additional investigation into that technique. The literature review should serve as a good reference for anybody that hopes to find new ways to measure rock properties that go beyond the traditional destruction methods such as, but not limited to Uniaxial Compressive Strength (UCS), 4
Point Load, and the Brazilian Tensile Test. The creation of a device that evaluates the selected quantitative analysis method(s) for feasibility in a mining setting is also expected. This device is to be used by people with little to no assistance from powered machinery except for simple transportation of the unit and should therefore be portable. The device should be useful in areas that are out of the mining cycle i.e. not modifying or adding to existing equipment. Ease of use, portability and accuracy will encourage mines to use such a device instead of avoiding it when the information gained from it is not worth the loss in productivity. Above all else, this device should not diminish the stability of the current mine ground or interfere with ventilation in any way and should put the operator at little risk of injury. Ideally, this device will inspire others to continue research in the area of underground strata characterization and hopefully provide enough technical information to advance the ability of other researchers to develop their own devices or expand upon this one. The principle aspect of the design is to control the motion of a device that interacts with the walls of the borehole. The control of the motion allows each hole to be analyzed relative to itself on account of the similar motion characteristics. Through the interactions with the borehole wall, changes in forces applied to the device are expected in the presence of differing rock types. By looking closely at these force changes and where their changes occur in the borehole, an understanding of the strength and position of constituent rock layers can be gained. The device created for this project addresses the position and removal force of a scratching mechanism in a borehole and monitors those parameters in an effort to extract changes in rock type and properties. 5
Literature Review Geologic Background Coal seams are notable for their deposition mechanisms and for the way that these mechanisms ultimately define the nature of the adjacent strata. The coal deposits of the eastern United States are the byproduct of swamps left undisturbed for many thousands of years such that biomass accumulated and eventually began coalification (Molinda, 2003). Molinda elaborates on the presence of these swamps and how their eventual coal thickness, middling properties and roof content is subject to the dynamic behavior of ancient river deltas (2003). This is due to the buildup of sediment within the delta network and how it forces a redirection of the distributaries of the river as it drains into a larger body. What was once a swamp gets covered by redirected river water and becomes a new depositional area for sands, silts, clays and other fine rocks which form the roofs, riders and floors of future coal mines (Molinda, 2003). The variety of mining conditions (both favorable and unfavorable) that these processes later induce prove to be the core of the subject of ground control. This variability in ground conditions highlights the concept that any good information on the geology of a coal seam and adjacent strata will be of great help to engineers charged with mitigating their hazards. A proper characterization of strata is widely believed to be one of the most crucial elements of an effective ground control strategy. Iannacchione and Zelenko speak on the importance of strata characterization in their 1995 work on coal mine pillar bumps by outlining the relationship between thick sandstone layers above and below the coal seam and the corresponding likelihood of violent pillar activity. It is probable that the presence of these massive layers would be detected through traditional methods of geostatistics namely analysis of 6
Coal Mine Roof Characteristics Generally, a robust sandstone layer in the roof is considered a favorable geologic feature as it can provide a good anchorage region for a bolt. This is especially true in shallower mines where having that massive sandstone layer wouldn’t subject the coal pillars to catastrophic burst failure from the excessively high geostatic stresses seen in deeper mines. However, there are accompanying negatives with having sandstone as a roof. Molinda and Mark describe in their 2010 work on ground failures in weak rock that an interface between sandstone and an underlying shale layer can be riddled with discontinuities and can lead to a frail and unfavorable roof. Although the strength properties of this interface may be difficult to quantify, the location of the sandstone-shale transition is crucial knowledge whoever decides the bolt anchorage depth. The benefits of knowing where sandstone layers are in the roof are obvious and a device that could locate them would better inform choices about roof control design. Mark and Molinda continue to describe negatives of sandstone by establishing that sandstone layers can serve as a vector for groundwater contained within an aquifer (2010). This sandstone can introduce water into the adjacent shales which are often sensitive to moisture and lead to a crumbly, problematic roof layer (Molinda and Mark, 2010). It was later suggested in that same work that the presence of a test hole may help to bleed the sandstone layer of the water and help slow the time-dependent, moisture-induced degradation of the underlying shaly rock. Again, the knowledge of the relative position and composition of these layers is important to correctly mitigating their complications. The ability to detect the presence of stackrock, thinly interbedded layers of shale and other friable rock layers, is key to catering a ground control plan to local mine areas. Methods of controlling these features are outlined in the 2008 Molinda, Mark, Pappas and Klemetti paper on 8
ground control issues in the Illinois basin. Generally, it is suggested that overlaying sandstone or limestone beds are to be sought in the roof strata around these stacked layers to serve as a strong anchorage horizon for the bolt (Molinda, Mark, Pappas, and Klemetti, 2008) (Molinda, Mark, 2010). In fact, Molinda, Mark, Pappas and Klemetti go on to suggest that in the presence of a thick limestone layer, a solid one foot minimum of resin anchorage should be rooted in the overlying limestone to give good suspension support to the weaker underlying layers (2008). With changing thicknesses of limestone and stackrock formations, it is clear that the capacity to make more detailed surveys on the position and dimension of these layers is advantageous. The benefit that would come from having a tool to travel up a test hole and make analyses about the roof composition is undeniable, especially considering the great number of test holes that can be accessed. The presence of rider seams in coal mines is another noteworthy geologic hazard. According to Molinda in the 2003 Geologic Hazards and Roof Stability in Coal Mines, rider seams are thin coal beds (6-48 inches) that overlay a thick, mineable seam. There is often a small formation of shale between the main coal seam and the rider seam, this interlying shale layer has a low formation strength of 28-40 for the CMRR index (Molinda, 2003). Rider seam thickness and position in the roof can be difficult to categorize and there can be a number of them in the immediate roof layer, further complicating any strategic plan for mapping them. If several adjacent bolts anchor within a rider seam, failure can occur because this seam loses structural integrity easier than other, more solid layers (Molinda, 2003). The wide array of dimensions for rider seams requires a systematic, consistent approach for monitoring. The most effective method for detecting and categorizing rider seams is regular test holes that cross the rider layers and allow the ground control expert to make conclusions about their location (Molinda, 2003). 9
The presence of clay in the roof matter is another of the principal hazards that changes ground integrity. Clays typically form in veins that intersect the seam and can have a dramatic impact on the solidarity of the roof, so much so, that they were the cause of 90% of ground falls at some mines in Pennsylvania and Illinois (Molinda, 2003). As is typically the case with these roofs, the suspension of the weak clay and shale from a sturdy limestone or sandstone beam of suitable thickness is essential to keeping the entry open (Molinda, 2003). There is a great need to collect and systematically process the thicknesses of the roof material in these areas, as having conclusive strata thicknesses and positions are necessary for the proper anchoring of the bolts and implementation of sufficient supplementary support (Mark, Molinda, and Burke, 2004). Due to the wide variety of roof conditions outlined in this ground control review, the need to determine their presence in a mine setting is deemed pressing. The fact that the modes of sedimentary rock deposition tend to manifest themselves in horizontal formations, is important because a vertical test hole would likely cross several different rock types between the collar and its deepest point. This provides a valuable opportunity to use a single analysis technique in a single hole that would establish interaction with several different rock layers and expose possible risks to miners that are not immediately visible to them. There are several different analysis techniques to explore that can indicate strength characteristics of these roof rocks. 10
Rock Analysis Methods

The use of roof drilling parameters to back analyze the characteristics of rock is a promising technology that has the support of industry experts and academic researchers alike. This technology, while providing relevant information on roof strata composition, is beyond the nature of this study because it is purely within the operation cycle and bypasses the desired portability and free implementation of the roof strata analysis device to be constructed in this project. Using a roof bolter to drill new holes for collecting strata strength data would be ineffective at analyzing the roof strength characteristics of mine areas that may not have had any new roof bolts installed in a few years and where a roof drill is unlikely to venture again. A portable roof analysis device would function well in a place where it would be economically unfeasible to bring a well-instrumented rock drill into the area to drill a handful of exploratory holes that would determine roof strength parameters.

The process of categorizing roof conditions from drilling data has been attempted on several different occasions. The bulk of attempts have capitalized on Teale's original calculations of drilling parameters and how they relate to the intrinsic specific energy (ε₁₀) of the rock (1964). The equation for specific energy obtained from drilling parameters can be seen below:

ε₁₀ (psi) = (F/A) + (2π/A)(NT/u)        Equation (2.1)

where T (in·lb) is the torque applied to the drill string, u is the penetration rate (in/sec), N is the rotational velocity (rev/sec), F is the penetration thrust (lb), and A is the area of the hole (in²) (Teale, 1964). This equation is the sum of the constituent elements of a drill's cutting mechanism, a rotational scraping and a thrust gouging. Taking these two elements into account, the amount
of energy required to remove rock material can be calculated; this same principle will be discussed later to describe the energy used to excavate rock via scratch mechanisms. In recent years, several additional authors have used this information and tried to incorporate it into the analysis of strata properties. Most recently, the application of systematic evaluation of drilling parameters (and therefore strata properties), as well as a suitable background on the evolution of research in the field, can be found in the 2013 work by Bahrampour et al.

Adapting the technique of using drilling parameters to analyze rock would be useful if it could be made portable and effective. This method was explored in 1996 by Reddish and Yasar, wherein an ammeter was run in line with a hand drill that was attached to a drill mount to determine the electrical current applied to a motor for torque and rpm values. Useful parameters about drilled rock samples, namely intrinsic specific energy and therefore UCS, were obtained by standardizing the bit properties and the torque/rpm applied to the bit and by using a mount to keep a standard penetration pattern (Reddish and Yasar, 1996). This rock analysis process could be made relevant to ground control experts because they would be able to take hand-size samples from the roof strata and extract the strength characteristics of the material. A further application of this test method would be to note the location of collected underground rocks and take them to a lab on the surface to have them analyzed and categorized, thus giving a location-specific database of strength properties of certain roof layers. This systematic approach would also bypass the issues of rendering the hand drill safe for methane air mixtures, as the drilling and analysis would take place outside of the mine environment. This test would be biased towards the shallowest roof layers (the skin layer) as it is the one most likely to be falling at any given time. Nonetheless, this idea still may prove useful for providing inputs to determine ground control techniques for controlling the behavior of the skin and immediate roof.
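For readers who want to evaluate Equation 2.1 directly, a minimal sketch is given below. It was written for this review rather than taken from Teale or the later authors, and it simply adds the thrust and rotary terms in the units defined above; the function name and example numbers are invented for illustration.

import math

def drilling_specific_energy(thrust_lbf, torque_in_lb, rpm, penetration_in_per_s, hole_diameter_in):
    """Teale-style specific energy (psi) from drilling parameters, per Equation 2.1.

    thrust_lbf: penetration thrust F (lb)
    torque_in_lb: torque T on the drill string (in*lb)
    rpm: rotational speed (rev/min), converted to rev/s below
    penetration_in_per_s: penetration rate u (in/s)
    hole_diameter_in: hole diameter (in), used to compute the hole area A (in^2)
    """
    area = math.pi * (hole_diameter_in / 2.0) ** 2          # A, in^2
    n_rev_per_s = rpm / 60.0                                 # N, rev/s
    thrust_term = thrust_lbf / area                          # F / A
    rotary_term = (2.0 * math.pi / area) * (n_rev_per_s * torque_in_lb / penetration_in_per_s)
    return thrust_term + rotary_term

# Example with made-up numbers: 1-in hole, 500 lb thrust, 300 in*lb torque, 400 rpm, 0.5 in/s
print(round(drilling_specific_energy(500, 300, 400, 0.5, 1.0)))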
where L is the applied force (in kN), and D is the tip penetration depth (in mm) (1998). These values are recorded throughout the experiment and, when plotted on the same graph, stop at the point of first chipping in the sample. If no chipping occurred, then the reference becomes the penetration depth at a 20 kN load or the associated load at the predetermined depth of 1 mm, using whichever condition is reached first (Szwedzicki, 1998). The Indentation Hardness Index (IHI) was correlated with uniaxial compressive strength to yield an obvious trend, which can be expressed by the equation:

UCS = 3.1∗IHI^1.09        Equation (2.3)

where UCS is in MPa (Szwedzicki, 1998). The simplicity of this test is one of its advantages, requiring few inputs and no extreme testing procedures or tools. Additionally, the variability in its results is comparable to that of other rock strength tests such as the UCS test and the Brazilian Tensile Test (Szwedzicki, 1998). It also reinforces the notion that UCS indices can be obtained from simple tests looking at forces and displacement within the rock.

The final rock strength analysis method to be analyzed in this paper is the scratch test methodology developed by G. Schei and E. Fjær of SINTEF Petroleum Research, based on work conducted by University of Minnesota professor Dr. Emmanuel Detournay. This sedimentary rock testing technique is predicated on continuously logging information about certain cutting parameters of a scratch bit that is equipped with precise kinematic and force controls and is dragged along the length of a core sample that is saddled in a housing (G. Schei et al., 2000). This type of testing, commonly referred to as "scratch testing", shows great promise for providing inexpensive, quick and useful information about strength characteristics of sedimentary rock without the need to destroy the sample in the process (G. Schei et al., 2000).
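As a quick numerical illustration of Equation 2.3 (a short sketch prepared for this discussion, using only the published correlation quoted above; the example IHI value is invented):

def ucs_from_ihi(ihi):
    """Estimate UCS (MPa) from the Indentation Hardness Index via UCS = 3.1 * IHI^1.09."""
    return 3.1 * ihi ** 1.09

# e.g., an IHI of 20 would suggest a UCS of roughly 80 MPa under this correlation
print(round(ucs_from_ihi(20), 1))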
The test works by compiling force and cutting depth values and assessing how they correspond to rock strength properties such as compressive strength and elastic modulus. Primarily, the test controls the depth of cut and the velocity at which the cutting head moves along the core sample (on the order of several mm/s) while monitoring the force applied to the cutting head (G. Schei et al., 2000). The depth of cut and velocity of the cutter are controlled electronically by a computer that sends user inputs about preferred depth and velocity to stepper motors that adjust these parameters. Schei et al. explain that the reason cutting depth is controlled is that for shallow depths of cut, between 0.5 and 2 mm, the rock behaves in a ductile fashion along the leading edge of the cutting surface (2000). When the cutting depth is increased past this range, macro-scale rock failure behavior takes over and the rock begins to fail with larger, more sporadic failures in the form of chipping (Suarez-Rivera et al., 2002). It is in the ductile region that the scratch test is performed.

The value of horizontal force used in the scratch test is essential for computing the value that actually correlates with uniaxial compressive strength, intrinsic specific energy (ε₁₀). This unit is the same as what was mentioned in R. Teale's research and equals the amount of energy needed to remove a given rock volume, in this case by scratching it (Suarez-Rivera et al., 2002). The formula for this value as obtained from Suarez-Rivera's 2002 paper is seen below:

ε₁₀ (psi) = Fₕ / (w∗d)        Equation (2.4)

where Fₕ is the horizontal force average from the tested zone, w is the cutter width and d is the depth of cut (width of cut times depth of cut equals the extracted rock area). Note that the units for Equation 2.4 are equal to those of pressure, which is the unit for intrinsic specific energy (energy per unit volume) where one of the length dimensions from the energy and volume term
Overview of Current Rock Analysis Devices

This scratching technology was brought to patent status by inventors Bertrand Peltier, Emmanuel Detournay (mentioned earlier in scratch test research), and Anthony Booer. In this patent, US 5,323,648 A, a tool with scratching capability, designed to be lowered into a gas well borehole, was described, featuring transducers for measuring forces and scratch depth. The scratchers suggested in the patent are made from polycrystalline diamond, and are imparted into the rock by an unspecified force generating element (US Patent No. 5,323,648 A, 1994). No information regarding the successful development or implementation of this device in a field setting was found.

The Formation Evaluation Tool patent was later referenced for the development of a laboratory core log analysis device for which the patent was awarded to Terratek Inc. of Salt Lake City, Utah. This device, patented under the title "Apparatus for Continuous Measurement of Heterogeneity of Geomaterials", was invented by a team of individuals of which Roberto Suarez-Rivera (author of a technical paper on scratching referenced earlier in this paper) was a member. This device functions by traversing a scratching head under precise kinematic conditions and closely monitoring the resulting force and depth of cut being applied to the scratcher (US 8,234,912 B2, 2012). This device entered development and is used by Schlumberger to do scratch evaluations on cores that they logged for the development of oil and gas wells.

An additional scratching device has already been patented for evaluating borehole wells in situ. The patent is held by Chee Phuat Tan of Kuala Lumpur, Malaysia under the assignment of Schlumberger Technology Corporation. The patent description explains that the device was designed to function by making scratches in the rock mass with powered arms and measuring
Product Design

Design Introduction

The objectives of this design are for the device to identify changing layers of rock by interacting with the wall of a borehole. To take full advantage of all the information that may be contained in the borehole, it is important to try and extract information such as rock strength and changes in rock type that the previously explained tape measure/inspection hole test overlooks. Further, the use of outputs from this device to calculate strength values of the rock layers that it is interacting with is desired.

The design of this device must abide by several constraints in order to be useful in the location in which it is expected to work. The first element is the size constraint: the device must fit in the legally required test holes, which are commonly one inch in diameter. This prevents the complication of drilling another hole in the roof and opens up many old areas to roof analysis. It additionally is constrained to be safe to operate, portable and quick to assemble. It must function in a self-contained manner, based on forces that a user generates, and with no extra power systems being run to it, and it would ideally be permissible for methane air mixtures.

Given the size constraints on any device expected to fit in a test hole, a certain hierarchy was given to each possible rock analysis method so that the best process would be used. Issues of portability remained in the forefront of the design choice, but seeing as it is accomplished on account of the scale of the device's operating conditions, namely the one inch hole, and the widespread availability of transport equipment in most mining settings, maintaining mobility was easy. The acknowledgement of the size of the borehole led to the conclusion that the more of the
device that could exist outside its very limiting size constraints, the better a design choice it would be. Immediately, issues arose when considering the use of the indentation analysis system namely because of the restrictions on size. When the experiments were conducted by Szewedzicki, the large hydraulic cylinder allowed precise displacement control normal to the rock face (1998). Furthermore, they had the potential to generate tens of kilonewtons of force and often had forces of that magnitude for Szewedzicki’s test (1998). This is prohibitive in the field setting because in order to increase the force imparted on the rock surface by hydraulic pressure, the hydraulic cylinder must either increase in area, or the indentation tip must be reduced in diameter. Noting that the hydraulic piston could never get larger than the borehole diameter of one inch (and even in the best conditions would still need to be considerably smaller than that), an enormous amount of hydraulic pressure, on the order of 20,000 psi, would need to be generated to get comparable forces (4000-8000 lbs) seen in Szewedzicki’s 1998 experiment. This would also force the user to have to maintain the hydraulic fluid levels in the system, not impossible, but adding an undesired level of complexity. Additional pitfalls with indentation testing include the safety aspect of working with high pressure fluids as well as the lack of precise pressure control of the hand pumps that would have to be used to generate the necessary pressures. The borehole size also restricts the ability to determine the amount that an indentation tip displaces into a rock surface, which is necessary to derive UCS by Szwedzicki’s methodology (2002). Precise linear displacement transducers that determine such movement are very expensive and one could not be found that could conceivably be expected to fit in a borehole in such a configuration that it would be able to measure relevant displacement. Especially 20
Scratch Head and Scratchers

The size constraints indicated that a scratching mechanism may be the most useful rock analysis method for such a small area. It provides a useful, quick, nondestructive method for determining position-specific (and therefore strata-specific) mine strata characteristics. This categorization would be useful for analyzing the competency of the anchorage layer of roof bolts and cable bolts, as the anchorage depth for these devices is regulated in a mine. Moreover, by having a continuous scratch log of the wall of the borehole, any serious discontinuities may manifest in the data, which would provide useful information about the jointing network in addition to the roof formation's strength characteristics.

The acknowledgement of the merits of the scratch analysis method requires one to consider the way that a scratcher would be inserted into the borehole. Firstly, the scratcher has to constantly apply enough force to the rock face to lodge the scratcher tips deep enough in the rock surface to enter the ductile rock failure phase of scratching. It is within this depth of rock scratching (0.5 to 2 mm) that Schei et al. (2000) explained that scratch tests are valid and that their equations describe rock failure. Insertion of this mechanism into the rock mass would ideally be done in a manner where the scratch tips would be allowed to expand into the surrounding strata after insertion. In other words, this device works by an unobstructed insertion, followed by expansion of the scratch heads, and then a well instrumented, controlled removal of the resistive scratch head wherein the relevant scratching parameters would be monitored. It is under this basic design principle that the Mine Roof Strata Analysis Device (MRSAD) was created (in other literature about this project, the device is referred to as the In-Situ Technical Compression and Hardness Evaluation System, or ITCHES).
The scratch head is the pivotal element of the system and its proper design is essential for the device to function as the theory would require. The scratch head serves as the housing for the scratching mechanism as well as the element that translates forces generated by the pull of the user to the scratcher mechanism and then ultimately to the rock surface. The scratch head is a modified one inch diameter steel rod with a hole drilled the full diameter of the rod perpendicular to its long axis and a 5/8" diameter rod welded to the top to act as a wraparound for the tension cable. The material chosen for the head is stainless steel because of its hardness and resistance to oxidation, which was expected upon use in a moist environment. The following image, Figure 3.1, was taken of the scratch head, with a number three on it, displayed next to a one inch borehole:

Figure 3.1: Scratch Head Immediately Prior to Insertion with Red Box around Side Scratcher Opening and Blue Box around Scratcher Cavity

The hole that the scratcher tips come out of is viewable just to the left of the installer's thumb, highlighted by the red box. This hole, called the side scratcher opening, continues through the housing to the other side of the head. There is a cavity on the bottom of the head, outlined in the blue rectangle, which provides an area to aid in the installation of the scratcher and scratch head.
In the following image, Figure 3.4, this process is shown in a similar fashion to the one seen previously in Figure 3.3:

Figure 3.4: Retraction of Blue Scratcher Arms into Scratch Head Cavity from Downward Force

Take note of the fact that the loop moves downward from an induced force between Figure 3.3 and Figure 3.4, indicating that tension is being put on the loop of the scratcher, causing the tips to retract into the openings of the head. The scratcher loop seen in Figures 3.3 and 3.4 would be pulled down by hand by a loop of wire that runs out the length of the borehole; this added tension pulls the tips of the scratcher into the head housing and the head can then be inserted into the borehole without the resistance of the scratch heads against the strata.

The scratchers are made from ASTM A228 stainless steel music wire of diameters 0.045", 0.051" and 0.055", as all were explored as possible sizes. Music wire was selected because of its hardness and resistance to fatigue, while still having the flexibility to undergo the
necessary deformation to be installed. The wire scratcher does scratch the sides of the borehole upon insertion as is visible in the following figure, Figure 3.5: Figure 3.5: Scratch Demonstration on Sandstone Sample This scratching demonstration was performed on a sample of sandstone in the lab. The lines coming down the side of the hole are from the scratchers housed in the head. No data was collected from this particular scratching test, it served simply to verify that there was the capacity to install the scratcher according to the method of pulling the scratcher loop into the cavity and inserting it in the borehole. The image above indicates this was done a number of times as the areas of rock removal are clearly outlined on the borehole wall. The scratcher is installed by way of a 0.029” pull wire that wraps around the loop at the base of the scratcher in the cavity, and applies tension to the scratcher, causing it to deform into the cavity at the base of the head. A user generated force that pulls on the scratch head is the process by which the scratch head is retracted from the hole and the primary means that energy gets imparted into the rock mass. The scratch head has a loop of steel cable that runs through the body over the top of the 28
threaded bolt, called a connecting bolt, which is welded to the cable. This bolt can be unscrewed to isolate the scratching head and corresponding tension cable as one unit. When the unit is fully installed, with the scratch tips extending into the borehole wall, a coupling at the connecting bolt will provide the linkage for the rest of the tension cable to extend the remainder of the way out of the hole. It is outside the hole where tension is applied by the user to move the scratch head through the rock mass. The interdependence of the borehole, the scratcher and the head that houses it is the principal design feature of the MRSAD testing unit. This means that the instrumentation choices were to be made after the analysis method was chosen as it was the core of the system design. When it was determined that scratch testing was going to be the method of rock analysis, the relevant parameters needed to be fully instrumented. It was known from the 2002 Suarez-Rivera et al. paper that the formula for intrinsic specific energy as a result of rock scratching had three inputs, horizontal scratching force in the numerator with scratch tip width (0.045”, 0.051” and 0.055” diameter scratchers are used in the design) and scratching depth being multiplied in the denominator (Equation 2.4 in this paper). This requires the proper instrumentation of the force in the direction of scratching. Based off the use of pull forces, the movement of the scratch head would directly correspond to the movement of the cable that it attaches to and therefore any pull force in the cable is also applied to the scratch head. It is also important to note that any force imparted on the head detected by instrumentation will be divided over its two points of contact with the borehole wall. 30
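A minimal sketch of how a measured pull force could be reduced to an intrinsic specific energy estimate is shown below. It is an illustration prepared for this chapter, not code from the project: the two-contact-point split and Equation 2.4 come from the text, while the function name and example numbers are assumptions.

def intrinsic_specific_energy_psi(pull_force_lb, scratcher_width_in, cut_depth_in, contact_points=2):
    """Estimate intrinsic specific energy (psi) from a scratch pull, following Equation 2.4.

    pull_force_lb: total pull force measured by the load cell (lb)
    scratcher_width_in: cutter (wire) width w (in), e.g. 0.045, 0.051 or 0.055
    cut_depth_in: depth of cut d (in); the ductile regime of 0.5-2 mm is roughly 0.02-0.08 in
    contact_points: the head bears on the borehole wall at two scratcher tips, so the
                    measured force is assumed to split evenly between them (per the text)
    """
    force_per_scratcher = pull_force_lb / contact_points   # horizontal force on one cutter
    removed_area = scratcher_width_in * cut_depth_in        # w * d, in^2
    return force_per_scratcher / removed_area

# Example with invented values: 10 lb pull, 0.055-in wire, 0.04-in deep groove
print(round(intrinsic_specific_energy_psi(10.0, 0.055, 0.04)))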
Instrumentation

A load cell was constructed to relate the amount of force that is applied to the scratching head. This strain sensing element needs to be durable, accurate, and small enough to fit in the borehole with enough space to allow the presence of necessary signal wires and any additional installation devices. It was decided that strain gauges mounted to a specially designed element would serve well as a force transducer and would provide enough extra space to work with the size constraints.

The design of a strain element is predicated around determining the expected values of strain beforehand based on the predicted load on the element and the elastic properties (Young's Modulus of Aluminum = 69 GPa) of the strained material. If the geometric dimensions of the strain element are known, then forces distributed over the area lead to calculable stresses. These stresses correspond to strain values by way of Hooke's law and elastic moduli. When strain gauges are applied to a strain element, their bond to the material can be rendered ineffective if they are overstrained. It is important to ensure that applied force values are within a range where these damaging levels of strain are not reached. Strain gauges are analog devices which only change their resistance from deformation due to forces, meaning that the smallest useful strain value is limited by the accuracy of the data acquisition system and the environmental noise.

An I-shaped tension rod was designed to translate the tensile forces to the strain gauges. The rod is wider at the ends so that there is enough room for connecting bolts to screw into the tension rod; the rod then thins in the middle to provide a flat surface to which the strain gauges are affixed.
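To show the sizing logic concretely, the sketch below runs the stress and strain arithmetic described above for an assumed thinned-section geometry; the dimensions and example load are invented for illustration and are not taken from the actual tension rod drawings.

def expected_strain(force_lbf, width_in, thickness_in, youngs_modulus_psi=10.0e6):
    """Axial stress and strain in the thinned (gauged) section of a tension rod.

    force_lbf: axial pull transferred through the rod (lb)
    width_in, thickness_in: cross-section of the thinned section (assumed values)
    youngs_modulus_psi: aluminum, ~69 GPa, which is roughly 10.0e6 psi
    Returns (stress_psi, strain) using stress = F/A and Hooke's law, strain = stress/E.
    """
    area = width_in * thickness_in
    stress = force_lbf / area
    strain = stress / youngs_modulus_psi
    return stress, strain

# Example: a 40 lb pull through an assumed 0.25 in x 0.06 in section
stress, strain = expected_strain(40.0, 0.25, 0.06)
print(f"stress = {stress:.0f} psi, strain = {strain*1e6:.0f} microstrain")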
the thickness of this tension element was designed to be wide enough to ensure easy application of the strain gauges but not so thin that it interferes with the durability of the tension member The configuration of the strain gauges was done at the recommendation of strain gauge design guides such as “The Strain Gauge” and “Strain Gauge Configuration Types”, both sources are web documents from leading instrumentation manufacturers that indicate the merits of a full Wheatstone bridge (their source information is available in the references section). This positions two of the four strain gauges parallel to the primary deformation direction and places two that run perpendicular to this deformation, but still in the same plane as the first set of gauges. This full-bridge configuration has the added benefit of automatic temperature compensation (“Strain Gauge Configuration Types”). The strain gauges and installation kit were purchased from Micro Measurements in Raleigh, North Carolina. The strain gauges were the 250BF-EA-13 model at 350Ω resistance each. The wires were connected with 134-AWP Solid Copper Wire included in the GAK-2-AE-10 installation kit, while 22 gauge AWG wire was used on the more exposed parts. The voltage signal coming off the gauges was sent via a 25 foot 4 channel, shielded, braided and jacketed cable (Model 426-BSV) also purchased through Micro Measurements. This cable was then wired to the I/O module on the data acquisition system through the five volt excitation port, ground port and +/- inputs. The tension force transducer had a number of features that protect it from some of the damaging circumstances that it was likely to experience in a borehole setting. Firstly, the profile of the wiring was kept as low as possible, this reduced the outer diameter of the tension member to reduce the likelihood that parts of it would snag on things in the hole or during installation. Secondly, the entire tension device was insulated with 5/8” flexible plastic tubing that was cut 34
Second, the entire tension device was insulated with 5/8" flexible plastic tubing that was cut and then taped to the exterior of the device. The protected strain gauge can be seen in Figure 3.9 below:

Figure 3.9: Force Transducer in Protective Plastic Jacket

The series of interwoven wires coming off the right of the protected load cell are the signal wires from the strain gauges. This covering protected the transducer from shock, abrasion, and puncture, and once it was wrapped in electrical tape it became very robust while still being small enough to fit in the hole. The last design requirement was the use of connecting terminals between the strain gauges. These provided a buffer so that any strain placed on the signal wire that was not absorbed by the other preventative features, such as the taped tubing, did not reach the gauges themselves, which could damage them beyond what could be repaired in a field setting. The method chosen for determining the position of the head in the borehole, as well as observing its velocity through the hole, is a string displacement transducer, otherwise known
as an extensometer. The extensometer chosen for this system is the Unimeasure HX-PA-300-L3M linear position transducer. Extensometers function by coupling a variable resistor to a rotating shaft, with the shaft driven by a string whose position is changing. A voltage change across the resistor, corresponding to a change in position, can be measured by a data acquisition system with a voltage readout. The end of the string on the extensometer was to be attached to the scratching device just below the tension-sensing member, so that any force applied by the winding spool on the extensometer would automatically be accounted for in the force on the scratchers. According to the Unimeasure, Inc. datasheet for this device, the tension on the spool is 2.25 lbs. The range of the device needed to be at least 20', allowing for a seven foot mining roof height as well as a 13' journey up the borehole. A steel wire extensometer made by Unimeasure was chosen with a range of 25', providing a buffer to prevent overdrawing the spool, which would damage the device. The potentiometer in the transducer is a one kΩ, ten-turn resistor with a linear taper, attached to the winding shaft by way of a precision gear. The signal wire coming off the extensometer is ten feet long, which was suitable for attaching it to our data acquisition system (an image of the signal connection between an extensometer and our DAQ can be seen in Figure A.1 in the Appendix) while keeping the more fragile electronics out of the way of the operating area.
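Because the potentiometer has a linear taper, the nominal sensitivity of the extensometer can be estimated directly from the excitation voltage and the cable range. The short sketch below makes that estimate; it assumes the full 5 V excitation spans the full 25 ft of travel, which is a simplification rather than a value taken from the datasheet.

```python
# Nominal extensometer sensitivity, assuming the full excitation voltage
# is swept over the full cable range (a simplification, not a datasheet value).
EXCITATION_V = 5.0   # V, DAQ excitation (from the text)
RANGE_FT = 25.0      # ft, cable range of the chosen extensometer (from the text)

sensitivity_v_per_ft = EXCITATION_V / RANGE_FT
print(f"Nominal sensitivity: {sensitivity_v_per_ft:.3f} V/ft")
# Prints 0.200 V/ft, close to the 0.1802 V/ft slope measured in the
# calibration shown later in Figure 3.17.
```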
Data Acquisition and Management

The data acquisition unit used in this experiment is the National Instruments USB-6211 module. It features a USB bus port that permits configuration with National Instruments LabVIEW software, allowing for easy data acquisition and management. The data acquisition system (DAQ) features an I/O module, on-board five volt excitation for powering laboratory instruments, and ports for analog as well as digital signals. There is no earth ground for this device, only a chassis ground, so all instrumentation cable shielding is grounded separately from the module via a cable that is hooked to a metal anchor. The ground circuit on the chassis simply provides a reference for the excitation voltage; without it, no current flows through the instrument. The device is not MSHA permissible, but it was assumed that the data acquisition system would function similarly to one that was safe for methane-air mixtures and that a suitable data acquisition system could be substituted in the future. The USB-6211 is powered by the USB device it is plugged into, making this project's entire instrumentation system (the strain gauge transducer and extensometer) fully portable when combined with a charged laptop, giving hours of portable use. The LabVIEW software includes a module, called DAQmx (or DAQ Assistant), that contains all the elements needed to collect information and process it electronically. For this project, two channels were configured in DAQmx that allowed separate inputs for both transducers while allowing them to run off the same five volt excitation source.
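The original acquisition was configured graphically in LabVIEW; as a rough illustration of the same two-channel setup, the sketch below uses the nidaqmx Python package, which exposes the same DAQmx driver. The device and channel names, voltage ranges, and sample rate here are assumptions for illustration, not the settings used in the experiment.

```python
# Rough Python/nidaqmx sketch of the two-channel configuration described above.
# Device/channel names, voltage ranges, and sample rate are illustrative
# assumptions, not the settings used in the original LabVIEW program.
import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPLE_RATE = 1000        # samples per second per channel (assumed)
SAMPLES_PER_READ = 1000   # samples returned by each read (assumed)

with nidaqmx.Task() as task:
    # Channel ai0: load cell bridge output (millivolt-level signal).
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0", min_val=-0.2, max_val=0.2)
    # Channel ai1: extensometer wiper voltage (0-5 V from the excitation source).
    task.ai_channels.add_ai_voltage_chan("Dev1/ai1", min_val=0.0, max_val=5.0)
    task.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    # Returns one list of samples per configured channel.
    force_volts, position_volts = task.read(
        number_of_samples_per_channel=SAMPLES_PER_READ)
```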
Mounting and Installation System

A few methods of applying tensile force to the cable were explored. It was first thought that a constant-velocity electric motor would be used, but it was determined that this would be prohibitively expensive and difficult to implement and power in coal mines. The idea of a person simply pulling the device out of the hole by hand was then considered, but the risk of injury and the inability to adequately control pull velocity rendered that idea unusable. This led to the adoption of a winch, which would simultaneously handle the organization of the tension cable as it came out of the borehole and provide a safer, more precise tension generator that does not rely on electricity. The winch selected for this device was a 1500 lb hand-cranked winch made by Torin Big Red Jacks. The spool had a selector that allowed either smooth, uninterrupted coiling or extending, or ratcheting coiling or extending, so that no matter which direction was under load, hazardous and undesired slipping would not occur. A plastic drum was added to the original winding spool to increase the winding diameter to 3.45". This made the winding process faster, which is important considering how long it would take to hand wind the 30' of cable attached to the winch. The increased wind-up rate is due to the fact that each revolution has a larger circumference around which the cable winds, and it also puts the steel cable under less stress as it wraps around the shaft, which is safer, keeps the cable more organized, and prolongs its life. It was also determined that the same 1/8" cable used for the scratch head would be used for the rest of the tension cable.
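A quick calculation shows why the larger drum speeds up retrieval. The sketch below compares the cable retrieved per crank revolution for the 3.45 in drum against a smaller bare spool; the bare-spool diameter used here is an assumed placeholder, since the original spool dimension is not given.

```python
import math

# Cable retrieved per crank revolution for the enlarged 3.45 in drum versus an
# assumed smaller bare spool (the 1.5 in figure is a placeholder, not a
# measured dimension of the original winch spool).
DRUM_DIAMETER_IN = 3.45
BARE_SPOOL_DIAMETER_IN = 1.5   # assumed for comparison
CABLE_LENGTH_FT = 30.0

for name, dia in [("enlarged drum", DRUM_DIAMETER_IN),
                  ("bare spool (assumed)", BARE_SPOOL_DIAMETER_IN)]:
    per_rev_in = math.pi * dia                       # inches of cable per turn
    turns = CABLE_LENGTH_FT * 12.0 / per_rev_in      # turns to wind all 30 ft
    print(f"{name}: {per_rev_in:.2f} in per turn, about {turns:.0f} turns for 30 ft")
```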
In order to ensure a controlled insertion of the signal wires, tension cables, scratch head, scratchers, and force transducer, an installation rod was designed. This rod is simply a 4' long piece of conduit pipe of around 3/4" diameter with a slot cut along its entire length. The top one foot of the device features the majority of the pipe cut away to make a small pushing element. The idea behind this design is that all the wires go through the slot into the center of the pipe, protecting them from pinch points in the hole that could damage them. The cut-away part at the top serves as a platform for the tension transducer (the largest diameter item in the borehole) and gives a tip with which the scratcher can be pushed up the hole. This device can be seen in Figures A.7 and A.8 of the Appendix. Upon full installation of the system in the borehole, the pipe can be removed without pulling any of the elements, with the cable feeding out through the slot cut along its side. This leaves all the cable in the hole without cutting or damaging it while also removing the installation rod. It was determined that, for reasons of stability and consistency, the entire MRSAD system would be affixed to a post that braces itself against the roof and floor of the mine. This reduces the problem of any tension (applied to the cable to remove the scratchers) manifesting itself in ways that would lead to motion of whatever was applying the tension, whether a person, winch, or motor. Considering that mines may require testing in both low and high coal situations, this element needed a good deal of versatility in the variety of roof heights in which it could operate. There are several advantages of the stand that made it well suited for its purpose. It is simple in construction and quite robust, meaning it can be assembled and disassembled quickly and easily (on the order of one minute) and it will not be damaged by being stored in a container. The stand is comprised of a four-post base that keeps it from tipping over, a series of fitted middle sections, and a screw top for fine adjustments. The stand provides a platform to affix other elements of the MRSAD device. The winch and displacement transducer were both attached to the stand, keeping them from moving under the tension put on them, as well
Upon conclusion of the construction of the MRSAD, a testing facility was identified that possessed resources suitable for evaluating some of the features of the device and for seeing whether the theory behind the remote field scratch test was valid. The people at the research facility indicated that they were not able to put a rock sample in the roof mount as was planned originally and that modifications would have to be made to the MRSAD unit so that it retained its function in the horizontal direction. This required slight modification of a few of the elements as well as relying on some useful coincidences with previous design choices. The primary concern was the need to change the winch and extensometer cable direction from vertical to horizontal. Because the winch was designed with several crossbars across its body, the pulley and extensometer wire were simply snaked around one of these crossbars, allowing them to function sideways. In order to compensate for friction on the cables as well as any damage that might be induced by wrapping cables around small diameter cylinders, a larger pipe was fitted to one of the crossbar elements and served very suitably for stress relief. Additionally, the MRSAD tower has the ability to brace against a rock sample horizontally. For this change in orientation, a series of three inch by three inch by two inch pieces of wood were arranged to brace against the base and keep it from sliding toward the direction of the tensile force. In a prior iteration of the design cycle of the MRSAD brace, a 1/4" threaded bolt was put through the frame, and an additional piece of wood drilled with a hole the diameter of that bolt served as a mount for this element.
Calibration

Expected values for the tension needed to remove the scratch unit from its installation were on the order of 20 to 100 lbs. These tension values were estimated by considering the amount of force that a person could generate with their own strength, as the device was constrained to not be externally powered. The first calibration was done with 0, 20.325, 40.325, and 47.110 lbs, which represent the order of magnitude of the forces that a person could generate to pull on the device. Calibration of the force device consisted of taking 1,000,000 samples from the strain transducer as weights of increasing mass were applied to induce tension on the load cell. This sampling was done for four different weights (including zero), three times each. The gauge output can be seen in Figure 3.14 below, which plots voltage output (volts) against applied tension (lbs) with a linear trend line (slope of 5E-07 V/lb, R² = 0.9881):

Figure 3.14: Voltage vs. Applied Tension with Trend Line, Equation and Coefficient of Determination

The trend line and coefficient of determination indicate a linear voltage change with changing force.
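A minimal sketch of this calibration fit is shown below using numpy. The applied weights are those listed above, but the voltage readings are placeholder values standing in for the averaged transducer output, since the raw calibration numbers are not reproduced here.

```python
import numpy as np

# Linear calibration fit of averaged transducer output against applied tension.
# The weights are those used in the first calibration; the voltages are
# placeholder values, not the actual averaged readings from the experiment.
applied_lbs = np.array([0.0, 20.325, 40.325, 47.110])
avg_volts = np.array([0.0e-6, 10.0e-6, 20.0e-6, 24.0e-6])   # placeholders

slope, intercept = np.polyfit(applied_lbs, avg_volts, 1)

# Coefficient of determination for the linear fit.
predicted = slope * applied_lbs + intercept
ss_res = np.sum((avg_volts - predicted) ** 2)
ss_tot = np.sum((avg_volts - avg_volts.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope = {slope:.2e} V/lb, intercept = {intercept:.2e} V, R^2 = {r_squared:.4f}")
```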
There was an error with the data reader that was not recognized until after calibration that caused there to be only one data collection test instead of three for the zero-force point. This means the variance for this force value is not as well known as it is for the other force values. The results of the first calibration, while indicative of the linear deformation of the strain element, did little to indicate the variability of force readings at smaller force changes. This led to further calibration efforts that focused on smaller scale forces (0, 0.506, 1.016, 3.016, and 5.25 lbs). When the device output was analyzed under the smaller scale loads later seen in the tests, the output shown in Figure 3.15 was obtained, which plots voltage output (volts) against applied tension (lbs) with a linear trend line (y = 8E-07x + 1E-06, R² = 0.5996):

Figure 3.15: Voltage vs. Applied Tension with Trend Line, Equation and Coefficient of Determination

Note that the trend line and coefficient of determination are not the same as in the original calibration. Additionally, the voltage at the zero-force level has a dramatically different variance than the rest of the values. If this zero-force voltage output is neglected and the graph properties recalculated, the coefficient of determination returns to its
previous level of precision. This omission of the zero-force point can be justified on the grounds that the tension-sensing member would never actually be measuring forces at the zero-load level; at the very least, it will have the tension of the extensometer applied to it. Additionally, when the full device comes out of the hole, the scratch head will still be attached to the tension transducer and will continue to apply tension as it dangles from the MRSAD unit. The modified graph can be seen in Figure 3.16, which plots voltage output (volts) against applied tension (lbs) with a linear trend line (y = 4E-07x - 9E-08, R² = 0.9893):

Figure 3.16: Voltage vs. Applied Tension - Omitting Zero-Force Values - with Trend Line, Equation and Coefficient of Determination

Note that the fit returns to an almost 99% coefficient of determination among the data. There are differences between the first graph, which calibrates weights between zero and about 45 lbs, and the last one, which covers zero to five lbs. The intercepts can be ignored because they serve only to shift the outputs up and down. If the zero-force value for the graph is set based off a low point on the graph, then the intercept is not important because the force values change linearly. Since the plots from the data acquisition system reference the zero-force values computed from the graphs themselves, the intercepts are ignored. The equation from Figure 3.14 is the most justifiable for
determining the force exerted on the tension member and is rearranged to relate the measured voltage to the force applied to the load cell in Equation 3.1:

Tension Force (lbs) = Voltage × 2,500,000     Equation (3.1)

Note that the voltage input is the voltage from the graph. Equation 3.1 converts the force voltage values from the data to force values. The data were recorded with a scaling factor included, so if further processing of the results from the original experiments is desired, the above value of 2,500,000 should be reduced to 50,000. The same calibration principles were applied to the displacement transducer. The string on the extensometer was drawn out to predetermined lengths and held for 1,000,000 samples, done three times for each length. These samples were averaged, and all three tests were plotted on the graph shown in Figure 3.17, which plots displacement voltage output (volts) against displacement (ft) with a linear trend line (y = 0.1802x - 0.0419, R² = 0.9894):

Figure 3.17: Voltage vs. Displacement with Trend Line, Equation and Coefficient of Determination
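As a minimal sketch of how these two calibration relations convert raw voltages into engineering units (using the factors quoted above; the 50,000 factor applies to the as-recorded, pre-scaled data):

```python
# Convert raw transducer voltages into engineering units using the
# calibration relations quoted above (Equation 3.1 and Figure 3.17).

FORCE_FACTOR = 2_500_000       # lbs per volt (use 50,000 for the as-recorded,
                               # pre-scaled data, as noted in the text)
DISP_SLOPE = 0.1802            # volts per foot, from Figure 3.17
DISP_INTERCEPT = -0.0419       # volts, from Figure 3.17

def tension_force_lbs(voltage):
    """Equation 3.1: tension force from the graph-referenced bridge voltage."""
    return voltage * FORCE_FACTOR

def displacement_ft(voltage):
    """Invert the Figure 3.17 trend line to recover displacement in feet."""
    return (voltage - DISP_INTERCEPT) / DISP_SLOPE

if __name__ == "__main__":
    print(tension_force_lbs(2.0e-5))   # ~50 lbs for a 20 microvolt reading
    print(displacement_ft(0.30))       # ~1.9 ft for a 0.30 V reading
```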
Design Summary

The components of the device consist of a stand, a hand-cranked pulling winch, a position transducer, a load cell, a scratch head, an installation rod, and scratchers. These parts, and the parts from which they are made, are categorized in the bill of materials, shown in Table 3-I:

Table 3-I: Bill of Materials for Formation Evaluation Tool

Tier | Component | Unit | Number | Make/Buy
1 | Mine Roof Strata Analysis Device (MRSAD or ITCHES) | Tool | 1 | Make
2 | Mount | Stand | 1 | Make
3 | Interchangeable Metal Assembly | Stand | 1 | Buy
3 | Adjustable Roof Threaded Bolt | Bolt | 1 | Buy
4 | Brace Plate | Plate | 1 | Make
4 | Bolt Nuts | Nut | 3 | Buy
3 | 1" thick Plastic Mount for Extensometer and Winch | Sheet | 2 | Buy
4 | Bolt | Bolt | 2 | Buy
4 | Nut | Nut | 2 | Buy
2 | Hand Winch | Winch | 1 | Buy
3 | Mounting Bolts | Bolt | 3 | Buy
3 | Spool | Spool | 1 | Make
4 | 3" Plastic Cylinder | Plastic | 1 | Buy
3 | 3/16" Steel Cable | Feet | 35 | Buy
4 | Connecting Bolts | Bolt | 4 | Buy
4 | Crimps | Device | 3 |
2 | Unimeasure HX-PA-300-L3M Position Transducer | Device | 1 | Buy
3 | Signal Wire | Feet | 10 | Buy
2 | Load Cell | Device | 1 | Make
3 | 350 Ohm 250BF-EA-13 Strain Gauge | Gauge | 4 | Buy
3 | 134-AWP Signal Wire | Feet | 10 | Buy
3 | 426-BSV Braided Shielded Signal Wire | Feet | 30 | Buy
3 | 5/8" Diameter Aluminum Rod | Feet | 1 | Buy
3 | GAK-2-AE-10 Strain Gauge Installation Kit | Kit | 1 | Buy
3 | Plastic Tube for Protection | Feet | 1 | Buy
2 | Installation Rod | Device | 1 | Make
3 | 3/4" Conduit Pipe 5' long | Pipe | 1 |
2 | Scratch Head | Device | 1 | Make
3 | Stainless Steel Rod 1" | Feet | 1 | Make
3 | Stainless Steel Rod 5/8" | Feet | 1 | Make
2 | Scratchers | | | Make
3 | 0.045" Music Wire | Spool | 1 | Buy
3 | 0.051" Music Wire | Spool | 1 | Buy
3 | 0.055" Music Wire | Spool | 1 | Buy
2 | USB 6211 Data Acquisition System | Unit | 2 | Buy

The table works on a tier system: the most encompassing aspect of the project is labeled tier one, in this case the Mine Roof Strata Analysis Device. Components in tier two assemble to form tier one items; similarly, tier three assembles into tier two, and tier four assembles into tier three. This allows for easy categorization of what is needed for the device and how the parts come together. The unit section dictates how the product is sold with number
Experiment

Concrete Block Scratch Test

The testing of the device took place at a facility that researches the design of mine roof drilling equipment. This facility was selected because of its close proximity to Virginia Tech as well as the availability of pre-drilled 10,000 psi concrete blocks with river gravel, which had one inch blind holes drilled four feet into the rock. A picture of the full scale concrete block can be seen in Figure A.2 in the Appendix. These blocks came pre-installed with two different rock types that were an artifact of prior research. The presence of these differing rocks in the block matrix was desirable because it means the MRSAD can look for differing rock types in the block material in a manner similar to analyzing the varying strata in a coal mine roof. The goal of this concrete block test was to control the velocity of the scratch head in the borehole and to see whether the force varied with any regularity as a result of scratching and motion. It was thought that an increase in force over sections of the block would indicate a stronger or more resistive rock type and that a decrease in force could indicate a weaker rock type. Controlling the velocity of the scratch head for each test by steady winding of the winch allows the forces measured within a single pull to be compared with one another; if the velocity of each scratcher pull test is held constant, the forces seen at different points of the same attempt can be compared because the head moved at a constant velocity through the hole. The blocks were taken into the research warehouse, and the pre-drilled holes were then examined to see whether they could serve as good hosts for the scratch head by inserting the scratch head into each of the holes on the face. Immediately, the geometry of the holes became problematic because it was difficult to insert the device more than a few inches into the block.
There were aspects of the initial drilling of the holes that gave them an undulating profile along their length, which was difficult for the scratch head to move past. However, holes where at least a foot of depth could be reached were considered promising and were marked for use in testing of the MRSAD. The following image, Figure 4.1, shows the scratch head being installed, with the horizontal installation rod pushing the scratcher into the borehole:

Figure 4.1: Installation of the Scratcher into the Borehole

The MRSAD unit is in the bottom right of the image; it is moved away during installation to allow more room to work and then placed back up against the block for the test. The test holes and their markings can be seen in Figure A.3 and Figure A.5 in the Appendix. During the first two attempts at installation, the device did not install correctly, and it was decided that modifications to the scratch head would help with the process. The facility assistant
This graph shows a distinct increase in force during the motion phase, which is indicative of resistance from the rock. The velocity for the first motion region, between 28 and 51 seconds, is 0.045 ft/sec (0.54 in./sec) with an R² of 0.98 for that interval. The velocity of the second motion phase, between 51 and 82 seconds, is 0.041 ft./sec. (0.49 in./sec.) with an R² of 0.999 over that time window. The region between 40 and 80 seconds shows an applied force value that, although spiking, hovers around the same level while the displacement changes steadily over the course of the hole. This motion corresponds to two velocity regions: the first between about 40 and 50 seconds and the second, slower one between 50 and 80 seconds. The velocity of the scratch head is dictated by the user carefully cranking the winch handle; keeping a consistent, steady pace during operation is needed in order to look for changes in the rock type along constant velocity sections.
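The velocities quoted here come from fitting a straight line to the displacement record over each motion window. A minimal sketch of that calculation is shown below, with placeholder arrays standing in for the recorded time and displacement series:

```python
import numpy as np

def window_velocity(time_s, disp_ft, t_start, t_end):
    """Fit displacement vs. time over [t_start, t_end] and return the slope
    (velocity, ft/s) and the R^2 of the linear fit."""
    mask = (time_s >= t_start) & (time_s <= t_end)
    t, d = time_s[mask], disp_ft[mask]
    slope, intercept = np.polyfit(t, d, 1)
    predicted = slope * t + intercept
    r2 = 1.0 - np.sum((d - predicted) ** 2) / np.sum((d - d.mean()) ** 2)
    return slope, r2

# Placeholder series standing in for the recorded data from the first test.
time_s = np.linspace(0, 90, 901)
disp_ft = np.clip(0.045 * (time_s - 28), 0, None)   # illustrative ramp only

velocity, r2 = window_velocity(time_s, disp_ft, 28, 51)
print(f"velocity = {velocity:.3f} ft/s ({velocity*12:.2f} in/s), R^2 = {r2:.3f}")
```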
It was decided at this point to resume testing in a different hole, hole two, adjacent to hole one. The data readout for this test can be seen in the subsequent image, Figure 4.4:

Figure 4.4: Force and Displacement vs. Time for Third Test in Concrete Block

This test begins with a very distinct spike in force at 15 seconds; this shows the amount of force it took to initiate motion of the scratch head in the borehole, because at that point the displacement profile is flat, indicating that no motion is occurring. The velocity of the first phase of motion, between 15 and 20 seconds, is 0.11 ft./sec. (1.36 in./sec.) with an R² of 0.989 for that interval. The velocity seen in the second part of the motion phase, between 21 and 41 seconds, is 0.055 ft./sec. (0.66 in./sec.) with an R² of 0.996 over that interval. It is often the case that the head will lodge itself in the rock with the
The pull forces for the successful tests (tests one through three) were averaged over the motion phases and arranged in Table 4-I:

Table 4-I: Concrete Block Test Pull Force Averages and Statistics

Scratcher Diameter (in) | Test 1 Avg (lbs) | Test 2 Avg (lbs) | Test 3 Avg (lbs) | Test 1 StDev | Test 2 StDev | Test 3 StDev | Overall Average | Overall Std. Dev.
0.045 | 4.4795 | 3.85 | 10.23 | 3.49 | 3.87 | 10.818 | 6.19 | 2.87

The averages were taken over regions where displacement was occurring by highlighting the dataset with the data selection tool in the MATLAB plot window, assigning it to a variable, and computing its average and standard deviation. The test statistics in the middle of the table show how much the pull force varied as the head traveled the length of the hole for each test. The overall statistics on the right are the average and standard deviation of the three tests' pull force averages. The results of this test demonstrated the successful application of resistive forces to the scratch head and borehole as well as the function of the instrumentation and installation tools. However, there was variability in the test circumstances, especially in controlling variables such as rock composition, scratcher dimensions, and anchorage characteristics, which prevented definitive conclusions. The failure of the mechanism and installation was important because it illuminated weak points in the design. The data readouts from the failed tests can be seen in Figure A.25 and Figure A.26 in the Appendix. The decision was made to reapply the MRSAD device in better regulated circumstances in the lab, which entailed testing it in a one inch inner diameter PVC pipe and a small sandstone block to serve as more homogeneous controls.
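The original averaging was done interactively in MATLAB. A rough Python equivalent of the same operation, selecting a motion window and reporting its mean and standard deviation, is sketched below; the force and time arrays are placeholders standing in for the recorded data, not the experimental values.

```python
import numpy as np

def motion_phase_stats(time_s, force_lbs, t_start, t_end):
    """Mean and standard deviation of the pull force over a motion window."""
    mask = (time_s >= t_start) & (time_s <= t_end)
    window = force_lbs[mask]
    return window.mean(), window.std(ddof=1)

# Placeholder arrays standing in for a recorded force trace.
time_s = np.linspace(0, 90, 901)
force_lbs = 4.5 + 3.5 * np.random.default_rng(0).standard_normal(time_s.size)

mean_force, std_force = motion_phase_stats(time_s, force_lbs, 28, 82)
print(f"mean pull force = {mean_force:.2f} lbs, std dev = {std_force:.2f} lbs")
```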
PVC Pipe Scratch Test

The PVC test served as a platform for comparing differing wire scratcher diameters in a more controlled material, in this case PVC plastic. Two wire scratcher diameters, 0.045 inches and 0.051 inches, were used in this experiment. The test paired the MRSAD unit with a five foot long PVC pipe. The pipe was clamped to a table, and the stand for the MRSAD was placed next to the table with horizontal braces installed. The pipe setup can be seen in Figure 4.6 below:

Figure 4.6: Clamped PVC Pipe with Scratch Head Visible Emerging from Pipe

This image was taken after the scratcher had been set and shows the direction in which the head moves through the pipe as it is drawn toward the far hole. The plastic pipe was longer than the installation rod used for the concrete block test, which meant that installation method was no longer feasible. Instead, the scratch head was dropped down the pipe with the other
cables attached to it, letting gravity pull it down. The scratcher was installed when the head emerged from the other side, as seen in the following image, Figure 4.7:

Figure 4.7: Scratch Head Embedded in PVC Pipe Walls Prior to Extraction

There is a small gap between the head and the PVC pipe, and this gap is thought to be one of the most critical elements of the test. If this annulus is too small, the head is more likely to get stuck in the hole. If it is too wide, the thin scratchers will be bent into the gap and will not engage in the desired scratching behavior. Being able to see the scratcher from the other open end of the PVC pipe removed much of the uncertainty about anchorage behavior, because it could be easily determined that the scratch tips were in contact with the plastic.
the hanging wire. This ensured that cranking could commence as soon as the data acquisition program was initiated, instead of cutting into the data logging time as the spool was wound and reorganized. When the initial force is the same as the traveling force, it is a byproduct of this testing procedure. It was decided that for the third test, the scratch tips would be changed from the 0.045 inch diameter tips to the 0.051 inch diameter tips. With the homogeneous nature of the plastic pipe, more control could be expected over the behavior of the surrounding matrix, and the consequences of changing the diameter of the scratcher wire could be evaluated. A picture showing prepared 0.051" and 0.045" scratch tips lying on a 0.196 inch x 0.196 inch grid is visible in Figure 4.11:

Figure 4.11: Prepared 0.051" and 0.045" Scratchers with One Inch Parallel Lines Simulating Borehole Dimensions

All the scratchers are the same width and all extend 1.1811 inches from tip to tip, which is 0.091 inches wider on each side than a one inch borehole.
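As a quick check, the per-side overhang follows directly from the tip-to-tip length and the nominal one inch borehole diameter:

(1.1811 in - 1.000 in) / 2 ≈ 0.091 in per side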
The tips of the wire scratchers were filed after being cut in an attempt to control the profile of the scratcher surface. Keeping the scratchers at the same cut width, 1.1811 inches, was important for control because it ruled out the width of the scratchers as a variable. Once installed, the pull test was performed on the PVC pipe with the new 0.051" scratchers. Scratching commenced and immediately more resistance was observed, which at this point felt like gouging of the plastic. The clamps were losing their hold, and the pipe had to be held by two people as the head was drawn out. Again, the test time was cut short; the output for the first part of this test is seen in Figure 4.12 below:

Figure 4.12: Force and Displacement vs. Time for First Part of First PVC Test with 0.051" Scratcher

Although the force applied to the scratcher is highly erratic, its velocity stays largely the same (R² for velocity of 0.99) throughout the first part of this test at 0.037 ft./sec. (0.44 in./sec.).
Take note that the large magnitude tensile forces correspond strongly with the motion through the PVC between 10 and 60 seconds. The velocity of the motion over the interval between 12 and 60 seconds is 0.043 ft./sec. (0.516 in./sec.) with an R² of 0.998. Table 4-II shows the averages of the pull forces for the differing scratch tip diameters in the PVC tests:

Table 4-II: PVC Test Pull Force Averages and Statistics for Differing Scratcher Sizes

Scratcher Diameter (in) | Test 1.1 Avg (lbs) | Test 1.2 Avg (lbs) | Test 2 Avg (lbs) | Test 1.1 Std. Dev. | Test 1.2 Std. Dev. | Overall Average | Overall St. Dev.
0.045 | 2.48 | 3.23 | 2.71 | 0.48 | 0.69 | 2.81 | 0.38
0.051 | 39.27 | 33.65 | n/a | 7.35 | 7.69 | 36.46 | 3.97

This table highlights the difference in force between the PVC tests. The tests showed a large change in the magnitude of pull force with an increase in scratch tip diameter: a 2.81 lbs average as opposed to a 36.46 lbs average. The test statistics section in the middle shows how much the forces varied during the motion phase of test one for each scratcher type. The test with the 0.051" scratcher diameter had more force variability over its motion (7.35 and 7.69 lbs) than the 0.045" scratcher diameter (0.48 and 0.69 lbs). The overall statistics on the right are the averages and standard deviations of the pull force averages in the left section of the table. When the scratch head was removed from the hole, it left interesting marks on the inside of the pipe. A pairing of deep cuts corresponding to the diametrically opposed scratch tips traversed the entire length of the pipe's inner surface. The marks from the first two tests (0.045" scratcher diameter) were almost indistinguishable from the existing profile of the pipe, but the third test (0.051" scratcher diameter) left visible cuts in the pipe.
Sandstone Scratch Test

The 0.045", 0.051", and 0.055" wire scratchers were then used in a new test examining their interaction with a small sandstone sample. This was intended to mimic the control of the PVC test while still testing the device on rock. The holes after testing with the 0.045", 0.051", and 0.055" wire scratchers can be seen in the following image, Figure 4.15:

Figure 4.15: Sandstone Holes after Testing the 0.045", 0.051" and 0.055" Wire Scratchers

Markings resulting from contact with the scratchers are visible on the rock surface. The sample was held to the table by a series of clamps and was carefully monitored to ensure that it did not slip during the test. This rock was of a suitable length to obtain a velocity profile and to measure the associated pull force, as can be seen in the next several images. The plots of time, displacement, and force were recorded and graphed for each run of the tests. The results for the first test at every scratcher diameter are displayed below, with the remainder of the graphs available in the Appendix in Figures A.9 through A.24.